A Comparison of Alternative Encoding Mechanisms for Web Services
Min Cai, Shahram Ghandeharizadeh, Rolfe Schmidt, Saihong Song
Computer Science Department
University of Southern California
Los Angeles, California
{mincai,shahram,rrs,saihong}@usc.edu

Abstract. A web service is a modular application that is published, advertised, discovered, and invoked across a network, i.e., an intranet or the Internet. It is based on a software-as-services model and may participate as a component of other web services and applications. Binary and XML are two popular encoding/decoding mechanisms for network messages. Binary encoding is used when performance is critical, and XML encoding is employed when interoperability with other web services and applications is essential. With each, one may employ compression to reduce message size prior to its transmission across the network. These decisions have a significant impact on response time and throughput. This paper reports on our experiences with a decision support benchmark, TPC-H, using these alternatives on different hardware platforms. We focus on queries and make the following observations. First, compression reduces the message size and enhances the throughput of a shared network. With XML, we present numbers from XMill, a compression technique that employs XML semantics. For queries that produce more than one megabyte of XML data, XMill compressed XML messages are almost always smaller than the Zip compressed Binary messages. While this improves the throughput of a networked environment with a fixed bandwidth, the response time of XMill compressed messages is at times twice that of Zip compressed Binary messages. The processor speed has a significant impact on the observed response times.

1 Introduction

Many organizations envision web services as an enabling component of Internet-scale computing. A web service is either a computation or an information service with a published interface.
Its essence is a remote procedure call (RPC) that consumes and processes some input data in order to produce output data. It is a concept that renders web applications extensible: by identifying each component of a web application as a web service, an organization can combine these web services with others to rapidly develop new web applications. The new web application may consist of web services that span the boundaries of several (if not many) organizations.

1 This research is supported by an unrestricted cash gift from Microsoft Research.
Serialization is an integral component of both web services and network-centric database applications. It is the process of converting objects and their state information into a form appropriate for transmission across the network (and storage in a flat file). With a network connection, the receiving process consumes arriving messages, re-assembles the transmitted objects, and processes them. This process is termed de-serialization. Two common encoding mechanisms are XML and Binary. XML produces human-readable text and is employed when interoperability with other web services and applications is essential. Binary produces streams that are compact and fast to parse, but not human-readable.

The focus of this study is to quantify the performance tradeoff associated with these two alternatives for a decision support benchmark. We do not report on the query execution times and have eliminated them from the presented response times. In addition, we analyze the role of compression in reducing the number of transmitted bytes with each encoding mechanism. With XML, we analyze two compression schemes: the Zip/GZip library and XMill [Liefke & Suciu]. Both employ techniques based on the Lempel-Ziv algorithm [Ziv & Lempel]. Our results demonstrate that without compression, the XML encoder results in message sizes that are at times five times larger than their Binary representation. With Zip, compressed XML messages are at most twice the size of their compressed Binary representation. With XMill, compressed XML messages are at times smaller than their Zip compressed Binary representation. This trend holds true for large messages (more than one megabyte). Otherwise, Zip compressed Binary messages are smaller.

We also investigated two alternative protocols, namely TCP/IP [Postel] and HTTP [Fielding et al.]. Our experiments reveal that these two protocols provide comparable performance.
This is because the Hypertext Transfer Protocol (HTTP) is an application-level protocol that employs a reliable transport protocol. In our experimental configuration, it employs TCP/IP connections. HTTP does transmit more bytes due to its request-response paradigm [Fielding et al.]. However, this overhead is negligible when compared with the message size. We do not investigate the role of HTTP intermediaries, i.e., proxies, gateways, and tunnels. To simplify the discussion, we eliminate HTTP from further consideration. We use the obtained results to develop analytical models that compute the throughput of the system when the network is a shared resource. An analysis of these models reveals that compression can significantly enhance system throughput when the network bandwidth is limited.

The rest of this paper is organized as follows. Section 2 provides an overview of our experimental environment and the obtained results. Section 3 describes analytical models that compute system throughput treating the network as the bottleneck. We offer brief conclusions and future research directions in Section 4.

2 Performance Evaluation

We quantified the performance of alternative transmission protocols using the TPC-H benchmark [Poess & Floyd] because it is a standard that provides documented queries and data sets. This enables others to re-create our experiments. TPC-H includes both retrieval and refresh queries. The refresh commands generate large requests and small
responses. The retrieval queries offer a mix of commands that generate either (a) large requests and small responses, or (b) large requests and large responses. This motivated us to focus on retrieval queries and ignore refresh queries from further consideration. We report on 21 out of 22 queries because we could not implement Query 15 in a timely manner.

Our hardware platform consists of two PCs. One is a server and the other is a client. (We analyze results with PCs configured with three different processor speeds: 450 MHz, 1 GHz, and 2 GHz.) The client and server were connected using a LINKSYS Ethernet (10/100 megabit per second, mbps) switch. Each machine is configured with a 20-gigabyte internal disk, 512 megabytes of memory (unless specified otherwise), and a 100 mbps network interface card. The server is configured with Microsoft Windows 2000 Server, SQL Server 2000, and the Visual Studio .NET Beta 2 release. The client is configured with Microsoft Windows 2000 Professional and the Visual Studio .NET Beta 2 release. The server implements a web service that accepts one TPC-H retrieval query, processes the query using ADO.NET, and returns the obtained results back to the client. The client employs a TPC-H provided component that generates SQL retrieval query strings, invokes the server's web service, and receives the obtained results. The communication between the client and server uses the .NET Remoting framework. For transmission, we use the TCP and HTTP channels provided with the .NET framework. For message formatting we use the SOAP [Box et al.] and Binary formatters provided with the .NET framework. We extended this framework with two new formatters: a) compression using a Zip/GZip library written entirely in C#, and b) the XMill compression scheme [Liefke & Suciu]. XMill employs zlib, the library function for gzip. We modified XMill to consume its input from buffers (instead of a file).
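The compression formatters described above can be pictured as a thin layer wrapped around an existing serializer: compress after encoding, decompress before decoding. The following is a minimal stand-in in Python rather than C#/.NET; the class name, method names, and sample data are illustrative assumptions, not the paper's actual formatter API.

```python
import pickle, zlib

# A sketch of a compression formatter: it layers Lempel-Ziv compression
# (zlib, the same library family as Zip/GZip) over a stock serializer.
class ZipFormatter:
    def serialize(self, obj):
        # encode first, then compress the encoded byte stream
        return zlib.compress(pickle.dumps(obj))

    def deserialize(self, payload):
        # decompress first, then decode
        return pickle.loads(zlib.decompress(payload))

fmt = ZipFormatter()
result_set = [("Customer-%05d" % i, i * 17.25) for i in range(100)]
wire = fmt.serialize(result_set)

assert fmt.deserialize(wire) == result_set    # the round trip is lossless
print(len(pickle.dumps(result_set)), len(wire))
```

Because the layer sits below the formatter interface, the application code is unchanged; only the channel configuration selects which formatter is active, which is how the experiments below switch encoders at runtime.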
This framework configures channels and encoders without requiring modification of the application code. When performing our experiments, our system reconfigures itself at runtime to repeat a query workload while communicating over different channels with different encoders. All of our experiments were conducted in single-user mode with no background load on the underlying hardware. The client was configured to invoke the web service in an asynchronous manner.

2.1 Obtained Results

We used a 1-gigabyte TPC-H database for evaluation purposes. The presented results ignore the query execution time. They pertain only to the encoding, transmission, and decoding times for processing a query. Different TPC-H queries produce different amounts of data. With the Binary formatter, the message sizes vary from a few hundred bytes to a few megabytes. With the XML formatter, the message sizes vary from a few hundred bytes to tens of megabytes. The server produces the largest volume of data for Query 10, approximately 25 megabytes with XML. A compression scheme such as Zip can be applied to both the Binary and XML formatters. XMill compresses XML data using its semantics. Figure 1 shows a comparison of these alternatives using Zip compressed Binary (Zip-Binary) messages as a yardstick. The y-axis of this figure shows the ratio in size between a technique such as XMill-XML and Zip-Binary. For example, with Query 1, Zip-XML messages
are 1.5 times larger than their Zip-Binary counterparts. A y-axis value less than 1.0 implies a smaller message size relative to Zip-Binary. XMill compressed XML (XMill-XML) produces smaller messages than Zip-Binary, e.g., 0.84 times the size with Query 2. In our experiments, in the worst-case scenario, Zip compressed XML (Zip-XML) messages are twice the size of Zip-Binary messages, see Figure 1. In the best case, they are approximately the same size. This is because a loss-less compression technique can effectively compress the repeated XML tags. To illustrate, Figure 2 shows the compression factor for Zip-XML and XMill-XML. In general, the compression factor is higher with XML. With Binary, the compression factor ranges from 1.4 to 5.5. With XML, the Zip compression factor starts at 2.1. With XMill, the compression factor is as high as 26 with Query 16.

Fig. 1. Comparison of XML and Binary message sizes with and without compression (size ratio of Zip-XML and XMill-XML to Zip-Binary, per query)

Fig. 2. Impact of compression on each encoding scheme (compression factor of Zip-XML and XMill-XML, per query)

Figure 1 shows that XMill-XML messages are at times smaller than Zip-Binary messages, e.g., see Query 2. This is because XMill groups data items with related meaning into containers and compresses each independently [Liefke & Suciu]. This column-wise compression is generally better than row-wise compression [Iyer & Wilhite]. At the same time, Figures 1 and 2 show that XMill is not always superior to the Zip compression technique for XML messages, e.g., Queries 1, 4, 5, 6, 7, 8, 12, 13, 14, 17, 18, 19 and 22. Generally speaking, when compared with Zip, XMill is more effective with large messages. The aforementioned queries transmit fewer than 9000 bytes. With the remaining queries, which produce XML messages that are tens of thousands of bytes in size, XMill outperforms Zip.
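XMill's container idea can be illustrated in a few lines. The sketch below uses zlib as the underlying Lempel-Ziv coder (the same library XMill builds on); the table contents are made-up stand-ins for a query result. It compresses the same table once row-wise (values of different kinds interleaved, as in a row-at-a-time serialization) and once column-wise (each "container" holding values of one kind, as XMill groups them).

```python
import zlib

# A made-up three-column table: id, name, balance (as strings, since XMill
# operates on the textual content of the XML document).
ids      = [str(i)                for i in range(5000)]
names    = ["Customer-%05d" % i  for i in range(5000)]
balances = ["%.2f" % (i * 17.25) for i in range(5000)]

# Row-wise: one record per line, kinds of values interleaved.
row_wise = "\n".join(",".join(t) for t in zip(ids, names, balances)).encode()

# Column-wise: three homogeneous containers, concatenated.
col_wise = ("\n".join(ids) + "\n" +
            "\n".join(names) + "\n" +
            "\n".join(balances)).encode()

print(len(zlib.compress(row_wise)), len(zlib.compress(col_wise)))
# grouping similar values together typically yields the smaller output,
# since the coder sees long runs of near-identical strings
```

This is the column-wise versus row-wise effect the paragraph above describes; for tiny messages the per-container overhead can outweigh the gain, which is consistent with Zip beating XMill on the small-message queries.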
Table 1 shows the response time of TPC-H Query 10 with the alternative formatters for three different processor configurations: a) 450 MHz, b) 1 GHz, and c) 2 GHz. In each case, we used two identical PCs with the same processor speed, one as the client and the other as the server. Each machine was configured with 512 megabytes of memory. The network configuration was identical in each case; see the beginning of Section 2. In these experiments, Binary provides the fastest response times. While Zip-Binary transmits fewer bytes, it is slightly slower because it includes the overhead of compressing the message at the transmitter and uncompressing it at the receiver. This observation also applies to a comparison of Zip-XML with XML. These
trends change when the network characteristics are less than ideal, e.g., either network latency or loss rate is high, see [Ghandeharizadeh et al.]. In Table 1, XMill-XML provides the worst response time, which is an inaccurate characterization of XMill. It is inaccurate because of how our experimental environment manages memory in conjunction with Microsoft SQL Server, increasing XMill compression time dramatically. We invoked our modified XMill-XML on the result of Query 10 (without running SQL Server) and observed an average total compression and decompression time of 4.6 seconds. The average total time with Zip-XML is 3.1 seconds. The resulting compressed file is smaller with XMill-XML. Hence, XMill-XML should provide a response time comparable to Zip-XML. We intend to investigate this memory limitation and resolve it in the near future.

Table 1. Response time (milliseconds) of alternative formatters for TPC-H Query 10

             450 MHz    1 GHz     2 GHz
Binary       100,148    74,849    35,655
Zip-Binary   113,187    75,645    37,694
XML          183,…      …,200     64,365
Zip-XML      184,…      …,930     64,276
XMill-XML    296,…      …,278     91,547

Figure 3 shows the speedup observed for each formatter as a function of the processor speed. In each case, the speedup is sub-linear. This is because the memory's clock speed is the same for all configurations. This is important because Query 10 requires the system to read and write a large amount of data for encoding, compression, transmission, decompression and decoding.

Fig. 3. Speedup of each technique as a function of processor (clock) speed (Binary, Zip-Binary, XML, Zip-XML, and XMill-XML against the linear theoretical speedup)
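The sub-linear speedup can be read directly off Table 1. A quick calculation from the Binary row (times in milliseconds):

```python
# Speedup relative to the 450 MHz PC, computed from Table 1's Binary row;
# the clock-speed ratio is the "linear theoretical" line of Figure 3.
times = {450: 100148, 1000: 74849, 2000: 35655}   # clock MHz -> time in ms
base_mhz = 450

for mhz, t in times.items():
    observed = times[base_mhz] / t    # observed speedup over the 450 MHz PC
    linear = mhz / base_mhz           # ideal speedup if purely CPU-bound
    print(mhz, round(observed, 2), round(linear, 2))
# the 2 GHz PC is only about 2.8x faster than the 450 MHz PC, versus a
# clock ratio of about 4.4x: sub-linear, consistent with memory bandwidth
# (unchanged across configurations) limiting this data-intensive workload
```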
3 Analytical Models of Transmission to Estimate Throughput

The results of Section 2 demonstrate the impact of alternative formatters on the number of transmitted bytes per TPC-H query. This can quickly dominate and determine the throughput of the environment in realistic Internet deployments where network bandwidth is limited. Here we present an analytical model to compute upper bounds on the throughput of a system as a function of the fixed bandwidth between a server and multiple clients. Using these models and the experimental results of Section 2, we quantify the throughput of each formatter under different network characteristics. We start with a simple model of throughput. Next, we use a Markov process to model transmission failures. Finally, we apply these models with different system parameters to compare the alternative encoders.

3.1 A Simple Model

In the following discussion, assume a fixed application and communication channel. Moreover, let us assume the average request message size is Q megabits (Mb) and the average response message size is A Mb. The server component resides on a machine that is linked to the network with a bandwidth of B Mb/sec. We use T to denote the system throughput. Its upper bound is defined as:

T ≤ B / (Q + A)   (1)

No matter what improvements are made to the server or the transmission protocol, this theoretical upper bound cannot be surpassed.

3.2 Transmission Failures

Our techniques employ the TCP transmission protocol. This reliable protocol may retransmit packets in order to compensate for packet loss, resulting in increased traffic on the underlying shared network. We use a Markov process to model TCP packet transmission. This model computes the expected number of bits to transmit a message based on failure rates, packet sizes, and available bandwidth. Using this expectation, we derive upper bounds on throughput. We start by explaining our model for environments that transmit a single packet of size P.
Next, we extend this model to a message that consists of N packets. There are three basic states for transmitting a single packet:

1. The packet is sent, but not received. This state is termed t, or transmitted.
2. The packet is sent and received, but the ACK has not been received. This state is termed r, or received.
3. The packet is sent and received, and the ACK has also been received. This is the terminal state.

The state transition diagram of Figure 4 shows these different states assuming: a) every packet has a failure probability λ that is independent of its size, and b) the ACK message has size ε bits.
Fig. 4. The state transition diagram for the three basic states (t, r, and the terminal state)

Each transition is labeled with the probability of the transition and the number of bits transmitted during the transition. Given this diagram we can compute P*, the expected number of bits transmitted over the network before successfully completing the transmission. If no failures occur (with probability (1-λ)²) then a total of P + ε bits are transmitted. When there is a failure, the system restarts with state t. This is described as:

P* = (1-λ)²(P + ε) + λ(P + P*) + λ(1-λ)(P + ε + P*)   (2)

which can be reduced to:

P* = P/(1-λ)² + ε/(1-λ)   (3)

The extension of this to a message that consists of N packets is trivial. We represent this as N transmitted states t0, t1, t2, ..., tN and received states r0, r1, r2, ..., rN. After successfully receiving the ACK for the i-th packet, the system moves to state t(i+1): transmitting the (i+1)-th packet. The new diagram is shown in Figure 5.

Fig. 5. The state transition diagram for the message transmission Markov chain

Note that once we reach state ti, we will never visit state tj for any j < i, motivating the following two propositions:

Proposition 1: The expected number of bits sent over the network to transmit a message of size M with failure rate λ and packet size P is:

M* = M[1/(1-λ)² + ε/(P(1-λ))]   (4)

With this analysis, we employ our simple model to estimate the expected throughput as a function of the failure rate, packet size, and bandwidth:

T(λ, P, B) = (1-λ)B / ((Q + A)[1/(1-λ) + ε/P])   (5)

In fact we can say more about this estimate, see Proposition 2.

Proposition 2: For a workload of W queries, T(λ, P, B) is a good estimate of the expected throughput when the average total message size is M, where M = Q + A. In particular:

T(λ, P, B) ≤ Expected Throughput ≤ T(λ, P, B) + O(1/(WM²))   (6)
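Equations (3)-(5) can be implemented and checked directly. The sketch below is in Python (an assumption; the paper's own code is C#/.NET): the closed forms as functions, a Monte Carlo walk of the Figure 4 chain to validate (3), and a numeric check of the Jensen step behind Proposition 2. All parameter values are illustrative.

```python
import random

def expected_bits_per_packet(P, eps, lam):
    """Equation (3): expected bits to deliver one P-bit packet when each
    transmission, and each eps-bit ACK, fails independently with prob. lam."""
    return P / (1 - lam) ** 2 + eps / (1 - lam)

def expected_bits_per_message(M, P, eps, lam):
    """Equation (4): M* for an M-bit message sent as M/P packets."""
    return M * (1 / (1 - lam) ** 2 + eps / (P * (1 - lam)))

def throughput_bound(lam, P, B, Q, A, eps):
    """Equation (5), i.e. B / M* with M = Q + A; at lam = 0 and eps = 0
    it reduces to the simple bound (1), B / (Q + A)."""
    return (1 - lam) * B / ((Q + A) * (1 / (1 - lam) + eps / P))

def simulate_packet(P, eps, lam, rng):
    """Walk the chain of Figure 4 once and count transmitted bits."""
    bits = 0
    while True:
        bits += P                 # state t: transmit the packet
        if rng.random() < lam:
            continue              # packet lost: restart from t
        bits += eps               # state r: transmit the ACK
        if rng.random() >= lam:
            return bits           # ACK received: terminal state

rng = random.Random(7)
samples = [simulate_packet(8192, 1024, 0.1, rng) for _ in range(20000)]
mean = sum(samples) / len(samples)
print(mean, expected_bits_per_packet(8192, 1024, 0.1))  # close agreement

# The Jensen step of Proposition 2, numerically: for any sample the mean of
# 1/mu is at least 1/(mean of mu), so T lower-bounds the expected throughput.
assert sum(1.0 / s for s in samples) / len(samples) >= 1.0 / mean
```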
Proof: First define µ to be the random variable representing the total number of bytes transmitted to finish the workload. Using Proposition 1, E[µ] = WM*. Now note that we always have µ > 0, and the function Φ(µ) = 1/µ is convex in this range. This allows us to use Jensen's inequality [Folland] to state:

Expected Throughput = E[WB/µ] = WB E[Φ(µ)] ≥ WB Φ(E[µ]) = T(λ, P, B)   (7)

proving that T is a lower bound. To prove the upper bound, we use the Taylor expansion of Φ around the point WM*. Denoting the minimum number of bytes that must be transmitted to complete the workload by µmin, we can find a function α(λ, P) so that the following holds when x > µmin:

Φ(x) ≤ 1/(WM*) - (x - WM*)/(WM*)² + α(λ, P)(x - WM*)²/(WM*)³   (8)

Now write:

Expected Throughput = WB E[Φ(µ)] = WB ΣΦ(i) Pr[µ = i]
  ≤ WB/(WM*) - WB Σ(i - WM*) Pr[µ = i]/(WM*)² + α(λ, P) WB Σ(i - WM*)² Pr[µ = i]/(WM*)³
  = T(λ, P, B) + α(λ, P) WB Var[µ]/(WM*)³
  = T(λ, P, B) + α(λ, P) W²BM Var[p]/(P(WM*)³)
  = T(λ, P, B) + O(1/(WM²))   (9)

The first-order term vanishes because E[µ] = WM*, and Var[µ] = (WM/P) Var[p], where p is a random variable denoting the number of bytes needed to transmit a single packet. This completes the proof of Proposition 2.

3.3 Implications of the Model

This formula has several intuitive implications, and can be used to quantify our basic thesis that when bandwidth is limited, an encoding mechanism that transmits fewer bytes results in a higher throughput. Inspection of the formula for T(λ, P, B) leads to the following basic observation: the throughput bound increases with greater bandwidth, larger packet size, and reduced packet loss rate. We now use these throughput estimates to study the data reported in Section 2. Figures 6-8 plot throughput estimates on the TPC-H query workload using XML, Binary, Zip-Binary, Zip-XML, and XMill-XML. For these figures, we used the average message sizes (request plus response) for the 21 TPC-H queries.
These are as follows: 1.65 MB for the XML formatter, 0.54 MB for the Binary formatter, 0.22 MB for the Zip-XML formatter, 0.21 MB for Zip-Binary, and 0.17 MB for XMill-XML. These plots serve two purposes: first, they show how the throughput bound varies over realistic choices of parameter values; second, they quantify the benefit of the XMill-XML format in a variety of realistic settings.

Figure 6 shows the theoretical upper bounds on throughput for a workload of random TPC-H queries as a function of bandwidth to the server. The bandwidth varies from 56 Kbps to 1 Gbps, and both the x-axis and the y-axis use a log scale. Because the average message size is the same for the TCP and HTTP channels, only the TCP data are displayed. This figure assumes each packet is eight kilobytes, P = 8K, and each
acknowledgement packet is one kilobyte, ε = 1K. In Figure 6, the XMill-XML encoder provides a substantial performance improvement: its throughput is 270 times higher than that of the Binary channel (and 800 times higher than the XML encoder).

Fig. 6. Throughput estimates as a function of server bandwidth for a packet loss rate of 0.01%

Fig. 7. Throughput estimates as a function of packet loss rate for a 10 Mbps link

Figure 7 shows the throughput as a function of the packet loss rate when the server's network bandwidth is limited to 10 Mbps. Once again, XMill-XML provides a higher throughput because it transmits fewer bytes, i.e., packets. This figure also shows that while performance improves with increased reliability, the improvement is marginal with loss rates less than 10%. For example, as we reduce the packet loss rate from 10% to 0.01%, the throughput improves by 21%.

Finally, we studied the impact of packet size on throughput. This study assumes the packet loss rate is independent of the packet size. (The models can be extended to remove this assumption.) Figure 8 shows the throughput of the alternative communication formatters assuming a 10 Mbps connection to the server and a 0.01% packet loss rate. The obtained results demonstrate a higher throughput with a larger packet size. This is because the number of acknowledgements and retransmissions is reduced with larger packet sizes.

Fig. 8. Throughput as a function of packet size for a 10 Mbps link and a 0.01% retransmission rate

Collectively, these figures support the following observation: when a fixed amount of bandwidth is available to a server, one may increase system throughput by either
increasing bandwidth, decreasing message size, or a combination of these two options. For the query workload given by the TPC-H queries, compressing XML messages improves throughput significantly. Furthermore, these savings can be exploited by reconfiguring existing applications to use a new compressed XML formatter with the communication channel.

4 Conclusion and Future Research Directions

This paper presents a performance study of the XML and Binary formatters supported by Microsoft .NET (Beta 2) and the role of compression in reducing the number of transmitted bytes. The obtained results demonstrate a tradeoff between response time and throughput. XMill compressed XML messages might be smaller than compressed Binary messages, resulting in a higher throughput for a shared network with a fixed bandwidth. However, this might result in a higher response time depending on the overhead of compression and network characteristics, see [Ghandeharizadeh et al.].

As a future research direction, we intend to investigate a dynamic framework that decides between XML and compressed XML at run-time. The objective would be to develop a framework that adjusts itself to the evolving characteristics of an environment and the target application, maximizing system performance when XML is essential. It would choose between the alternative compression schemes, namely Zip versus XMill, when bandwidth is scarce. To render intelligent decisions, it considers the performance requirements of a target application, the amount of physical memory, the network characteristics, and the message size. If the network bandwidth is scarce then it must employ a compression scheme. If the message size is large (larger than 1 megabyte) then it should employ XMill.

5 References

1. Box, D., Ehnebuske, D., Kakivaya, G., Layman, A., Mendelsohn, N., Nielsen, H. F., Thatte, S., and Winer, D. Simple Object Access Protocol (SOAP) 1.1. W3C Note 08, May 2000.
2. Fielding, R. T., J. Gettys, J. Mogul, H. Nielsen, L. Masinter, P. Leach, and T. Berners-Lee. Hypertext Transfer Protocol HTTP/1.1. RFC 2616, June 1999.
3. Folland, G. B. Real Analysis: Modern Techniques and Their Applications. Wiley-Interscience, 1999.
4. Ghandeharizadeh, S., C. Papadopoulos, M. Cai, and K. Chintalapudi. Performance of Networked XML-Driven Cooperative Applications. Submitted for publication.
5. Iyer, B., and D. Wilhite. Data Compression Support in Databases. In Proceedings of the 20th International Conference on Very Large Data Bases, 1994.
6. Liefke, H., and D. Suciu. XMill: An Efficient Compressor for XML Data. University of Pennsylvania Technical Report MS-CIS-99-26, 1999.
7. Poess, M., and C. Floyd. New TPC Benchmarks for Decision Support and Web Commerce. ACM SIGMOD Record, Volume 29, Number 4, December 2000.
8. Postel, J. Transmission Control Protocol. RFC 793, September 1981.
9. Ziv, J., and A. Lempel. A Universal Algorithm for Sequential Data Compression. IEEE Trans. Inform. Theory 23, 3 (May 1977), 337-343.
More informationDistributed Systems. Pre-Exam 1 Review. Paul Krzyzanowski. Rutgers University. Fall 2015
Distributed Systems Pre-Exam 1 Review Paul Krzyzanowski Rutgers University Fall 2015 October 2, 2015 CS 417 - Paul Krzyzanowski 1 Selected Questions From Past Exams October 2, 2015 CS 417 - Paul Krzyzanowski
More informationLow Latency via Redundancy
Low Latency via Redundancy Ashish Vulimiri, Philip Brighten Godfrey, Radhika Mittal, Justine Sherry, Sylvia Ratnasamy, Scott Shenker Presenter: Meng Wang 2 Low Latency Is Important Injecting just 400 milliseconds
More informationOSI Layer OSI Name Units Implementation Description 7 Application Data PCs Network services such as file, print,
ANNEX B - Communications Protocol Overheads The OSI Model is a conceptual model that standardizes the functions of a telecommunication or computing system without regard of their underlying internal structure
More informationECE 7650 Scalable and Secure Internet Services and Architecture ---- A Systems Perspective
ECE 7650 Scalable and Secure Internet Services and Architecture ---- A Systems Perspective Part II: Data Center Software Architecture: Topic 3: Programming Models RCFile: A Fast and Space-efficient Data
More informationMore on Conjunctive Selection Condition and Branch Prediction
More on Conjunctive Selection Condition and Branch Prediction CS764 Class Project - Fall Jichuan Chang and Nikhil Gupta {chang,nikhil}@cs.wisc.edu Abstract Traditionally, database applications have focused
More informationHeader Compression Capacity Calculations for Wireless Networks
Header Compression Capacity Calculations for Wireless Networks Abstract Deployment of wireless transport in a data-centric world calls for a fresh network planning approach, requiring a balance between
More information100 Mbps. 100 Mbps S1 G1 G2. 5 ms 40 ms. 5 ms
The Influence of the Large Bandwidth-Delay Product on TCP Reno, NewReno, and SACK Haewon Lee Λ, Soo-hyeoung Lee, and Yanghee Choi School of Computer Science and Engineering Seoul National University San
More informationSession 1: Physical and Web Infrastructure
INFM 603: Information Technology and Organizational Context Session 1: Physical and Web Infrastructure Jimmy Lin The ischool University of Maryland Thursday, September 6, 2012 A brief history (How computing
More informationKarisma Network Requirements
Karisma Network Requirements Introduction There are a number of factors influencing the speed of the Karisma response rates of retrieving data from the Karisma server. These factors include the hardware
More informationWeb Services Security. Dr. Ingo Melzer, Prof. Mario Jeckle
Web Services Security Dr. Ingo Melzer, Prof. Mario Jeckle What is a Web Service? Infrastructure Web Service I. Melzer -- Web Services Security 2 What is a Web Service? Directory Description UDDI/WSIL WSDL
More informationAn algorithm for Performance Analysis of Single-Source Acyclic graphs
An algorithm for Performance Analysis of Single-Source Acyclic graphs Gabriele Mencagli September 26, 2011 In this document we face with the problem of exploiting the performance analysis of acyclic graphs
More informationPerformance comparison of DCOM, CORBA and Web service
Performance comparison of DCOM, CORBA and Web service SeongKi Kim School of Computer Science and Engineering Seoul National University, 56-1 Sinlim, Kwanak Seoul, Korea 151-742 Abstract - The distributed
More informationOutline Computer Networking. TCP slow start. TCP modeling. TCP details AIMD. Congestion Avoidance. Lecture 18 TCP Performance Peter Steenkiste
Outline 15-441 Computer Networking Lecture 18 TCP Performance Peter Steenkiste Fall 2010 www.cs.cmu.edu/~prs/15-441-f10 TCP congestion avoidance TCP slow start TCP modeling TCP details 2 AIMD Distributed,
More informationPCnet-FAST Buffer Performance White Paper
PCnet-FAST Buffer Performance White Paper The PCnet-FAST controller is designed with a flexible FIFO-SRAM buffer architecture to handle traffic in half-duplex and full-duplex 1-Mbps Ethernet networks.
More informationTCP Revisited CONTACT INFORMATION: phone: fax: web:
TCP Revisited CONTACT INFORMATION: phone: +1.301.527.1629 fax: +1.301.527.1690 email: whitepaper@hsc.com web: www.hsc.com PROPRIETARY NOTICE All rights reserved. This publication and its contents are proprietary
More informationAlbis: High-Performance File Format for Big Data Systems
Albis: High-Performance File Format for Big Data Systems Animesh Trivedi, Patrick Stuedi, Jonas Pfefferle, Adrian Schuepbach, Bernard Metzler, IBM Research, Zurich 2018 USENIX Annual Technical Conference
More informationCS Transport. Outline. Window Flow Control. Window Flow Control
CS 54 Outline indow Flow Control (Very brief) Review of TCP TCP throughput modeling TCP variants/enhancements Transport Dr. Chan Mun Choon School of Computing, National University of Singapore Oct 6, 005
More informationThe Data Replication Bottleneck: Overcoming Out of Order and Lost Packets across the WAN
The Data Replication Bottleneck: Overcoming Out of Order and Lost Packets across the WAN By Jim Metzler Jim@Kubernan.Com Background and Goal Many papers have been written on the effect that limited bandwidth
More informationAppendix B. Standards-Track TCP Evaluation
215 Appendix B Standards-Track TCP Evaluation In this appendix, I present the results of a study of standards-track TCP error recovery and queue management mechanisms. I consider standards-track TCP error
More informationInvestigating the Use of Synchronized Clocks in TCP Congestion Control
Investigating the Use of Synchronized Clocks in TCP Congestion Control Michele Weigle (UNC-CH) November 16-17, 2001 Univ. of Maryland Symposium The Problem TCP Reno congestion control reacts only to packet
More informationIntroduction to TCP/IP Offload Engine (TOE)
Introduction to TCP/IP Offload Engine (TOE) Version 1.0, April 2002 Authored By: Eric Yeh, Hewlett Packard Herman Chao, QLogic Corp. Venu Mannem, Adaptec, Inc. Joe Gervais, Alacritech Bradley Booth, Intel
More informationA Review of IP Packet Compression Techniques
A Review of IP Packet Compression Techniques Ching Shen Tye and Dr. G. Fairhurst Electronics Research Group, Department of Engineering, Aberdeen University, Scotland, AB24 3UE. {c.tye, gorry}@erg.abdn.ac.uk
More informationCOMPUTER NETWORKS PERFORMANCE. Gaia Maselli
COMPUTER NETWORKS PERFORMANCE Gaia Maselli maselli@di.uniroma1.it Prestazioni dei sistemi di rete 2 Overview of first class Practical Info (schedule, exam, readings) Goal of this course Contents of the
More informationTo Optimize XML Query Processing using Compression Technique
To Optimize XML Query Processing using Compression Technique Lalita Dhekwar Computer engineering department Nagpur institute of technology,nagpur Lalita_dhekwar@rediffmail.com Prof. Jagdish Pimple Computer
More informationThis project must be done in groups of 2 3 people. Your group must partner with one other group (or two if we have add odd number of groups).
1/21/2015 CS 739 Distributed Systems Fall 2014 PmWIki / Project1 PmWIki / Project1 The goal for this project is to implement a small distributed system, to get experience in cooperatively developing a
More informationMultimedia Streaming. Mike Zink
Multimedia Streaming Mike Zink Technical Challenges Servers (and proxy caches) storage continuous media streams, e.g.: 4000 movies * 90 minutes * 10 Mbps (DVD) = 27.0 TB 15 Mbps = 40.5 TB 36 Mbps (BluRay)=
More informationFramework for replica selection in fault-tolerant distributed systems
Framework for replica selection in fault-tolerant distributed systems Daniel Popescu Computer Science Department University of Southern California Los Angeles, CA 90089-0781 {dpopescu}@usc.edu Abstract.
More informationRD-TCP: Reorder Detecting TCP
RD-TCP: Reorder Detecting TCP Arjuna Sathiaseelan and Tomasz Radzik Department of Computer Science, King s College London, Strand, London WC2R 2LS {arjuna,radzik}@dcs.kcl.ac.uk Abstract. Numerous studies
More informationVoice and Data Session Capacity over Local Area Networks
Voice and ata Session Capacity over Local Area Networks Theresa A. Fry, Ikhlaq Sidhu, Guido Schuster, and Jerry Mahler epartment of Electrical and Computer Engineering Northwestern University, Evanston,
More informationAnalysis of the effects of removing redundant header information in persistent HTTP connections
Analysis of the effects of removing redundant header information in persistent HTTP connections Timothy Bower, Daniel Andresen, David Bacon Department of Computing and Information Sciences 234 Nichols
More informationPerformance evaluation and benchmarking of DBMSs. INF5100 Autumn 2009 Jarle Søberg
Performance evaluation and benchmarking of DBMSs INF5100 Autumn 2009 Jarle Søberg Overview What is performance evaluation and benchmarking? Theory Examples Domain-specific benchmarks and benchmarking DBMSs
More informationWhen two-hop meets VoFi
This full text paper was peer reviewed at the direction of IEEE Communications Society subject matter experts for publication in the IEEE CCC 00 proceedings. When two-hop meets VoFi Sathya arayanan *,
More informationDevelopment and Evaluation of QoS Measurement System for Internet Applications by Client Observation
JOURNAL OF INFORMATION SCIENCE AND ENGINEERING 18, 891-904 (2002) Development and Evaluation of QoS Measurement System for Internet Applications by Client Observation Department of Information Systems
More informationIntroduction to Real-Time Communications. Real-Time and Embedded Systems (M) Lecture 15
Introduction to Real-Time Communications Real-Time and Embedded Systems (M) Lecture 15 Lecture Outline Modelling real-time communications Traffic and network models Properties of networks Throughput, delay
More informationA Simulation: Improving Throughput and Reducing PCI Bus Traffic by. Caching Server Requests using a Network Processor with Memory
Shawn Koch Mark Doughty ELEC 525 4/23/02 A Simulation: Improving Throughput and Reducing PCI Bus Traffic by Caching Server Requests using a Network Processor with Memory 1 Motivation and Concept The goal
More information1 Introduction. 2 Annotations to the Standard. OpenLCB Technical Note. OpenLCB-TCP Transfer
OpenLCB Technical Note OpenLCB-TCP Transfer May 28, 2015 Preliminary 5 1 Introduction This explanatory note contains informative discussion and background for the corresponding OpenLCB-TCP Segment Transfer
More informationChapter 1. Introduction
Chapter 1 Introduction In a packet-switched network, packets are buffered when they cannot be processed or transmitted at the rate they arrive. There are three main reasons that a router, with generic
More informationImpact of bandwidth-delay product and non-responsive flows on the performance of queue management schemes
Impact of bandwidth-delay product and non-responsive flows on the performance of queue management schemes Zhili Zhao Dept. of Elec. Engg., 214 Zachry College Station, TX 77843-3128 A. L. Narasimha Reddy
More informationAn Cross Layer Collaborating Cache Scheme to Improve Performance of HTTP Clients in MANETs
An Cross Layer Collaborating Cache Scheme to Improve Performance of HTTP Clients in MANETs Jin Liu 1, Hongmin Ren 1, Jun Wang 2, Jin Wang 2 1 College of Information Engineering, Shanghai Maritime University,
More informationSummary Cache based Co-operative Proxies
Summary Cache based Co-operative Proxies Project No: 1 Group No: 21 Vijay Gabale (07305004) Sagar Bijwe (07305023) 12 th November, 2007 1 Abstract Summary Cache based proxies cooperate behind a bottleneck
More informationAn Efficient Scheduling Scheme for High Speed IEEE WLANs
An Efficient Scheduling Scheme for High Speed IEEE 802.11 WLANs Juki Wirawan Tantra, Chuan Heng Foh, and Bu Sung Lee Centre of Muldia and Network Technology School of Computer Engineering Nanyang Technological
More informationCCNA 1 Chapter 7 v5.0 Exam Answers 2013
CCNA 1 Chapter 7 v5.0 Exam Answers 2013 1 A PC is downloading a large file from a server. The TCP window is 1000 bytes. The server is sending the file using 100-byte segments. How many segments will the
More informationCOMPUTER NETWORK PERFORMANCE. Gaia Maselli Room: 319
COMPUTER NETWORK PERFORMANCE Gaia Maselli maselli@di.uniroma1.it Room: 319 Computer Networks Performance 2 Overview of first class Practical Info (schedule, exam, readings) Goal of this course Contents
More informationCommunication Networks
Communication Networks Prof. Laurent Vanbever Exercises week 4 Reliable Transport Reliable versus Unreliable Transport In the lecture, you have learned how a reliable transport protocol can be built on
More informationAnalyzing the Receiver Window Modification Scheme of TCP Queues
Analyzing the Receiver Window Modification Scheme of TCP Queues Visvasuresh Victor Govindaswamy University of Texas at Arlington Texas, USA victor@uta.edu Gergely Záruba University of Texas at Arlington
More informationChapter 14 Performance and Processor Design
Chapter 14 Performance and Processor Design Outline 14.1 Introduction 14.2 Important Trends Affecting Performance Issues 14.3 Why Performance Monitoring and Evaluation are Needed 14.4 Performance Measures
More informationCS Project Report
CS7960 - Project Report Kshitij Sudan kshitij@cs.utah.edu 1 Introduction With the growth in services provided over the Internet, the amount of data processing required has grown tremendously. To satisfy
More informationOverview Content Delivery Computer Networking Lecture 15: The Web Peter Steenkiste. Fall 2016
Overview Content Delivery 15-441 15-441 Computer Networking 15-641 Lecture 15: The Web Peter Steenkiste Fall 2016 www.cs.cmu.edu/~prs/15-441-f16 Web Protocol interactions HTTP versions Caching Cookies
More informationReasons not to Parallelize TCP Connections for Fast Long-Distance Networks
Reasons not to Parallelize TCP Connections for Fast Long-Distance Networks Zongsheng Zhang Go Hasegawa Masayuki Murata Osaka University Contents Introduction Analysis of parallel TCP mechanism Numerical
More informationNetwork-Adaptive Video Coding and Transmission
Header for SPIE use Network-Adaptive Video Coding and Transmission Kay Sripanidkulchai and Tsuhan Chen Department of Electrical and Computer Engineering, Carnegie Mellon University, Pittsburgh, PA 15213
More informationENHANCING ENERGY EFFICIENT TCP BY PARTIAL RELIABILITY
ENHANCING ENERGY EFFICIENT TCP BY PARTIAL RELIABILITY L. Donckers, P.J.M. Havinga, G.J.M. Smit, L.T. Smit University of Twente, department of Computer Science, PO Box 217, 7 AE Enschede, the Netherlands
More informationSamKnows test methodology
SamKnows test methodology Download and Upload (TCP) Measures the download and upload speed of the broadband connection in bits per second. The transfer is conducted over one or more concurrent HTTP connections
More informationA Case for Merge Joins in Mediator Systems
A Case for Merge Joins in Mediator Systems Ramon Lawrence Kirk Hackert IDEA Lab, Department of Computer Science, University of Iowa Iowa City, IA, USA {ramon-lawrence, kirk-hackert}@uiowa.edu Abstract
More informationOpen Geospatial Consortium Inc.
Open Geospatial Consortium Inc. Date: 2005-12-16 Reference number of this OGC document: OGC 05-101 Version: 0.0.4 Category: OpenGIS Discussion Paper Editor: David S. Burggraf OWS 3 GML Investigations Performance
More informationImproving TCP Performance in TDMA-based Satellite Access Networks
Improving TCP Performance in TDMA-based Satellite Access Networks Jing Zhu, Sumit Roy {zhuj, roy}@ee.washington.edu Department of Electrical Engineering, University of Washington, Seattle, WA 98105 Abstract-
More informationJoint Entity Resolution
Joint Entity Resolution Steven Euijong Whang, Hector Garcia-Molina Computer Science Department, Stanford University 353 Serra Mall, Stanford, CA 94305, USA {swhang, hector}@cs.stanford.edu No Institute
More informationAn Oracle White Paper April 2010
An Oracle White Paper April 2010 In October 2009, NEC Corporation ( NEC ) established development guidelines and a roadmap for IT platform products to realize a next-generation IT infrastructures suited
More informationQoS Provisioning Using IPv6 Flow Label In the Internet
QoS Provisioning Using IPv6 Flow Label In the Internet Xiaohua Tang, Junhua Tang, Guang-in Huang and Chee-Kheong Siew Contact: Junhua Tang, lock S2, School of EEE Nanyang Technological University, Singapore,
More informationDouble-Take vs. Steeleye DataKeeper Cluster Edition
Double-Take vs. Steeleye DataKeeper Cluster Edition A Competitive Analysis July 2012 Double- Take from Vision Solutions is compared to Steeleye DataKeeper Cluster Edition. A summary of the performance
More informationMaster s Thesis. TCP Congestion Control Mechanisms for Achieving Predictable Throughput
Master s Thesis Title TCP Congestion Control Mechanisms for Achieving Predictable Throughput Supervisor Prof. Hirotaka Nakano Author Kana Yamanegi February 14th, 2007 Department of Information Networking
More informationOn the Transition to a Low Latency TCP/IP Internet
On the Transition to a Low Latency TCP/IP Internet Bartek Wydrowski and Moshe Zukerman ARC Special Research Centre for Ultra-Broadband Information Networks, EEE Department, The University of Melbourne,
More information6.2 DATA DISTRIBUTION AND EXPERIMENT DETAILS
Chapter 6 Indexing Results 6. INTRODUCTION The generation of inverted indexes for text databases is a computationally intensive process that requires the exclusive use of processing resources for long
More informationA Pipelined Memory Management Algorithm for Distributed Shared Memory Switches
A Pipelined Memory Management Algorithm for Distributed Shared Memory Switches Xike Li, Student Member, IEEE, Itamar Elhanany, Senior Member, IEEE* Abstract The distributed shared memory (DSM) packet switching
More informationW3C Workshop on the Web of Things
W3C Workshop on the Web of Things Enablers and services for an open Web of Devices 25 26 June 2014, Berlin, Germany Position Paper by Kheira Bekara, and Chakib Bekara - Centre de de Dveloppement des Technologies
More information[MS-PCCRTP]: Peer Content Caching and Retrieval: Hypertext Transfer Protocol (HTTP) Extensions
[MS-PCCRTP]: Peer Content Caching and Retrieval: Hypertext Transfer Protocol (HTTP) Extensions Intellectual Property Rights Notice for Open Specifications Documentation Technical Documentation. Microsoft
More informationHypertext Transport Protocol HTTP/1.1
Hypertext Transport Protocol HTTP/1.1 Jim Gettys Digital Equipment Corporation, ISBU Visiting Scientist, World Wide Web Consortium 10/17/96 20-Apr-01 1 Acknowledgments HTTP/1.1 Authors Roy Fielding (UCI)
More informationProblem 7. Problem 8. Problem 9
Problem 7 To best answer this question, consider why we needed sequence numbers in the first place. We saw that the sender needs sequence numbers so that the receiver can tell if a data packet is a duplicate
More informationQLIKVIEW SCALABILITY BENCHMARK WHITE PAPER
QLIKVIEW SCALABILITY BENCHMARK WHITE PAPER Hardware Sizing Using Amazon EC2 A QlikView Scalability Center Technical White Paper June 2013 qlikview.com Table of Contents Executive Summary 3 A Challenge
More informationNET ID. CS519, Prelim (March 17, 2004) NAME: You have 50 minutes to complete the test. 1/17
CS519, Prelim (March 17, 2004) NAME: You have 50 minutes to complete the test. 1/17 Q1. 2 points Write your NET ID at the top of every page of this test. Q2. X points Name 3 advantages of a circuit network
More informationAn Improvement of TCP Downstream Between Heterogeneous Terminals in an Infrastructure Network
An Improvement of TCP Downstream Between Heterogeneous Terminals in an Infrastructure Network Yong-Hyun Kim, Ji-Hong Kim, Youn-Sik Hong, and Ki-Young Lee University of Incheon, 177 Dowha-dong Nam-gu, 402-749,
More information