Modelling a Video-on-Demand Service over Interconnected LAN and ATM Networks

Kok Soon Thia and Chen Khong Tham
Dept of Electrical Engineering, National University of Singapore
Tel: (65) 874-5095  Fax: (65) 779-1103
Email: engp7448@leonis.nus.edu.sg, eletck@nus.edu.sg

Abstract - High speed networks employing ATM technology are becoming necessary as a result of increased traffic load and the transfer of multimedia information on computer networks. However, there is a large base of machines connected to local area networks (LANs) using Ethernet and FDDI technology. Thus, it is important for ATM networks and LANs to inter-operate. This paper analyses the performance of interconnected LAN and ATM networks under low and high load conditions, using two different scenarios. In particular, we simulate Video-on-Demand (VOD) traffic on the network, and attention is given to analysing the end-to-end delays at each section of the entire network model. It is found that a severe bottleneck occurs over the Ethernet segment, since congestion arises on the Ethernet bus as packets are routed into the Ethernet network. Otherwise, ATM technology provides a very efficient high speed data network.

Figure 1(a) ATM Network (Scenario 1)

1. INTRODUCTION
During recent years, significant advances have been made in communications and computer technologies. Multimedia communications are expected in future high speed networks, where many kinds of media data are carried over ATM networks. In this paper, the network transmission delays incurred as packets are transferred from the server to the client application are studied in detail. The two network scenarios used for our analysis are shown in Figure 1. In scenario one (Figure 1(a)), simulated video traffic, in the form of packet streams, is generated from a VOD server located on an FDDI ring in the NUS subnetwork shown in Figure 1(b). These packets travel around the ring before being routed into an Ethernet LAN.
An Ethernet-ATM gateway located on this LAN receives these packets and converts them into ATM cells before sending them to the first ATM switch. An ATM connection is subsequently established. At the destination, the VOD packet streams are recreated by another router and channelled to the VOD client on the destination Ethernet LAN shown in Figure 1(c). In scenario two, we have a set-up very similar to that of scenario one, except that the NUS subnetwork is replaced by a group of ATM servers (Figure 1(d)), each generating packets for the remote Ethernet subnetworks.

Figure 1(b) NUS subnetwork (source)
Figure 1(c) SingTel and ISS subnetwork (destination)
Figure 1(d) ATM Network (Scenario 2)
Figure 1 The two network scenarios used for simulation.
2. SOFTWARE TOOLS
For our study, we made use of two commercially available software packages. The first is the Optimised Network Engineering Tools (OPNET) package, a comprehensive engineering system capable of simulating large communication networks with detailed protocol modelling and performance analysis. OPNET features include graphical specification of models; a dynamic, event-scheduling simulation kernel; integrated data analysis tools; and hierarchical, object-based modelling [1]. The hierarchical modelling structure allows complicated network problems to be solved by distributed algorithm development. The second tool, the Foreview ATM Network Management system [2], was used to collect VOD traffic characteristics. Foreview includes graphing and logging utilities which allow us to track network usage. Processing of the collected data reveals that the traffic pattern follows a normal distribution with a mean of about 3000 packets/second and a variance of 5850 packets²/second².

3. MODEL IMPLEMENTATION
Because the network models are intended for simulation and performance estimation, certain parts of the protocols have been simplified or omitted. It is important to understand which mechanisms are modelled in order to gauge the level of accuracy of our simulation. We shall discuss the scope and limitations of our model implementations of the ATM, Ethernet and FDDI portions of our network [3].

3.1 ATM
The Asynchronous Transfer Mode (ATM) is a high-bandwidth, low-delay, packet switching and multiplexing technique. ATM makes use of common-channel signalling, with all control signals travelling on the same dedicated virtual channel. ATM allows multiple logical connections to exist on a single physical circuit, and it uses cells with a payload of 48 bytes plus a 5 byte header. Owing to the high reliability of modern high-speed digital networks, there is little overhead in each cell for error control.
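To make the cell format concrete, the following sketch (our own illustration, not part of the OPNET models; the helper name `segment` is hypothetical) splits a packet into 48-byte payloads, each carried behind a 5-byte header:

```python
# Illustrative sketch (not from the simulation models): segmenting a packet
# into ATM cells with 48-byte payloads and 5-byte headers.
CELL_PAYLOAD = 48   # bytes of user data per cell
CELL_HEADER = 5     # bytes of header per cell (VPI/VCI etc. in a real cell)

def segment(packet: bytes) -> list[bytes]:
    """Split a packet into fixed-size 53-byte cells, zero-padding the last payload."""
    cells = []
    for i in range(0, len(packet), CELL_PAYLOAD):
        payload = packet[i:i + CELL_PAYLOAD].ljust(CELL_PAYLOAD, b"\x00")
        header = bytes(CELL_HEADER)  # placeholder header; real fields omitted
        cells.append(header + payload)
    return cells

# A 2048-bit (256-byte) source packet, as in Table 4.1, needs 6 cells:
cells = segment(bytes(256))
print(len(cells), len(cells[0]))  # 6 cells of 53 bytes each
```

The padding in the final cell illustrates why small packets carry proportionally more overhead than the 5-byte header alone suggests.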
3.1.1 Scope and Implementation Limitations
The ATM layer model supports many capabilities found in ATM networks. Dynamic call setup and teardown signalling procedures are provided over subscription-based virtual path connections (VPCs). Traffic control, which includes Call Admission Control (CAC) and Usage Parameter Control (UPC), prevents calls with unsupportable traffic requirements from establishing a connection, and prevents established calls from degrading network service below specified Quality of Service (QoS) specifications. A distributed, dynamic routing capability is also provided.

The ATM layer supports virtual path (VP) and virtual channel (VC) switching. Delay is modelled for the switching process, as well as that due to the switch fabric. The model assumes that VP and VC switching is sufficiently fast to support the maximum rate of arriving cells, so input port buffering is not modelled. However, output port buffering is modelled, since cells can be switched to an output port faster than the port can transmit. For each output port, a buffer is available for cells of each QoS class; its size (in cells) can be specified individually for each switch in our network. The relative priority of each buffer may also be specified.

ATM traffic control functions are explicitly modelled. Call Admission Control (CAC) is based on the Peak Cell Rate (PCR) of a call attempting to be established. The CAC algorithm guarantees that the sum of the PCRs of all virtual channel connections within a VPC is less than the total bandwidth of the VPC. If a new call would cause the sum of the PCRs to exceed that value, the call is not accepted.

The ATM layer model supports distributed, dynamic routing. Each ATM node learns of all other connected ATM nodes and routes to them in a distributed manner; over time, each node updates its routing tables to reflect changes in the network.
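The PCR-based admission rule just described can be sketched in a few lines (a minimal illustration in Python; the class and attribute names are our own, not OPNET's):

```python
# Minimal sketch of PCR-based Call Admission Control: a new virtual channel
# connection is admitted only if the sum of peak cell rates of all admitted
# connections stays within the VPC bandwidth.
class VPC:
    def __init__(self, bandwidth_cells_per_sec: float):
        self.bandwidth = bandwidth_cells_per_sec
        self.admitted_pcrs: list[float] = []

    def admit(self, pcr: float) -> bool:
        """Admit the call only if the total PCR would not exceed VPC bandwidth."""
        if sum(self.admitted_pcrs) + pcr <= self.bandwidth:
            self.admitted_pcrs.append(pcr)
            return True
        return False  # call rejected by CAC

vpc = VPC(bandwidth_cells_per_sec=100_000)
print(vpc.admit(60_000))  # True: fits within the VPC
print(vpc.admit(50_000))  # False: would push the PCR sum past 100,000 cells/s
```

Because admission is judged on peak rather than mean rates, this rule is conservative: it never over-commits the VPC, at the cost of rejecting calls that would usually have fitted.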
Each node attempts to route a call to the destination node one hop at a time. The cost of a route is the sum of the costs of its hops; the cost computation is based on the amount of available bandwidth along the link.

3.2 Ethernet
Ethernet is a bus-based local area network technology that is in common use today. The operation of the local area network is managed by a media access control (MAC) protocol which has been standardised as IEEE 802.3. The role of this MAC protocol is to provide efficient and fair sharing of the communication channel, namely the bus connecting the stations of the LAN. The Ethernet MAC accepts data packets from a higher layer protocol (such as the Internet Protocol) and attempts to transmit them at appropriate times to other stations on the bus. Collisions are an unavoidable part of the Ethernet MAC protocol, and efficient handling of a collision is important. The first important feature of Ethernet is to detect collisions quickly in order to invoke a recovery procedure. Upon detecting a collision, a transmitting station will continue to transmit for a short period of time called the jam time. This ensures that other transmitting stations will also notice the collision. After the jam time, the transmission is aborted entirely. After a short period, the bus should be quiet, since all stations will have noticed the collision and aborted their transmissions. The silence on the bus is noticed by the carrier sensing mechanism in each station, and those stations that have packets to send can begin transmitting. Provision is also made to avoid repeated collisions between the same stations by ensuring that transmissions that collide wait a random amount of time before being retransmitted.

3.2.1 Scope and Implementation Limitations
The Ethernet MAC model implements the carrier sensing, collision detection, and transmission mechanisms specified in the IEEE 802.3 standard. Explicit modelling is performed for all features other than serialisation of bit transfers to and from the physical layer.

3.3 FDDI Ring
The Fibre Distributed Data Interface (FDDI) provides general purpose networking at 100Mbps transmission rates for large numbers of communicating stations configured in a ring topology. Use of ring bandwidth is controlled through a timed token rotation protocol, wherein stations must receive a token and meet a set of timing and priority criteria before transmitting frames.

3.3.1 Scope and Limitations of Implementations
Our models face two main limitations. Firstly, the ring initialisation and recovery processes are not modelled explicitly; the model is therefore mainly useful for obtaining measurements of steady state performance. Secondly, our model makes no attempt to implement the mechanisms related to the detection of damaged frames, or the reporting of errors to the station management entity (SMT). The interface between the media access control (MAC) protocol and SMT is, in fact, not presently incorporated into the implementation of the MAC.
The primary data transfer features of FDDI are modelled explicitly, including synchronous and asynchronous transmission, definable priority levels for asynchronous frames, and restricted tokens. The effects of station latency and propagation delay are also incorporated into the model. Implicit in the parameters listed above is a simplification made in the model: the station latency and inter-station propagation delay are assumed to be uniform across the ring. This is done primarily to accelerate simulation.

4. EXPERIMENTS
The two scenarios described in Section 1 are analysed in this paper, each with simulations run under low and high loading conditions. For the low load condition, two video servers send out traffic, compared with 14 servers under high load. The traffic generation patterns are the same for all video servers. Table 4.1 lists some of the parameters used in both sets of simulations.

Table 4.1 Parameters used in both simulation sets
Source:
  Packet size                      2048 bits
  Call waiting time                0.0 sec
  Call duration                    4,500 sec
  Mean packet generation rate      3000 pkts/sec
  Variance of generation rate      5850 pkts²/sec²
IP layer:
  IP processing rate               5000 pkts/sec
FDDI:
  Channel rate                     100 Mbps
Ethernet:
  Channel rate                     10 Mbps
ATM:
  Channel rate                     155 Mbps
  VP-VC delay                      1×10⁻¹⁰ sec
  VP switching delay               1×10⁻¹¹ sec
  Switch fabric delay              0.0 sec
  Priority scheme                  A
  Class B traffic switch buffer    400 cells
  Routing update frequency         every 15 sec
AAL:
  Type                             5
  Traffic QoS class                B
IP-AAL:
  Maximum data rate                1.55 Mbps
  Inactivity time-out              120 sec

5. RESULTS
We shall now look at some of the simulation results, from which our conclusions are drawn.

5.1 Scenario One
For ease of comparison, the low and high load results for Scenario 1 are referred to as the (a) and (b) figures in the discussion below.

5.1.1 End to End Delays
We shall look only at the steady state ETE delays experienced globally by the packets in the ATM, Ethernet, and FDDI portions of our network model.
From Figures 5.1 and 5.2, we see that the ATM ETE delay and the FDDI ETE delay patterns are very similar under both low and high load conditions. The delays experienced are of the order of 20µs. The Ethernet ETE delays, on the other hand, are about 250µs under low load, and monotonically increasing (in the order of seconds) under high load. These results indicate that the Ethernet contributed most to the total ETE delays of the packets. The monotonically increasing delay under high load is due to the low 10Mbps Ethernet bandwidth, which is insufficient to support the bandwidth demanded when all 14 video servers operate concurrently, each producing 12Mbps of traffic.

Figure 5.1 ATM end-to-end delay under (a) low load and (b) high load.
Figure 5.2 FDDI end-to-end delay under (a) low load and (b) high load.
Figure 5.3 Ethernet end-to-end delay under (a) low load and (b) high load.

Besides the above, comparing Figures 5.3 and 5.4 reveals an interesting fact: the bulk of the Ethernet ETE delay is incurred when packets are sent from the FDDI-Ethernet router to the Ethernet-ATM router over the Ethernet bus in the NUS subnetwork model. This is especially obvious under the high load condition, because congestion arises on the bus under high load. Evidence of this congestion is that under low load, the size of the Ethernet queue in the FDDI-Ethernet router is constant at about 2500 bits, whereas under high load, the queue size increases monotonically with time and is of the order of 1×10⁸ bits (see Figure 5.5).

Figure 5.4 Localised Ethernet end-to-end delay (to reach the Ethernet-ATM router) under (a) low load and (b) high load.
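A back-of-envelope check (our own arithmetic, using the per-server figure quoted above) confirms why the Ethernet queue cannot drain under high load:

```python
# Rough load check for the high-load case: 14 video servers, each quoted
# at about 12 Mbps, all funnelled through a 10 Mbps Ethernet segment.
ETHERNET_MBPS = 10.0
PER_SERVER_MBPS = 12.0
N_SERVERS = 14

offered = N_SERVERS * PER_SERVER_MBPS   # 168 Mbps of offered traffic
overload = offered / ETHERNET_MBPS      # ratio of offered load to capacity
print(f"offered {offered} Mbps, {overload:.1f}x Ethernet capacity")
# Even a single 12 Mbps server exceeds the 10 Mbps channel, so the queue at
# the FDDI-Ethernet router grows without bound and the ETE delay rises
# monotonically, as the traces show.
```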
Figure 5.5 Ethernet queue bit size at the FDDI-Ethernet router under (a) low load and (b) high load.

5.1.2 Cell Loss Ratio in ATM Network
The cell loss ratio in our ATM network model is always zero, whether under low or high load conditions. This suggests that the ATM network is very efficient even under high traffic loading. In fact, the ATM network is very lightly loaded even under the modelled high load condition. However, we also observe that the bottleneck at the Ethernet bus before the ATM network may have limited the amount of traffic carried by the ATM network.

Figure 5.6 Cell loss ratio under (a) low load and (b) high load.

5.2 Scenario Two
We shall now look at the simulation results of our second scenario, which models an ATM-based server generating packets for an Ethernet-based client through our ATM testbed. As in scenario one, we have low and high load conditions generating similar traffic as before. We shall label the charts generated by the low load and high load simulations with suffixes (a) and (b) respectively.

5.2.1 End to End Delays
As in the previous scenario, we shall look at the steady state global ATM and Ethernet end-to-end (ETE) delays experienced by packets as they travel from one network to another. Figure 5.7 shows the ETE delays experienced in the ATM portions of our network.

Figure 5.7 ATM end-to-end delay under (a) low load and (b) high load.

We observe from the two traces above that the ATM ETE delay is very similar (about 75µs) regardless of the loading conditions. This is intuitively correct, since neither loading condition causes congestion in the ATM portions of the network; the ATM network remains lightly loaded even under the high loading condition considered.

Figure 5.8 AAL5 end-to-end delay under (a) low load and (b) high load.

Figure 5.8 shows the ATM ETE delays including the AAL5 segmentation/reassembly delays. The observed delay is about 330µs under both loading conditions.
This result is reasonable because the processing delay is very short under all the loading conditions considered, a consequence of the very fast segmentation/reassembly processor modelled. It is also observed that in both high load simulations, the AAL5 ETE delay is almost identical.
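The congested-router behaviour analysed next follows simple fluid-queue reasoning (our own sketch, using the 16.8 Mbps figure reported in this section): when the arrival rate exceeds the service rate, the backlog grows linearly in time.

```python
# Fluid-queue sketch of the ATM-Ethernet router under sustained overload:
# backlog(t) = (arrival_rate - service_rate) * t when arrivals exceed service.
ARRIVAL_MBPS = 16.8   # traffic routed in from the ATM network
SERVICE_MBPS = 10.0   # Ethernet channel rate

def backlog_bits(t_sec: float) -> float:
    """Queue size in bits after t seconds of sustained overload."""
    growth = max(ARRIVAL_MBPS - SERVICE_MBPS, 0.0) * 1e6  # excess rate, bits/sec
    return growth * t_sec

# After only 15 s of congestion the queue already holds about 1e8 bits,
# consistent with a monotonically increasing queue length and delay.
print(backlog_bits(15.0))
```

This linear-growth model is a simplification (real queues are finite and drop packets), but it explains why the delay traces under congestion rise without levelling off.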
Figure 5.9 Ethernet end-to-end delay under (a) low load and (b) high load.

Figure 5.9 shows the Ethernet ETE delays observed in the congested and non-congested cases, which surfaced when the traffic reached the Ethernet-based destination. When there is no congestion, the Ethernet ETE delay is constant at about 270µs at steady state. Under congested conditions, however, the Ethernet ETE delay increases monotonically (in the order of seconds). To gain a better understanding of the network under congestion, we ran further simulations on the high load model with more local probes on the Ethernet network.

Figure 5.10 Queueing delay at the ATM-Ethernet router under high load with congestion.

Figure 5.10 shows the queueing delay experienced by packets at the ATM-Ethernet router. The similarity of Figures 5.9(b) and 5.10 suggests that the monotonically increasing end-to-end delay experienced on the Ethernet under congestion is contributed largely by the queueing delay at the router. The similarly increasing bit size of the queue at the ATM-Ethernet router, shown in Figure 5.11, also suggests that the congestion results because the low bandwidth of the 10Mbps Ethernet is unable to handle the high traffic volume of 16.8Mbps routed from the ATM network. The bottleneck link is thus the Ethernet bus between the router and the client workstation.

Figure 5.11 Queue length at the ATM-Ethernet router under high load.

5.2.2 Cell Loss Ratio in ATM Network
The cell loss ratio in our ATM network model is always zero, whether under low or high load conditions, as shown in Figure 5.12. This is because the ATM network is lightly loaded in all three sets of simulations run.

Figure 5.12 Cell loss ratio under both low and high load.

6. COMMENTS
Some limitations are present in our investigations. Firstly, our investigations were based on video servers generating packets with identical video traffic patterns.
This assumption is not necessarily true, as video sequences of different quality will generate traffic loads that are not identical. Secondly, our network model used a topology in which a single server transmits packets to only a single destination. This is not generally true, as a real-world video server should be able to serve multiple client applications at the same time. Thirdly, we assumed certain simulation parameters, as listed in Table 4.1. These parameters are not universal and may vary with the equipment used in actual VOD implementations. Nevertheless, our results should be accurate to a large extent. In future work, we plan to refine the network models and their parameterisation in order to improve the realism and accuracy of our simulations. We shall also compare simulation results with empirical results obtained from an ATM protocol analyser.

7. CONCLUSIONS
In the first scenario, the ATM and FDDI portions of the network models contribute end-to-end delays of about 20µs. In the second scenario, the ATM portions of the network models contribute end-to-end delays of about
75µs. These delays are small compared to the contribution from the Ethernet in either scenario, which was observed to be 250µs and 270µs respectively under low load, and monotonically increasing (in the order of seconds) under high load, or congestion. The ATM network is an efficient network: zero cell loss ratio was observed in the simulations, and cells flowing through the network experienced low end-to-end delays. The ATM network is lightly utilised even under the high load conditions considered. We conclude that the Ethernet can be a major bottleneck in interconnected LAN and ATM networks. This bottleneck surfaces during high load conditions due to congestion. The low bandwidth of Ethernet is the cause of this bottleneck in an otherwise efficient ATM-based network model.

REFERENCES
[1] OPNET Manual Vol 2, Modelling. MIL 3, Inc., 1996.
[2] Foreview 4.0 Network Management User's Manual. Fore Systems, 1996.
[3] OPNET Manual Vol 11, Protocol Models. MIL 3, Inc., 1996.