HiPeR-ℓ: A High Performance Reservation Protocol with Look-ahead for Broadcast WDM Networks

Vijay Sivaraman    George N. Rouskas

TR-96-06
May 14, 1996

Abstract

We consider the problem of coordinating access to the various channels of a single-hop WDM network. We present HiPeR-ℓ, a new reservation protocol specifically designed to overcome the potential inefficiencies of operating in environments with non-negligible processing, tuning, and propagation delays. HiPeR-ℓ differs from previous reservation protocols in that each control packet makes reservations for all data packets waiting in a node's queues, thus significantly reducing control overhead. Packets are scheduled for transmission using algorithms that can effectively mask the tuning times. HiPeR-ℓ also uses pipelining to mask processing times and propagation delays. We use Markov chain theory to obtain a necessary and sufficient condition for the stability of the protocol. The stability condition provides insight into the factors affecting the operation of the protocol, such as the degree of load balancing across the various channels, and the quality of the scheduling algorithms. The analysis is fairly general, as it holds for MMBP-like arrival processes with any number of states, and for non-uniform destinations.

Department of Computer Science
North Carolina State University
Raleigh, NC 27695-8206

This work was supported in part by a grant from the Center of Advanced Computing and Communication, North Carolina State University.

1 Introduction

It has long been recognized that Wavelength Division Multiplexing (WDM) is the most promising technology for bridging the gap between the speed of electronics and the virtually unlimited bandwidth available within the optical medium [1]. One of the candidate WDM architectures for implementing a new generation of high speed communication networks is the well known and widely studied single-hop architecture [2]. Single-hop networks are especially appealing because of the fact that, once information is transmitted as light in such a network, it will remain in the optical form until it reaches the destination. In a single-hop network, both a transmitter at the source and a receiver at the destination must operate on the same wavelength for a successful packet transmission. Thus, the problem of coordinating access to the various wavelengths of the network arises. This problem is further complicated by the fact that, in ATM-like local area networks (characterized by very high data rates and very small packet sizes), propagation delays, processing times, and transceiver tuning times all become non-negligible, and may actually be significantly larger than the packet transmission time.

A number of reservation protocols for single-hop networks have appeared in the literature; we review some of these protocols in the next section. In this paper, we present HiPeR-ℓ, a new reservation protocol for coordinating access to the various channels of a single-hop WDM local area network. HiPeR-ℓ is specifically designed to overcome the potential inefficiencies of operating in environments with non-zero processing, tuning, and propagation delays. The novelty of HiPeR-ℓ lies in the fact that, by transmitting a single control packet, nodes can make reservations for multiple data packets. Thus, control overhead is significantly reduced, and nodes can use scheduling algorithms that can effectively mask tuning times [3]. HiPeR-ℓ also uses pipelining to mask processing times and propagation delays; parameter ℓ (the look-ahead) of the protocol controls the degree of pipelining. Drawing upon results from Markov chain theory, we obtain a necessary and sufficient condition for the stability of the protocol that provides insight in the factors affecting the protocol's operation. In the analysis, we assume arrival processes that capture the notion of burstiness and the correlation of interarrival times, two important characteristics of traffic in high speed networks [4].

In the next section, we review some of the media access protocols for single-hop WDM networks, and we motivate the need for a new and radically different protocol. In Section 3 we present the network and traffic model, and in Section 4 we describe HiPeR-ℓ, our new reservation protocol, in detail. In Section 5 we carry out a stability analysis of HiPeR-ℓ based on Markov chain theory. In Section 6 we present some numerical results, and we conclude the paper in Section 7.

2 Why A New Multiple Access Protocol?

Access to the various channels of a single-hop network is usually based on reservation schemes that require the use of one [5, 6, 7, 8, 9, 10], or more [11], control channels. Existing protocols require that control information be transmitted on the control channel for each packet sent on the data channels. Typically, TDMA is employed in the control channel with a control slot consisting of N mini-slots, one for each of the N nodes in the network. In tell-and-go protocols [7, 11] the data packet is sent on the node's home channel immediately after the transmission of the corresponding control information. Thus, receiver collisions may arise and explicit acknowledgments are needed; alternatively, a node may determine if its packet was successfully received by monitoring the control channel and applying the rules used by receivers in selecting one of multiple packets simultaneously sent to them. Other protocols are tell-and-wait in nature [9, 10, 12]; in other words, nodes send the control information and wait for the control slot to reach all receivers. Then, they process the information in the control slot to determine if a data slot has been reserved for them. In the event of a successful reservation, the packet is transmitted in the corresponding slot and channel. In effect, the control slot information in tell-and-wait schemes is used by the individual nodes to build a picture of the packet queues at all other nodes in the network. Decisions about which packets to be transmitted next are taken in a distributed fashion based on protocol-specific rules common to all nodes.

The above protocols suffer from two problems:

- The control channel represents an electronic processing bottleneck [11], as control information for N packets must be received and processed for each packet transmission and reception. In environments with relatively high data rates (on the order of a few hundred Megabits per second or more) and small packet sizes (e.g., 53-byte ATM cells) this processing overhead can be significantly greater than the packet transmission time for anything but networks of trivial size.

- All these control channel protocols operate by scheduling a single packet from each transmitter at a time (typically, the head-of-line packet is the one scheduled for transmission). This packet is scheduled independently of other packets waiting for transmission at the same node. Hence, depending on the protocol, one transmitter or receiver tuning time is incurred for each packet transmission/reception. (Protocols requiring tunability at both ends need to tune both a transmitter and a receiver for a successful transmission.)

This processing and tuning overhead associated with each packet on the data channels severely affects the throughput and delay performance of the network. To get a feeling of the magnitude of this problem, consider a 622 Megabits per second ATM LAN. In such a system, a 1 μs transmitter tuning latency (at the limits of the current state-of-the-art [13]) corresponds to about 1.5 times the ATM cell transmission time. Suppose now that the time needed to process a control slot is equal to one half the cell transmission time. Then, an overhead of two cell times is incurred for each cell transmitted, so that each cell effectively occupies three cell times, bringing the maximum achievable throughput down to 33% (before even considering the inefficiencies of the actual protocol employed!). Channel collisions in tell-and-go protocols and large propagation delays in tell-and-wait protocols further degrade the overall performance of the network.

A protocol that overcomes the processing bottleneck by introducing k > 1 control channels was presented in [11]. The main drawback of this protocol, however, is its lack of scalability, as it requires N + k wavelengths. In fact, almost all control channel protocols require a number of wavelengths at least equal to the number of nodes N. Furthermore, the performance evaluation of some of these protocols does not take into account the processing and tuning times; in some cases it is assumed that tuning latencies are part of the data slot, an unacceptable solution in high speed networks. The PROTON protocol [10] can operate with any number of wavelengths, and its design explicitly considers tuning and processing times. However, PROTON schedules one packet at a time, and the results in [10] confirm the intuition that high processing and tuning times have a significant effect on delay and throughput. Distributed Queue Multiple Wavelength (DQMW), introduced recently in [14], is another protocol that can operate with any number of wavelengths and which considers tuning times when scheduling packets. DQMW attempts to overcome the head-of-line blocking of other media access schemes by considering multiple packets for transmission by a given node. But these packets are scheduled independently of each other, thus a tuning overhead is incurred for each. In addition, this protocol has higher processing requirements compared to other protocols, as two control packets must be sent for each data packet: one before its transmission and one after the end of its transmission. FatMAC [15] is a reservation protocol that does not require a separate control channel. Instead, all channels operate in cycles, with each cycle consisting of a control and a data phase. Reservations are transmitted in the control phase, and the corresponding data packets are sent in the following data phase. As in other protocols, reservations are made only for the head-of-line packets, thus a control and tuning overhead is incurred for each data packet.

As a final observation, the performance analysis of these protocols has been typically carried out assuming uniform traffic and memoryless arrival processes. Similar traffic assumptions were made in performance studies of ATM networks in the late 80s. However, as it was later shown, such assumptions may lead to erroneous results regarding the overall network performance. In order to study correctly the performance of the network, one needs to use traffic models that capture the notion of burstiness and correlation, and which permit non-uniform destinations [4].

In this paper we present HiPeR-ℓ, a new reservation protocol that overcomes the shortcomings of previous protocols. In particular, HiPeR-ℓ has the following important features:

- It is scalable, as it can operate with any number of channels C ≤ N.

- It may operate without a control channel, thus all channels are available for data transmission and no extra hardware is needed to monitor and access a control channel; this feature is especially useful when only a limited number of wavelengths can be supported. (The protocol can be easily adapted to use a control channel for its reservation messages, as described later.) Control packets are transmitted in-band, over the same channels used for data.

- It requires tunability only at one end, and is symmetric, in the sense that it can be easily implemented using either tunable transmitters or tunable receivers. (In contrast, other protocols either require tunability at both ends [10, 14, 11], or are asymmetric, i.e., they can operate only when tunability is provided at a particular end [7].)

- It ensures that packet transmissions are free of channel and receiver collisions.

- It schedules multiple packets for transmission by a node on a given channel using the scheduling algorithms in [3], which mask the tuning latency. In addition, the control requirements of the protocol are very low, since a single control packet can be used to make reservations for a number of data packets.

- It uses pipelining to (a) overlap processing (i.e., the computation of a schedule) with packet transmissions, and (b) hide the effects of propagation delay.

We now introduce the network and traffic model, and then proceed to describe the HiPeR-ℓ protocol.

3 System and Traffic Model

3.1 Network Model

We consider an optical, single-hop WDM network with a passive star physical topology, as shown in Figure 1. Each of the N nodes in the network employs one transmitter and one receiver. The passive star supports C wavelengths, or channels, λ_1, ..., λ_C. (The terms "wavelength" and "channel" will be used interchangeably throughout this paper.) In general, C ≤ N. There is no separate control channel; all channels are used for data transmission, as well as for communicating control information. Without loss of generality, we only consider tunable-transmitter, fixed-receiver (TT-FR) networks; our work can be easily adapted to fixed-transmitter, tunable-receiver systems. Each tunable transmitter can tune to, and transmit on, any wavelength. The fixed receiver at station j, on the other hand, is assigned a home channel λ(j) ∈ {λ_1, ..., λ_C}. Since C ≤ N, a set R_c of receivers may be sharing a single wavelength:

    R_c = \{\, j \mid \lambda(j) = \lambda_c \,\}, \qquad c = 1, \ldots, C    (1)

Figure 1: Network architecture with N nodes and C channels (each node feeds C queues into a tunable laser; fixed optical filters at the receivers are attached to a passive star)

The network is packet-switched, with fixed-size packets. As Figure 1 illustrates, the buffer space at each node is partitioned into C independent queues. (These are logical rather than physical queues, and may be implemented in shared memory for efficiency. This assumption does not pose scalability problems since, in general, the number C of wavelengths will be smaller than the number N of nodes.) Each queue contains packets destined for receivers which listen to a particular wavelength. This arrangement eliminates the head-of-line blocking problem, and permits a node to send a number of packets back-to-back when tuned to a particular channel.

The network operates in a slotted mode, with a slot time equal to a packet transmission time. All nodes are synchronized at slot boundaries. Packets buffered at the c-th queue of each node are transmitted on a FIFO basis into the optical medium on wavelength λ_c. These transmissions take place within appropriate slots which guarantee that the packets will be correctly received by their destination node (more on this in the next subsection). We let integer Δ ≥ 1 denote the number of slots a tunable transmitter takes to tune from one wavelength to another. We also let τ denote the one-way propagation delay between a pair of nodes. Without loss of generality, we take τ to be the same for all source-destination pairs in the network.

3.2 Transmission Schedules

One of the potentially difficult issues that arise in a WDM environment, such as the one described above, is that of coordinating the various transmitters/receivers. Some form of coordination is necessary because (a) a transmitter and a receiver must both be tuned to the same channel for the duration of a packet's transmission, and (b) a simultaneous transmission by one or more nodes on the same channel will result in a collision. The issue of coordination is further complicated by the fact that tunable transceivers need a non-negligible amount of time to switch between wavelengths. For the high-speed environment under consideration, the tuning latency Δ of state-of-the-art tunable lasers or filters may take large integer values [13]. Consequently, approaches that require each tunable transmitter to send a single packet and then switch to a new channel will suffer a high tuning overhead and will result in very low throughput.

The authors have recently considered the problem of packet scheduling with tuning latencies in [3]. This work expands upon, and generalizes, earlier results obtained in [16, 17, 18]. More specifically, we have shown that careful scheduling can mask the effects of arbitrarily long tuning latencies. The key idea is to have each tunable transmitter send a block of packets on each wavelength before switching to the next. Doing so makes it possible to overlap the tuning latency at a node with packet transmissions from other nodes. The main result of [3] was a set of new algorithms for constructing near-optimal (and, under certain conditions, optimal) schedules for transmitting a set of traffic demands {a_ic}. Quantity a_ic represents the number of packets to be transmitted by node i onto channel c. The schedules are such that no collisions ever occur. They are also easy to implement in a high speed environment, since the order in which the various nodes transmit is the same for all channels [3].

Figure 2 illustrates the part of such a schedule corresponding to channel λ_c. As we can see, each node i is assigned a_ic contiguous slots for transmitting packets on that channel. These a_ic slots are followed by a gap of g_ic ≥ 0 slots during which no node may transmit on λ_c. This gap may be necessary to ensure that node i + 1 has sufficient time to tune from wavelength λ_{c-1} before starting transmission on λ_c. However, the algorithms in [3] are such that the number of slots in most of the gaps is equal to either zero or a small integer. Thus, the length of the schedule is very close to the lower bound max_i { Σ_{c=1}^{C} a_ic }.

Figure 2: Part of the schedule corresponding to packet transmissions on channel λ_c (each node i is allocated a_ic contiguous slots followed by a gap of g_ic slots)

Note that the scheduling algorithms we have developed require complete information about the traffic demands {a_ic} among all source-channel pairs. HiPeR-ℓ, described shortly, is a reservation protocol that allows the network nodes to dynamically share this information.

3.3 Traffic Model

The arrival process to each node is characterized by a two-state Markov Modulated Bernoulli Process (MMBP), hereafter referred to as 2-MMBP. This is a Bernoulli process whose arrival rate varies according to a two-state Markov chain. It captures the notion of burstiness and the correlation of successive interarrival times, two important characteristics of traffic in high-speed networks. For details on the properties of the 2-MMBP, the reader is referred to [19]. We note that all of the results presented later can be readily extended to MMBPs with more than two states.

We assume that the arrival process to node i, i = 1, ..., N, is given by a 2-MMBP characterized by the transition matrix Q_i and by A_i, as follows:

    Q_i = \begin{pmatrix} q_i^{(00)} & q_i^{(01)} \\ q_i^{(10)} & q_i^{(11)} \end{pmatrix}
    \quad \text{and} \quad
    A_i = \begin{pmatrix} a_i^{(0)} & 0 \\ 0 & a_i^{(1)} \end{pmatrix}    (2)

In (2), q_i^{(kl)}, k, l = 0, 1, is the probability that the 2-MMBP will make a transition to state l, given that it is currently at state k. Obviously, q_i^{(k0)} + q_i^{(k1)} = 1, k = 0, 1. Also, a_i^{(0)} and a_i^{(1)} are the arrival rates of the Bernoulli process at states 0 and 1, respectively. We assume that the arrival process to each node is given by a different 2-MMBP, independent of the arrival processes to other nodes. From [19] we obtain the average arrival rate of the i-th 2-MMBP as:

    \lambda_i = \frac{ q_i^{(10)} a_i^{(0)} + q_i^{(01)} a_i^{(1)} }{ q_i^{(01)} + q_i^{(10)} }    (3)
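
To make the 2-MMBP concrete, the short Python sketch below generates slot-by-slot arrivals from such a source and compares the empirical fraction of busy slots with the average rate given by (3). The transition probabilities and per-state rates used here are illustrative values chosen for the example, not parameters taken from the paper.

    import random

    def mmbp2_slots(q01, q10, a0, a1, num_slots, seed=0):
        """Slot-by-slot 0/1 arrivals of a two-state MMBP.

        q01 = P(state 0 -> 1), q10 = P(state 1 -> 0);
        a0, a1 = Bernoulli arrival probabilities in states 0 and 1.
        """
        rng = random.Random(seed)
        state = 0
        arrivals = []
        for _ in range(num_slots):
            # A packet arrives in this slot with the rate of the current state.
            rate = a0 if state == 0 else a1
            arrivals.append(1 if rng.random() < rate else 0)
            # State transition at the slot boundary.
            if state == 0:
                state = 1 if rng.random() < q01 else 0
            else:
                state = 0 if rng.random() < q10 else 1
        return arrivals

    # Illustrative (assumed) parameters for one node.
    q01, q10, a0, a1 = 0.05, 0.20, 0.02, 0.30
    slots = mmbp2_slots(q01, q10, a0, a1, num_slots=200_000)
    empirical = sum(slots) / len(slots)
    analytical = (q10 * a0 + q01 * a1) / (q01 + q10)   # equation (3)
    print(f"empirical rate {empirical:.4f}  vs  equation (3): {analytical:.4f}")

The two printed values agree because the modulating chain spends a long-run fraction q10/(q01 + q10) of the slots in state 0 and q01/(q01 + q10) in state 1, which is exactly how (3) weights the two per-state rates.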

We note that λ_i is the probability that any slot contains a packet, regardless of the state of the 2-MMBP. We let r_ij denote the probability that a new packet arriving to node i will have j as its destination node. We will refer to {r_ij} as the routing probabilities. This description implies that the routing probabilities are source node dependent and non-uniformly distributed. Given these assumptions, the probability that a packet arriving to node i will have to be transmitted on channel c is:

    r_{ic} = \sum_{j \in R_c} r_{ij}, \qquad i = 1, \ldots, N; \; c = 1, \ldots, C    (4)

4 Description of the HiPeR-ℓ Protocol

We now present HiPeR-ℓ, a new reservation protocol that nodes in a single-hop WDM network can use to coordinate access to the various channels. The operation of HiPeR-ℓ is rather simple: each network node periodically sends control packets informing all other nodes about its traffic demands. Each node has a copy of the packet scheduling algorithm developed in [3]. Upon receipt of all control packets transmitted by other nodes, each node independently runs the algorithm to determine at what time slots to transmit its own data packets. Since all nodes use the same algorithm and the same input values (obtained from the control packets), no channel or receiver collisions arise.

There are two main differences between HiPeR-ℓ and any of the protocols that have appeared in the literature. First, in HiPeR-ℓ a node does not send a reservation request for its head-of-line packet only. Instead, each control packet of a node i contains information about all the packets that were queued in any of i's C queues at a certain instant in time. By sending a control packet, node i is in effect making reservations for all packets it had waiting for transmission at that instant. The next time node i is scheduled to transmit on wavelength λ_c, it will send a number of data packets back-to-back equal to the number of reservations it made for this channel in the corresponding previous control packet. Secondly, control packets are not transmitted over a separate channel. Reservations are in-band, over the same channels used for data. Furthermore, time in the channels is not divided into distinct reservation and data phases as in FatMAC [15]. Exactly when control packets are transmitted will be discussed shortly.

The operation of the HiPeR-ℓ protocol is motivated by the observation that it is possible to hide the effects of long tuning latencies only by overlapping the tuning time at a node with the transmission of a large number of packets by other nodes [3, 17, 16, 18]. Since the algorithms in [3] are designed to construct schedules of near-optimal length even for long tuning times, high throughput can be achieved by having each node transmit a number of packets back-to-back on each channel before switching to the next. Equally important is the fact that a single control packet carries reservations for multiple packets waiting at each node, significantly reducing the control requirements of the protocol. The next subsection describes a first version of HiPeR-ℓ. We then extend the protocol by introducing pipelining to mask the effects of long propagation delays and processing times.

4.1 The Basic Idea: HiPeR-1

The basic operation of HiPeR-ℓ is illustrated in Figure 3. For reasons that will become apparent shortly, we will refer to this version of the protocol as HiPeR-1. Assume that, somehow, each node i has made reservations for a_ic^(k) data packets on wavelength λ_c, and that these reservations are known to all nodes. Each node independently runs the scheduling algorithm in [3] to compute a packet transmission schedule. However, the input to this algorithm is not the quantities {a_ic^(k)}, but rather the quantities {a_ic^(k) + 1}; the extra slot is for transmitting a control packet (more on this shortly). The algorithm will allocate a_ic^(k) + 1 contiguous slots to node i for transmission to destinations listening on wavelength λ_c. We will call this allocation of slots to source-wavelength pairs a frame. We note that, because of the properties of the scheduling algorithms in [3], (a) the length of the frame in slots will typically be very close (or equal) to the lower bound on the number of slots required to transmit the traffic demands {a_ic^(k) + 1}, and (b) there will be no collisions.

Suppose now that at time t_k in Figure 3 all nodes have constructed the k-th frame from the known quantities {a_ic^(k) + 1}. Transmission of this frame can then begin at time t_k. Consider the a_ic^(k) + 1 slots in the frame allocated to node i for transmissions on channel λ_c. Node i will transmit only a_ic^(k) data packets in these slots (this is the number of data slots it had reserved). In the last slot, node i will transmit a control packet with information about the number of data packets that were in its C queues at the beginning of the frame (i.e., at time t_k), excluding packets it transmits during this frame. In other words, a control packet from node i in frame k carries C integers, a_i1^(k+1), ..., a_iC^(k+1), and is used to make reservations for future transmissions on each channel. An identical copy of the control packet is transmitted by node i on each wavelength, and carries a special address recognized by all receivers in the network. As a result, by the time the last packet of the frame reaches all receivers, each node has complete information (although a bit dated) of the queue status at all nodes. Each node can then use this information to run the scheduling algorithm anew to determine the next frame, as discussed above.

Figure 3: Operation of HiPeR-ℓ when the look-ahead ℓ = 1 (τ = propagation delay; ν = processing time for the computation of a schedule; F_k = transmission time of frame k)

Let F_k be the length, in slots, of the k-th frame; F_k includes the slots required for tuning the transmitters to their initial channels. Referring to Figure 3, we note that at time t_k + F_k + τ all nodes will have access to the control information transmitted in frame k (recall that τ denotes the propagation delay). Let ν denote the time it takes to run the scheduling algorithm to construct the next frame. (One important aspect of the scheduling algorithms in [3] is that their running time depends only on system parameters such as the number of nodes and channels, not on the actual frame length.) At time t_{k+1} = t_k + F_k + τ + ν, the transmission of frame k + 1 may start. At the same time, each node will record the number of packets in each of its C queues, and will use that information for constructing its control packets for frame k + 1. In effect, the value of a_ic in a control packet transmitted in frame k + 1 represents the number of packets that arrived to the c-th queue of node i between time t_k (the beginning of transmission of frame k) and time t_{k+1} (the beginning of transmission of frame k + 1).

As described, the protocol is said to have a look-ahead ℓ = 1, since control information transmitted during the k-th frame is used to construct the (k + 1)-th frame; thus the name HiPeR-1. This protocol falls into the class of gated reservation schemes [20], since only those packets that arrived prior to the beginning of frame k will be transmitted in frame k + 1. The difference between HiPeR-ℓ and traditional reservation protocols (including FatMAC [15]) is that HiPeR-ℓ does not have a distinct reservation phase. Instead, control packets are transmitted within a frame along with data packets. This is necessary in order to minimize the tuning overhead. If there were a separate reservation phase, the transmitters would have to (a) tune to each channel during the reservation phase to transmit a single control message, and (b) tune to each channel during the data phase to transmit the data packets.

In our discussion so far, we have assumed that the size of a control packet is equal to that of a data packet. This is a reasonable assumption for networks with small data packets (e.g., for ATM LANs). Let B be the size of each of the C queues at each node. Since a control packet carries the size of each queue, its length is equal to C log_2 B bits plus the header. (For instance, the 48-byte payload of an ATM cell is sufficient for a 48-channel network if each of the queues at each node has a capacity of B = 256 packets, or for a 64-channel network if each queue has room for 64 packets.) If the size of each data packet is significantly larger than C log_2 B bits, it would be inefficient to use a data slot for transmitting the small amount of control information required. It is possible, however, to overcome this inefficiency as follows. Let L be an integer such that the size of each data packet is L times the size of the control packet, and assume that the unit of time (slot) in the network is the control packet transmission time. When a node makes reservations for a_ic packets, it is allocated L a_ic + 1 slots, which are sufficient for transmitting a_ic data packets and one control packet. Without loss of generality, in the following we only consider the case where control and data packets have the same size.
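
The sizing argument above is easy to check numerically. The following Python sketch computes the control-packet payload C·log_2 B bits and the number of slots L·a_ic + 1 allocated to a reservation when the slot is sized to the control packet. The ATM figures are the ones quoted in the text; the reservation example at the end uses assumed, illustrative values.

    import math

    def control_packet_bits(num_channels: int, queue_capacity: int) -> int:
        """Bits needed to report the occupancy of each of the C queues."""
        return num_channels * math.ceil(math.log2(queue_capacity))

    def slots_for_reservation(a_ic: int, data_bits: int, control_bits: int) -> int:
        """Slots allocated to a_ic data packets plus one control packet,
        when one slot equals one control-packet transmission time."""
        L = math.ceil(data_bits / control_bits)   # data packet = L control packets
        return L * a_ic + 1

    print(control_packet_bits(48, 256))   # 384 bits = the 48-byte ATM payload
    print(control_packet_bits(64, 64))    # also 384 bits
    # Assumed example: a_ic = 5 data packets of 424 bits (53-byte cells),
    # 384-bit control payload (header ignored) -> L = 2, i.e., 11 slots.
    print(slots_for_reservation(5, 424, 384))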

4.2 Masking Processing and Propagation Delays Through Pipelining

Observe in Figure 3 that there are no transmissions in an interval of size τ + ν between the end of frame k (at time t_k + F_k) and the beginning of frame k + 1 (at time t_{k+1}). If the quantity τ + ν is small compared to the average transmission time of a frame, a system running HiPeR-1 will achieve a reasonable throughput. In a high data rate environment, however, processing and propagation delays may be significantly long. As a result, the basic protocol of Figure 3 will experience long idle times with severe effects on overall throughput. We now show how pipelining can solve this problem and keep channel utilization at high levels.

Pipelining can be introduced in the protocol by using values of look-ahead greater than one. Figure 4 illustrates the operation of HiPeR-ℓ when the look-ahead ℓ = 4. Let us consider frame k + 1, whose transmission starts at time t_{k+1}. Control packets transmitted within this frame carry information about the number a_ic of data packets that arrived to the various queues in the interval [t_k, t_{k+1}). However, this information is not used for constructing frame k + 2. As we see in Figure 4, the information carried by the control packets transmitted in frame k + 1 has not been processed until after time t_{k+4}, when frame k + 4 starts. Thus, this information is used to construct frame k + 5, whose transmission starts at time t_{k+5}. In general, we have the following rule: when the look-ahead is ℓ ≥ 1, the control packets of each frame k carry information about the data packets that arrived during the previous frame k - 1. This information is used to construct frame k + ℓ.

Figure 4: Operation of HiPeR-ℓ when the look-ahead ℓ = 4 (τ = propagation delay; ν = processing time for the computation of a schedule; F_k = transmission time of frame k)

As Figure 4 indicates, by selecting an appropriate value for the look-ahead ℓ, we can ensure that a frame is ready for transmission immediately after the end of the previous frame, thus keeping channel utilization at high levels. Let F̄ denote the average frame transmission time. Then, the value of the look-ahead should be selected as

    \ell = 1 + \left\lceil \frac{\tau + \nu}{\bar{F}} \right\rceil    (5)

so that the propagation and processing delay τ + ν is covered by the ℓ - 1 frames transmitted in between. Note, however, that (5) is not sufficient to guarantee that no idling will occur. Because of the stochastic nature of the system, it is possible that, during a relatively long period of time, only a few packets arrive. If, as a result of such behavior, the transmission time of a number of successive frames is smaller than the processing time, then idling will occur. This is due to the fact that control information in a frame cannot be processed until after the schedule based on control packets in the previous frame has been completed. Thus, if a series of very short frames are transmitted, the processing times will dominate, causing some channel idling. There are two ways to overcome this problem. The first, suggested by the authors of PROTON [10], is to employ multiple processing resources at each node so that they can process control information of more than one frame in parallel. Alternatively, one could make sure that the processing time is smaller than the transmission time of the smallest possible frame, one carrying only control packets (N packets per channel). However, even if none of these approaches is possible, we do not expect channel idling to be a problem if the look-ahead ℓ is selected as (5) specifies. This is because, unless the network operates at very low loads, the probability of having multiple consecutive short frames is very low, and thus the propagation and processing times will be overlapped most of the time.

As described, the HiPeR-ℓ protocol incurs an overhead of N × C control packets for each frame transmitted (each node sends one control packet on each wavelength). In terms of efficiency, this overhead is not expected to be a problem except at very low data rates, when a frame may carry a small number of data packets. On the other hand, the advantage of in-band reservation messages over control channel-based reservation schemes is that all available wavelengths can be used to transmit data, and no extra hardware is needed to monitor and access the control channel. However, HiPeR-ℓ can be easily adapted to use out-of-band reservation messages (if this is necessary). In this case, for each frame of data packets a node needs to send exactly one control packet on the control channel (as opposed to one control packet for each data packet, as required by existing protocols). Thus, only a small fraction of the control channel capacity is needed for reservation messages; the remaining capacity can be used for other purposes, such as network management, synchronization, etc.

5 Performance Analysis

An analysis of TDMA schemes in which a node is allocated multiple consecutive slots per frame has been carried out in [21]. There, the generating functions of the queue size and of the delay distribution are derived for fairly general arrival processes. The model in [21] assumes a fixed TDMA frame size, with each node receiving a fixed number of slots occupying the same positions in every frame. Because of the stochastic nature of our system, however, each node will make reservations for, and will be allocated, a different number of slots from frame to frame. Consequently, the frame size will vary. Furthermore, the scheduling algorithm is run anew for each frame; therefore, the order in which the various nodes transmit may be different in consecutive frames. As a result, the techniques developed in [21] are not applicable here. For the same reasons, an exact delay analysis of a system running HiPeR-ℓ appears to be difficult. We note, however, that packet delay is directly related to the frame size. In the following, we carry out a stability analysis of HiPeR-ℓ and obtain a necessary and sufficient condition on the total arrival rate to the network for the frame size to remain bounded. Although in our analysis we assume that the arrival process to each node is described by a 2-MMBP, it can be easily seen that the same condition applies to other MMBP-like processes with a larger number of states.

Before we proceed, we note that there are two factors that directly affect the operation of a network running HiPeR-ℓ: the degree of load balancing across the various channels, and the quality of the scheduling algorithm used. In order to quantify their effect on the performance of the protocol, we define two parameters, as follows:

- Degree of load balancing β_b ≥ 0. Let A_k be the total number of data packets (traffic load) arriving to the network nodes within frame k. Each of these packets will be transmitted on one of the C channels in a future frame. If the load is perfectly balanced across the C channels, each channel will carry exactly A_k / C of these packets. In general, the traffic load will not be perfectly balanced. Parameter β_b is defined so as to provide an upper bound on the number of packets to be carried by any single channel. Specifically, for any frame k, no more than (1 + β_b) A_k / C of the packets arriving during that frame are destined for any given channel. Under perfect load balancing, β_b = 0. The degree of load balancing β_b can be controlled if slowly tunable, rather than fixed, receivers are used. Then, as the traffic pattern changes, the network can be reconfigured, i.e., nodes may be assigned new receive wavelengths, so as to keep the load evenly spread across all channels.

- Scheduling guarantee β_s ≥ 0. Let F̂_k be the lower bound on the length of frame k, based on the data reservations made in a previous frame. Parameter β_s is defined such that the algorithm used to schedule packet transmissions will always construct a frame of length at most (1 + β_s) F̂_k. Under optimal scheduling, β_s = 0.

5.1 Markov Chain Model

Consider a network running HiPeR-ℓ with a look-ahead ℓ ≥ 1. We will call a collection of ℓ + 1 consecutive frames a superframe. Our analysis below is based on the observation that the data packets transmitted within a superframe are exactly those packets that arrived to the various network nodes during the previous superframe.

We analyze the system by constructing its underlying Markov chain embedded at superframe boundaries. We observe the system at an instant just before the beginning of a new superframe. The state of the system is described by the tuple (x, y), where:

- x represents the length, in slots, of the superframe that is about to be transmitted (x = 0, 1, 2, 3, ...);

- y is a vector y = (y_1, ..., y_N), with y_i (y_i = 0, 1; i = 1, ..., N) indicating the state of the arrival process to node i.

As the state of the system evolves in time, it defines a Markov chain M. To see this, let (x, y) be the current state of the system, and (x', y') be the state at the beginning of the next superframe. Obviously, the new state y' of the arrival processes depends only on the current state y and the number of slots x that will elapse. The length x' of the new superframe depends on (a) the number of arrivals during the current superframe and how these packets are distributed across the various channels, (b) the number of control packets to be transmitted within the superframe, and (c) the scheduling algorithm used. The number of arrivals in the current superframe depends only on the state y of the arrival processes at the beginning of the superframe, and its length x. The number of control packets transmitted within a superframe is (ℓ + 1)CN, since the superframe consists of ℓ + 1 individual frames. The scheduling algorithm used is independent of the system state. Therefore, the new length x' also depends only on the current state (x, y).

Let P[(x, y) → (x', y')] denote the probability that the system makes a transition to state (x', y'), given that it is currently in state (x, y). (Given the description of the N 2-MMBPs, the value ℓ of the look-ahead, and the scheduling algorithm, the transition probabilities are completely specified. However, as we shall shortly see, the exact values of these transition probabilities are not necessary in our analysis.) It is now straightforward to verify that Markov chain M is irreducible and aperiodic. Thus, M will have a stationary distribution if we can find scalars {π_(x,y)} that satisfy

    \pi_{(x,y)} = \sum_{(x',y')} \pi_{(x',y')} \, P[(x',y') \to (x,y)], \qquad \forall\, (x, y)    (6)

and such that

    \sum_{(x,y)} \pi_{(x,y)} = 1    (7)

Solving the equations (6) by inspection requires writing out the actual values of the transition probabilities, a complicated task. On the other hand, M is a two-dimensional Markov chain, and it is not possible to apply Pakes' lemma [20, 3A.5] directly to prove that M has a stationary distribution. We observe, however, that only random variable x of the state description of M can take an infinite number of values. Random variable y can take exactly 2^N values. Our approach, therefore, is to construct a one-dimensional Markov chain M' such that M has a stationary distribution if and only if M' has one. Then, we carry out a drift analysis of the new Markov chain M' and apply Pakes' lemma to show that M', and, consequently, M, has a stationary distribution.

5.2 Stability Analysis

First, we note that the random variable y describing the state of the arrival processes changes value at each slot boundary, and defines a Markov chain M_y. This Markov chain is irreducible and aperiodic if the individual Markov chains defined by the various 2-MMBPs have the same properties. Referring to (2), the individual Markov chains are irreducible and aperiodic if the 2-MMBPs are such that:

    q_i^{(kl)} > 0, \qquad k, l = 0, 1; \; i = 1, \ldots, N    (8)

In the following, we only consider 2-MMBPs for which (8) holds. In this case, Markov chain M_y has a stationary distribution, which we will denote by π_y.

Markov chain M is embedded at slot boundaries which are also superframe boundaries, while chain M_y is defined at all slot boundaries. Embedding Markov chain M is equivalent to observing chain M_y at random slots as specified by random variable x. Thus, the probability of finding the arrival processes at state y at the embedded points is equal to the steady-state probability π_y, regardless of the value of the random variable x. In other words, if P[y | x] denotes the conditional probability that the state of the arrival processes is y given that the size of the superframe is x, we have that:

    P[y \mid x] = \pi_y    (9)

We now construct a new Markov chain M', embedded at superframe boundaries, by aggregating all states of M having the same value for random variable x. Let P_{x,x'} denote the transition probability from state x to state x' in M'. The probabilities P_{x,x'} are defined in terms of the transition probabilities of M as follows:

    P_{x,x'} = \sum_{y} \pi_y \left( \sum_{y'} P[(x,y) \to (x',y')] \right)    (10)

The inner sum in (10) is the probability that the system makes a transition to a state in which the superframe size is x' (regardless of the state of the arrival processes), given that the system is currently in state (x, y). The outer sum in (10) unconditions on the current state y of the arrival processes to obtain P_{x,x'}. Markov chain M' has a stationary distribution if there exist scalars {π_x} that satisfy

    \pi_x = \sum_{x'} \pi_{x'} P_{x',x}, \qquad \forall\, x    (11)

and

    \sum_{x} \pi_x = 1    (12)

We now prove the following lemma.

Lemma 5.1 Markov chain M has a stationary distribution if and only if Markov chain M' also has a stationary distribution.

Proof. In the forward direction, suppose that there exist positive scalars {π_(x,y)} that satisfy (6) and (7). It is straightforward to verify that the scalars

    \pi_x = \sum_{y} \pi_{(x,y)}, \qquad \forall\, x    (13)

satisfy (11) and their sum equals one. In the reverse direction, suppose that M' has a stationary distribution {π_x} such that (11) and (12) are satisfied. Again, it is straightforward to verify that the scalars

    \pi_{(x,y)} = \pi_x \, \pi_y    (14)

satisfy both (6) and (7).  □

We are now ready to prove our main result.

Lemma 5.2 Markov chain M' has a stationary distribution if and only if

    \lambda < \frac{C}{(1 + \beta_b)(1 + \beta_s)}    (15)

where λ = Σ_{i=1}^{N} λ_i is the total arrival rate to the network.

Proof. Let D_x denote the drift at state x of M'. Because of Pakes' lemma [20, 3A.5], in order to show that M' has a stationary distribution, we only need to show that there exist a state x_0 ≥ 0 and a scalar ε > 0 such that:

    D_x \le -\varepsilon, \qquad \forall\, x > x_0    (16)
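
For completeness, the drift criterion being invoked can be stated as follows; this is our paraphrase of Pakes' lemma from standard references, not a quotation of [20], and it also records the finite-drift requirement for the remaining states, which holds here since the number of arrivals per superframe has finite mean.

    \textbf{Pakes' lemma (as used here; paraphrased).}
    Let $\{X_n\}$ be an irreducible, aperiodic Markov chain on $\{0, 1, 2, \ldots\}$
    with drift $D_x = E[X_{n+1} - X_n \mid X_n = x]$. If $|D_x| < \infty$ for all $x$,
    and there exist $x_0 \ge 0$ and $\varepsilon > 0$ such that
    \[
        D_x \le -\varepsilon \qquad \forall\, x > x_0 ,
    \]
    then $\{X_n\}$ is positive recurrent, i.e., it has a stationary distribution.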

The drift at state x of Markov chain M' can be written as:

    D_x = E[x' \mid x] - x    (17)

where E[x' | x] is the expected length of the next superframe given that the length of the current superframe is x slots. The expected number of packets that arrive in the current superframe of size x slots, independently of the state of the arrival processes at the beginning and end of the superframe, is λx, where λ is the sum of the arrival rates to the network nodes. Because of the definition of parameter β_b, no more than (1 + β_b) λx / C of these arriving packets are destined for any given channel. In addition, there are (ℓ + 1)N control packets that will be transmitted on each wavelength within the next superframe. Therefore, the expected number of packets (data plus control) transmitted on any channel during the next superframe cannot be greater than (1 + β_b) λx / C + (ℓ + 1)N. Because of the definition of parameter β_s, the length of this next superframe cannot be greater than (1 + β_s) times this last quantity. Therefore, we can bound the expected length of the next superframe by:

    E[x' \mid x] \le (1 + \beta_s) \left[ (\ell + 1)N + (1 + \beta_b) \frac{\lambda x}{C} \right]    (18)

If we substitute this expression in (17), we obtain an upper bound on the value of the drift at state x:

    D_x \le (1 + \beta_s) \left[ (\ell + 1)N + (1 + \beta_b) \frac{\lambda x}{C} \right] - x    (19)

After some algebraic manipulation of (19), we find that (16) is satisfied if we let

    x_0 = \left\lceil \frac{\varepsilon + (1 + \beta_s)(\ell + 1)N}{1 - (1 + \beta_b)(1 + \beta_s)\frac{\lambda}{C}} \right\rceil    (20)

This x_0 is positive if and only if

    \lambda < \frac{C}{(1 + \beta_b)(1 + \beta_s)}    (21)

□

Finally, by combining Lemmata 5.1 and 5.2 we obtain the desired result:

Corollary 5.1 Markov chain M has a stationary distribution if and only if the total arrival rate λ to the network satisfies (21).

The stability condition (21) is simple yet powerful, as it provides insight into the two main factors that determine the performance of the network, namely, the degree of load balancing and the quality of the scheduling algorithm.
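
The stability condition (21) and the threshold x_0 of (20) are straightforward to evaluate numerically. The Python sketch below does so; N = 40 and C = 10 match the networks of the next section, but the look-ahead, the β parameters, the arrival rates, and ε are assumed values used only for illustration.

    import math

    def stable(total_rate, C, beta_b, beta_s):
        """Stability condition (21): lambda < C / ((1 + beta_b)(1 + beta_s))."""
        return total_rate < C / ((1.0 + beta_b) * (1.0 + beta_s))

    def drift_threshold(total_rate, C, N, lookahead, beta_b, beta_s, eps=1.0):
        """State x_0 of equation (20), beyond which the drift is at most -eps."""
        denom = 1.0 - (1.0 + beta_b) * (1.0 + beta_s) * total_rate / C
        if denom <= 0.0:
            raise ValueError("condition (21) violated: no such x_0 exists")
        return math.ceil((eps + (1.0 + beta_s) * (lookahead + 1) * N) / denom)

    N, C, ell, bb, bs = 40, 10, 2, 0.1, 0.1
    for lam in (0.72, 5.0, 8.5):            # total arrival rates, packets/slot
        if stable(lam, C, bb, bs):
            print(lam, "stable, x0 =", drift_threshold(lam, C, N, ell, bb, bs))
        else:
            print(lam, "unstable: capacity is", C / ((1 + bb) * (1 + bs)))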

As we can see, the lower the degree of load balancing (i.e., the larger the value of β_b in (21)), the lower the maximum arrival rate λ that the network can sustain (recall that C is the capacity of the network). Similarly with the scheduling efficiency, captured by parameter β_s in (21). Although (21) was derived specifically for HiPeR-ℓ, we believe that these two factors play a similar role in any reservation protocol for single-hop networks.

Let F̄ denote the mean frame size when the stability condition (21) is satisfied. From the definition of the look-ahead ℓ, a packet arriving during a frame k will be transmitted to its destination within frame k + ℓ + 1. We can then obtain the following expression for the mean packet delay D̄:

    \bar{D} = (\ell + 1) \, \bar{F}    (22)

6 Numerical Results

We demonstrate the operation of the HiPeR-ℓ protocol by considering two networks, each with N = 40 nodes and C = 10 channels. The first network, hereafter referred to as the uniform network, is such that the destination of any packet is uniformly distributed across the possible destinations. (Although we will refer to this network as "uniform", this is just a reflection of the fact that the routing probabilities are uniform. As we shall shortly see, the arrival process to each node is described by a different 2-MMBP, therefore traffic in the network is not uniform.) In other words, the routing probabilities for this network are:

    r_{ij} = \frac{1}{39}, \qquad \forall\, i \neq j \qquad \text{(uniform network)}    (23)

The second network is a client-server network. There are two servers (nodes 1 and 2) and 38 clients (nodes 3 through 40). The routing probabilities are:

    r_{ij} = \begin{cases}
        0        & i = j \\
        0.01     & i = 1, j = 2 \;\text{ or }\; i = 2, j = 1 \\
        0.99/38  & i = 1, 2; \; j = 3, \ldots, 40 \\
        0.114    & i = 3, \ldots, 40; \; j = 1, 2 \\
        0.772/38 & i, j = 3, \ldots, 40; \; i \neq j
    \end{cases} \qquad \text{(client-server network)}    (24)

The arrival process to each of the nodes of either network is described by a different 2-MMBP. Since it is not practical to provide the matrices Q_i and A_i (see (2)) for the 40 2-MMBPs of each network, we instead show two important parameters for each 2-MMBP. In Figure 5 we show the arrival rate λ_i, i = 1, ..., 40, in (3) of the 2-MMBPs describing the arrival process to each of the 40 nodes of the two networks. The total arrival rate to each network is λ_un = 0.72 (uniform network) and λ_cs = 0.73 (client-server network). In Figure 6 we show the squared coefficient of variation of the interarrival time, obtained in [19]. As we can see, the arrival processes were selected so that these two parameters take a wide range of values.

Figure 5: Arrival rate of the arrival processes to each node (arrival rate vs. node number, for the uniform and client-server networks)

Based on the results of the previous section, we have assigned receive wavelengths to the various nodes so as to spread the traffic evenly across the various channels. For the uniform network this can be achieved by simply assigning each of the 10 wavelengths to exactly four receivers. In the client-server network, however, there is more traffic entering the two servers. Therefore, we have decided to assign one wavelength to each of the two servers, while the remaining 8 wavelengths are shared by the other 38 nodes (six of these wavelengths are assigned to 5 nodes, and the other two are assigned to 4 nodes).

We have run a number of simulations to determine the frame size and mean packet delay in these networks running HiPeR-ℓ for various values of the look-ahead ℓ. In our simulations we assume that the propagation delay τ = 20 slots, the processing time ν = 100 slots, and the tuning latency Δ = 4 slots.

Figures 7, 8, and 9 plot the actual and mean frame size of the uniform network when the look-ahead ℓ is 1, 2, and 3, respectively. The size of the first 3000 frames in the simulation is plotted. From the figures we can see that the mean is well-defined, and that the size of individual frames oscillates around this mean. This behavior is expected, given the stochastic nature of the system. Very similar observations can be made for the frame size of the client-server network, shown in Figures 10, 11, and 12 for the same values of look-ahead.

For both networks, when the look-ahead ℓ is increased from 1 to 2, there is a significant decrease in the frame size. This can be explained by noting that, when the look-ahead is 1, there is an idle period after the end of each frame equal to τ + ν = 120 slots (refer also to Figure 3 describing HiPeR-1). During this period, packets may arrive to the network nodes, but no packets are transmitted. Thus, the average frame size F̄ has to be large enough so that, on the average, the number of packets transmitted during F̄ slots equals the number of packets arriving during F̄ + 120 slots.
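
With the simulation parameters just given (τ = 20 and ν = 100 slots), the selection rule (5) is easy to evaluate. The sketch below does so for a few hypothetical mean frame sizes F̄; these F̄ values are assumptions chosen for illustration, not measurements taken from the figures.

    import math

    TAU, NU = 20, 100          # propagation delay and processing time, in slots

    def lookahead(mean_frame_size: float) -> int:
        """Look-ahead selection rule of equation (5)."""
        return 1 + math.ceil((TAU + NU) / mean_frame_size)

    for F in (60, 150, 400):   # hypothetical mean frame sizes, in slots
        print(f"mean frame size {F:4d} slots -> look-ahead {lookahead(F)}")

Consistently with the discussion that follows, a look-ahead of 2 suffices once the mean frame size exceeds τ + ν = 120 slots, while shorter frames (i.e., low loads) call for a larger look-ahead.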

When the look-ahead becomes ℓ = 2, the propagation delay and processing time of 120 slots are completely overlapped with the transmission of the next frame. Thus, no idling occurs, and the frame size is smaller. On the other hand, increasing the look-ahead to ℓ = 3 does not affect the average frame size. This is because, in this case, a look-ahead ℓ = 2 is sufficient to completely mask the 120 slots of propagation delay and processing time, so we do not gain anything by using a larger value for the look-ahead.

Figure 6: Squared coefficient of variation of the interarrival time for the arrival processes to each node (squared coefficient of variation vs. node number, for the uniform and client-server networks)

Finally, in Figures 13 and 14 we show the delay versus throughput curves for the uniform and client-server network, respectively. The mean delay values are plotted with 95% confidence intervals, which, however, are so narrow that they are not visible. As we can see, a look-ahead of 1 has the worst performance. This is expected, since in this case there is an idle period of 120 slots after each frame, as discussed above. We also observe that, at low loads, a look-ahead ℓ = 3 provides for shorter delays than a look-ahead ℓ = 2, while the opposite is true for higher loads. This can be explained as follows. At low loads, few packets arrive during a frame, thus the average frame size when ℓ = 2 is not large enough to completely overlap the propagation delay and processing time. Thus, idling occurs after the end of two frames, and the result is longer delays than with a look-ahead ℓ = 3. As the load increases, the average frame size for ℓ = 2 also increases. When the load is such that the 120 slots are completely masked with ℓ = 2, no further gain is possible by using ℓ = 3. That is, a look-ahead ℓ = 3 will not decrease the frame size, but will increase the delay, as seen from (22). Finally, a look-ahead of ℓ = 4 or more offers no advantage compared to a look-ahead of ℓ = 3, resulting in a higher delay.
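
Equation (22) makes the delay cost of an unnecessarily large look-ahead explicit. The toy computation below evaluates D̄ = (ℓ + 1) F̄ for hypothetical mean frame sizes; the frame-size numbers are assumptions that merely mirror the qualitative behavior described above (a large drop from ℓ = 1 to ℓ = 2, and no change beyond ℓ = 2), not simulation results.

    # Hypothetical mean frame sizes (slots) per look-ahead value.
    mean_frame_size = {1: 500, 2: 300, 3: 300, 4: 300}

    for ell, F in mean_frame_size.items():
        delay = (ell + 1) * F          # mean packet delay, equation (22)
        print(f"look-ahead {ell}: mean frame {F} slots -> mean delay {delay} slots")

Once F̄ stops shrinking with ℓ, each additional unit of look-ahead simply adds another F̄ to the mean delay, which is why ℓ = 4 or more never helps.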

Figure 7: Frame size of the uniform network when the look-ahead is ℓ = 1 (actual and average frame size vs. frame number)

Figure 8: Frame size of the uniform network when the look-ahead is ℓ = 2 (actual and average frame size vs. frame number)

Figure 9: Frame size of the uniform network when the look-ahead is ℓ = 3 (actual and average frame size vs. frame number)

Figure 10: Frame size of the client-server network when the look-ahead is ℓ = 1 (actual and average frame size vs. frame number)

Figure 11: Frame size of the client-server network when the look-ahead is ℓ = 2 (actual and average frame size vs. frame number)

Figure 12: Frame size of the client-server network when the look-ahead is ℓ = 3 (actual and average frame size vs. frame number)

Figure 13: Delay vs. throughput for the uniform network (average packet delay vs. efficiency, for look-ahead ℓ = 1, 2, 3)

Figure 14: Delay vs. throughput for the client-server network (average packet delay vs. efficiency, for look-ahead ℓ = 1, 2, 3)

Results similar to the ones presented here have been observed for a variety of networks with different routing probabilities and different 2-MMBPs. Overall, we have observed that the frame size and the mean packet delay are mainly determined by the degree of load balancing and the quality of scheduling, in accordance with (21). Our results also indicate that, in order to achieve the best performance possible, the value of the look-ahead must be carefully selected to ensure that no idling occurs between successive frames.

7 Concluding Remarks

We have considered the media access problem arising in single-hop WDM networks. We introduced HiPeR-ℓ, a new reservation protocol designed to overcome the problems posed by non-negligible processing, tuning, and propagation delays. In HiPeR-ℓ, nodes send multiple reservation requests in a single control packet. As a result, the control requirements of the protocol are low, and nodes