Calculating Call Blocking and Utilization for Communication Satellites that Use Dynamic Resource Allocation


Leah Rosenbaum, Mohit Agrawal, Leah Birch, Yacoub Kureh, Nam Lee
UCLA Institute for Pure and Applied Mathematics (IPAM)
460 Portola Plaza, Box 957121
Los Angeles, CA 90095-7121
lfrosenbaum@gmail.com

Abstract

The performance of most satellite communication (SATCOM) systems is characterized by loading analyses that assess the percentage of users or total throughput a particular system can satisfy. These analyses usually assume a static allocation of resources in which users request communication resources 100% of the time and higher priority users often block lower priority users from getting service. However, the loading of more dynamic circuit networks, such as the public-switched telephone network (PSTN), is typically analyzed on a statistical basis, where the probability of a blocked call is computed. These types of systems can potentially satisfy more users than those that use static resource allocation because they take advantage of statistical multiplexing. As SATCOM moves toward a more dynamic concept of operations (CONOPS) to take advantage of potential statistical multiplexing gains, it is crucial to develop analysis capabilities to evaluate performance. In this paper, a method is developed to calculate call blocking, preemption, and resource utilization for dynamically-allocated SATCOM systems in which users have different priorities and bandwidth requirements. The first part of the study augments the M/M/m queuing model to account for users with different priorities and bandwidth requirements. In the second part of the study, the model is used to predict the performance of two competing traffic classes with different bandwidths or priorities and to highlight important trends. Finally, the third part of the study directly compares the performance of static and dynamic resource allocation approaches.
This work was performed by The Aerospace Corporation in collaboration with a team of students representing the Research in Industrial Projects for Students (RIPS) Program. Administered by the UCLA Institute for Pure & Applied Mathematics (IPAM), RIPS provides opportunities for high-achieving undergraduate students to work in teams on real-world research projects proposed by a sponsor from industry.

James Hant, Brian Wood, Eric Campbell, James Gidney
The Aerospace Corporation
2310 E. El Segundo Blvd.
El Segundo, CA 90245
310-336-1388
james.j.hant@aero.org

TABLE OF CONTENTS
1. INTRODUCTION
2. THEORETICAL MODEL
3. PERFORMANCE OF DYNAMIC RESOURCE ALLOCATION
4. COMPARISON OF STATIC AND DYNAMIC RESOURCE ALLOCATION
5. CONCLUSIONS AND FUTURE WORK
REFERENCES
BIOGRAPHIES

1. INTRODUCTION

Satellite communication (SATCOM) systems often have limited resources to satisfy communication circuits, which need to be managed among competing users who have different priorities and bandwidth needs. Most of these systems allocate resources on a static basis in which users are given access to communication circuits for long periods of time in priority order. A pictorial view of this type of allocation approach is shown in Figure 1. This example assumes a total system capacity of 100 Mbps and 18 requested circuits with different bandwidths and priorities (high, medium, and low).

978-1-4577-0557-1/12/$26.00 ©2012 IEEE

Figure 1: Static Resource Allocation Approach

With this type of scheme, high-priority users are given their own reserved channel, regardless of their usage pattern, which causes lower priority users to be blocked and the system to be underutilized. For this example, all low priority users are blocked even though the server utilization is less than 100 Mbps during most of the time. A dynamic resource allocation approach is shown in Figure 2, in which users are allocated resources (in priority order) only when those resources are specifically needed. For this case, the system utilization is increased and more of the lower priority users are satisfied, even though some of these users may be preempted by higher priority users.

Figure 2: Dynamic Resource Allocation Approach

The potential benefit of implementing a dynamic resource allocation scheme depends on the time-varying bandwidth needs of the different priority users. As the duty cycles of the users decrease, a dynamic allocation approach can more easily take advantage of multiplexing. In this paper, classical queuing theory is expanded to highlight some of the basic trade-offs that determine the performance of static vs. dynamic resource allocation schemes.

For this study, a SATCOM system with dynamic resource allocation is modeled as the M/M/m/0 queuing model [1, 2] shown in Figure 3, with m available circuits, no queuing buffer, and circuit arrivals and departures described by exponential distributions. This allows us to expand on classical queuing theory to estimate performance and determine trends.

Figure 3: M/M/m/0 Queuing Model

The following three types of traffic conditions are considered for the dynamic resource allocation system:

1. Single traffic type: all requested circuits have the same priority and bandwidth requirements
2. Two competing traffic classes with different priorities
3. Two competing traffic classes with different bandwidth requirements

Theoretical models for user satisfaction (or blocking/preemption probability) and system utilization are determined for these different traffic conditions, and a direct comparison is made between static and dynamic allocation approaches.

The organization of this paper is as follows. In Section 2, a theoretical model for dynamic resource allocation is developed assuming a single traffic type or two competing traffic classes with different priorities or bandwidths. In Section 3, the theoretical model is used to generate results that demonstrate some of the basic performance trends.
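The M/M/m/0 model introduced above also lends itself to direct simulation; the paper's verification used a MATLAB discrete-event model, and a minimal Python sketch of the same blocked-calls-cleared idea (the function name and defaults here are illustrative, not from the paper) might look like:

```python
import random

def simulate_loss_system(lam, mu, m, n_arrivals=200_000, seed=1):
    """Discrete-event simulation of an M/M/m/0 (Erlang loss) system.

    Circuits arrive with exponential inter-arrival times (rate lam) and
    hold a server for an exponential time (rate mu); arrivals that find
    all m servers busy are lost. Returns the observed blocking fraction."""
    rng = random.Random(seed)
    t = 0.0
    departures = []   # departure times of circuits currently in service
    blocked = 0
    for _ in range(n_arrivals):
        t += rng.expovariate(lam)                      # next circuit request
        departures = [d for d in departures if d > t]  # release finished servers
        if len(departures) >= m:
            blocked += 1                               # all circuits busy: call lost
        else:
            departures.append(t + rng.expovariate(mu)) # occupy a circuit
    return blocked / n_arrivals
```

With λ = μ and m = 2, the observed blocking fraction converges toward the Erlang B value of 0.2 as the number of simulated arrivals grows.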

In Section 4, the performance of static and dynamic allocation schemes is compared for two competing traffic classes with different priorities or different bandwidths. Finally, conclusions and suggestions for future work are presented in Section 5.

2. THEORETICAL MODEL

In this section, theoretical models for dynamic resource allocation are developed for a single traffic type, two competing priorities, and two competing bandwidths. To evaluate system performance, we consider the following two performance measures: call blocking/preemption probability and server (or bandwidth) utilization. A call is blocked when there are not enough servers available in the system to handle the job. Preemption occurs when a low priority user gets kicked off a server by a high-priority user who requests to use the system. Server utilization describes the average system utilization, or how much bandwidth is occupied on an average basis. These measures are tracked as a function of the traffic intensity, ρ, which is the ratio of the overall arrival and departure rates of the system. A discrete-event simulation model [4] was also generated in MATLAB [3] to verify all theoretical results.

Single Traffic Type

For one job type, we can consider a stochastic processing network with m servers and a queue of length 0. As such, there can be at most m jobs in the system at any time. If a job seeks to enter the system but no free servers are available, the job is blocked. Because blocked jobs are lost forever, this system is known as the Erlang loss system [2]. A state transition diagram for this type of system is shown in Figure 4.

Figure 4: State Transition Diagram for an M/M/m/0 Queuing System

System state is defined as the number of servers that are currently occupied, and it changes whenever a new job arrives at or leaves the system. The inter-arrival times of jobs entering the system are described by an exponential distribution with an average arrival rate of λ. The time required to service each job is also described by an exponential distribution with an average service rate of μ. Once the system is in a given state, the probability of entering another state is fixed and independent of the system's past states, and thus the system can be modeled by a finite-state, continuous-time Markov process [1]. When the underlying operating policy is first-in-first-out (FIFO), the M/M/m/0 system can be described by the following infinitesimal transition rate matrix, Λ:

\Lambda = \begin{bmatrix} -\lambda & \lambda & & & \\ \mu & -(\lambda+\mu) & \lambda & & \\ & 2\mu & -(\lambda+2\mu) & \ddots & \\ & & \ddots & \ddots & \lambda \\ & & & m\mu & -m\mu \end{bmatrix} \quad (1)

Assuming the process is stationary and irreducible, the probability that the system is in a particular state, π, is calculated by finding the unique solution to the following two equations:

\pi \Lambda = 0 \quad (2)

\sum_{i=0}^{m} \pi_i = 1 \quad (3)

Blocking probability can then be calculated as the probability that the system is in state m (that is, that the server is fully utilized), and the mean system occupancy (or server utilization) can be calculated as a weighted sum of the probabilities of being in each state.

A similar analysis can be done for a dynamic allocation system with two competing priorities; however, now the state transition diagram and infinitesimal transition rate matrix will be different. To gain some insight into how to develop a state transition diagram for competing priorities, consider the m=1 case (shown in Figure 5).

Figure 5: Continuous-Time Markov Chain for Competing Priorities

An m=1 system with two competing priorities can be in one of 3 states: unoccupied (0), servicing a low-priority circuit (low), or servicing a high-priority circuit (high). State transitions are determined by the arrival rates of high (λ_H) and low (λ_L) priority circuits and the service rate for those circuits (μ). The infinitesimal transition rate matrix, Λ, for this system is given by equation (4).
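Equations (1)–(3) can be solved numerically for any m, and the same recipe (build Λ, solve πΛ = 0 together with the normalization constraint) carries over to the priority and bandwidth chains that follow. A short sketch, assuming NumPy and with hypothetical function names, including the closed-form Erlang B recurrence as a cross-check:

```python
import numpy as np

def mmm0_blocking_and_utilization(lam, mu, m):
    """Solve pi @ L = 0 with sum(pi) = 1 for the M/M/m/0 birth-death
    chain of equation (1); return (blocking probability, utilization)."""
    L = np.zeros((m + 1, m + 1))       # infinitesimal transition rate matrix
    for i in range(m):
        L[i, i + 1] = lam              # arrival: state i -> i+1
        L[i + 1, i] = (i + 1) * mu     # departure: state i+1 -> i
    np.fill_diagonal(L, -L.sum(axis=1))
    # Drop one redundant balance equation and append normalization (3).
    A = np.vstack([L.T[:-1], np.ones(m + 1)])
    b = np.zeros(m + 1); b[-1] = 1.0
    pi = np.linalg.lstsq(A, b, rcond=None)[0]
    blocking = pi[m]                                    # P(all m servers busy)
    utilization = sum(i * p for i, p in enumerate(pi)) / m
    return blocking, utilization

def erlang_b(a, m):
    """Closed-form Erlang B blocking via the stable recurrence on 1/B."""
    inv_b = 1.0
    for k in range(1, m + 1):
        inv_b = 1.0 + k / a * inv_b
    return 1.0 / inv_b
```

For example, with λ = μ and m = 2, both routes give a blocking probability of 0.2 and a mean server utilization of 0.4.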

\Lambda = \begin{bmatrix} -(\lambda_L+\lambda_H) & \lambda_L & \lambda_H \\ \mu & -(\mu+\lambda_H) & \lambda_H \\ \mu & 0 & -\mu \end{bmatrix} \quad (4)

This model can be extended to any value of m, and the blocking probability and server utilization can be calculated based on equations (2) and (3) for different values of λ_H, λ_L, and μ. These values can be used to determine the corresponding traffic intensities for high and low priority traffic, defined as ρ_H = λ_H/(mμ) and ρ_L = λ_L/(mμ), respectively.

A continuous-time Markov chain can also be defined for a dynamic allocation system with two competing bandwidths. To better understand how this is done, consider a system with a server capacity of 4 bandwidth units and two job classes: jobs requiring 1 bandwidth unit (with an arrival rate of λ_1) and jobs requiring 2 bandwidth units (with an arrival rate of λ_2). Assuming both traffic classes have the same service rate, μ, the Markov chain shown in Figure 6 can be used to describe the system.

Figure 6: Continuous-Time Markov Chain for Competing Bandwidths

Note that this chain is two-dimensional and the ordered pair (n_1, n_2) indicates the state of the system having n_1 jobs of the first class (those requesting 1 bandwidth unit) and n_2 jobs of the second class (those requesting 2 bandwidth units).

An infinitesimal transition rate matrix can be generated for this Markov chain, or for any two competing bandwidths with an arbitrary number of servers. The blocking probability and server utilization can then be calculated as a function of the total traffic intensity and the ratio of arrival rates for both classes. For the case of competing bandwidth classes, the total traffic intensity is given by equation (5), where λ_i is the arrival rate for bandwidth class i, B_i is the number of servers (or bandwidth units) requested by class i, m is the total number of servers (or bandwidth units), and μ is the service rate for each user.

\rho = \frac{\sum_i \lambda_i B_i}{m\mu} \quad (5)

3. PERFORMANCE OF DYNAMIC RESOURCE ALLOCATION

In this section, the theoretical models developed in Section 2 are used to highlight some of the basic trends of a dynamic resource allocation system with a single traffic type, two competing priorities, and two competing bandwidth classes.

Single Traffic Type

Figure 7 shows the performance of a dynamic resource allocation system with a single traffic type. Blocking probability and mean server utilization are plotted in Figures 7(a) and 7(b), respectively, for different numbers of servers, m. As expected, both blocking probability and server utilization increase with increasing traffic intensity; however, performance is much better for larger m. This implies that systems with a larger number of servers (greater bandwidth resolution) have less blocking and can more efficiently use resources.

Figure 7: Performance for a Single Traffic Type
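The two-dimensional chain of Figure 6 can be handled the same way: enumerate every feasible state (n_1, n_2), build the rate matrix, and solve for the stationary distribution. The sketch below (NumPy assumed, function name hypothetical) uses the 4-unit example with B_1 = 1 and B_2 = 2, and reports per-class blocking plus mean bandwidth utilization:

```python
import numpy as np

def two_bandwidth_loss(lam1, lam2, mu, capacity=4, b1=1, b2=2):
    """Stationary analysis of a two-bandwidth-class loss system.

    States are pairs (n1, n2) with n1*b1 + n2*b2 <= capacity. Returns
    (class-1 blocking, class-2 blocking, mean bandwidth utilization)."""
    states = [(n1, n2)
              for n1 in range(capacity // b1 + 1)
              for n2 in range(capacity // b2 + 1)
              if n1 * b1 + n2 * b2 <= capacity]
    idx = {s: k for k, s in enumerate(states)}
    n = len(states)
    Q = np.zeros((n, n))
    for (n1, n2), k in idx.items():
        used = n1 * b1 + n2 * b2
        if used + b1 <= capacity:
            Q[k, idx[(n1 + 1, n2)]] = lam1        # class-1 arrival fits
        if used + b2 <= capacity:
            Q[k, idx[(n1, n2 + 1)]] = lam2        # class-2 arrival fits
        if n1 > 0:
            Q[k, idx[(n1 - 1, n2)]] = n1 * mu     # class-1 departure
        if n2 > 0:
            Q[k, idx[(n1, n2 - 1)]] = n2 * mu     # class-2 departure
    np.fill_diagonal(Q, -Q.sum(axis=1))
    A = np.vstack([Q.T[:-1], np.ones(n)])         # balance + normalization
    rhs = np.zeros(n); rhs[-1] = 1.0
    pi = np.linalg.lstsq(A, rhs, rcond=None)[0]
    blk1 = sum(pi[idx[s]] for s in states if s[0] * b1 + s[1] * b2 + b1 > capacity)
    blk2 = sum(pi[idx[s]] for s in states if s[0] * b1 + s[1] * b2 + b2 > capacity)
    util = sum(pi[idx[s]] * (s[0] * b1 + s[1] * b2) for s in states) / capacity
    return blk1, blk2, util
```

As a quick check of the trend described in the text: with λ_1 = λ_2 = μ = 1, the small-bandwidth class sees a strictly lower blocking probability than the large-bandwidth class, since the large jobs are blocked in every state that blocks small jobs and more besides.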

Two Competing Priorities

Assuming a queuing system with 100 servers (m=100), the performance of the two competing priorities (high and low) is plotted in Figure 8 and Figure 9, respectively.

Figures 8(a) and (b) plot the blocking probability and server utilization for high priority traffic as a function of the high priority traffic intensity, ρ_H, for different arrival ratios of high and low priority traffic (λ_H/λ_L). The performance of an equivalent single-traffic-type system (M/M/100) is superimposed on the plots to assess the effect of prioritization. Results show that high priority traffic only competes with itself, and its performance is completely determined by the high priority traffic intensity. Regardless of the ratio of arrivals of high and low priority traffic, high priority traffic performs identically to an M/M/100 system at the high priority traffic intensity.

Figure 8: Performance for Two Competing Priorities (High Priority Users)

Figures 9(a) and (b) plot the blocking probability and server utilization for low priority traffic as a function of the total traffic intensity for different arrival ratios of high and low priority traffic (λ_H/λ_L). The performance of an equivalent single-traffic-type system (M/M/100) is again superimposed on the plots to assess the effect of prioritization. The performance of low priority traffic, on the other hand, is highly dependent on the ratio of high and low priority traffic. When most of the traffic arrivals are low priority (λ_H/λ_L is small), the system behaves similarly to an M/M/100 system. However, when most of the traffic arrivals are high priority (λ_H/λ_L is large), the low priority traffic is preempted and its performance degrades considerably from M/M/100.

Figure 9: Performance for Two Competing Priorities (Low Priority Users)

Two Competing Bandwidth Classes

The results for a system with 100 total servers and two competing traffic classes with bandwidths of 1 and 10 servers (or bandwidth units) each are shown in Figure 10 below. Blocking probability and mean server utilization are plotted as a function of traffic intensity with the ratio of arrivals for both traffic classes (λ_1/10λ_10) as a parameter. Note that the ratio of arrivals is weighted by the bandwidth required for each traffic class. To gain greater insight, results are plotted alongside cases in which each traffic class is by itself. This corresponds to an M/M/100 model for the

bandwidth 1 class and an M/M/10 model for the bandwidth 10 class.

Figure 10: Total Performance for Two Competing Bandwidths

Results show that the total blocking probability is lower-bounded, and the server utilization is upper-bounded, by the smallest bandwidth class (M/M/100). Interestingly, the blocking probability is not upper-bounded (and the server utilization is not lower-bounded) by the largest bandwidth class (M/M/10). When the ratio of small and large bandwidth traffic is near 1, the system actually has worse performance than if the large bandwidth traffic were by itself. This is because the small bandwidth traffic can more easily block the large bandwidth traffic from getting service.

The blocking probabilities of the small and large bandwidth users are shown in Figures 11(a) and (b), respectively. Blocking probability is plotted as a function of the ratio of arrivals for the large and small bandwidth traffic classes (λ_1/10λ_10) with total traffic intensity as a parameter. To gain greater insight into how the two traffic classes compete for resources, the performance of equivalent single-traffic-type systems (M/M/100 for the bandwidth 1 class and M/M/10 for the bandwidth 10 class) is superimposed onto the results.

Figure 11: Performance of Small and Large Bandwidth Users for Two Competing Bandwidths

Figure 11(a) shows that when a majority of the traffic is large bandwidth users (λ_1/10λ_10 is small), the blocking probability for small bandwidth users is similar to the case where only the large bandwidth traffic is competing with itself (M/M/10). Conversely, when most of the traffic consists of small bandwidth users (λ_1/10λ_10 is large), the blocking probability is similar to the case where only the small bandwidth traffic is competing with itself (M/M/100). Interestingly, small bandwidth users have the best performance when there is a nearly equal ratio of large and small bandwidth users. This is because small bandwidth users can more easily take advantage of available servers and block the large bandwidth users from getting service.

Figure 11(b) shows a more consistent trend for the large bandwidth users. When a majority of the traffic is large bandwidth users (λ_1/10λ_10 is small), the performance of large bandwidth users is dominated by these users competing with themselves (M/M/10). As the amount of small bandwidth traffic increases, more of the large bandwidth users begin to get blocked by the small bandwidth users. At the point where the ratio of small-to-large bandwidth users is nearly equal (λ_1/10λ_10 = 1), almost no large bandwidth users are able to get through at the higher traffic intensities (ρ = 1.5 or 2).

4. COMPARISON OF STATIC AND DYNAMIC RESOURCE ALLOCATION

With an analysis method developed to compute the blocking probability and server utilization for dynamic SATCOM systems with competing priorities and bandwidths, the performance of static and dynamic allocation can be compared. Figure 12 compares the total performance of static and dynamic resource allocation for a SATCOM system with two competing priorities (high and low) and 100 total servers. The arrival rates of the low and high priority traffic are equal. A static system must pre-allocate resources for each priority, and the following combinations of high:low priority servers were tested: 99:1, 95:5, 90:10, 80:20, 70:30, 60:40, and 50:50. The dynamic system was able to allocate resources dynamically, with low priority circuits being preempted by high priority circuits when all servers were occupied. Total satisfaction (defined as 1 minus the blocking/preemption probability) was plotted vs. server utilization for all the cases tested.

Figure 12: Comparison of Static and Dynamic Resource Allocation for Two Competing Priorities (Total Performance)

Results show that dynamic resource allocation outperforms static resource allocation for all possible configurations. Regardless of the number of servers pre-allocated to high or low priority users by a static allocation scheme, dynamic resource allocation always achieves better user satisfaction at a given server utilization.

The performance of the high and low priority traffic classes under static and dynamic resource allocation is shown in Figure 13. User satisfaction for high and low priority users is plotted as a function of traffic intensity in Figures 13(a) and (b), respectively, for all system configurations considered. Results show that for the high priority users, dynamic resource allocation always outperforms static resource allocation. However, that is not always true for the low priority users. At lower traffic intensities, dynamic allocation provides improved performance for low priority traffic. At higher traffic intensities, however, dynamic allocation allows high priority requests to outcompete low priority requests, leading to lower satisfaction than static allocation cases where resources have been set aside for low priority users.

Figure 13: Comparison of Static and Dynamic Resource Allocation for Two Competing Priorities (Performance for each priority)

A similar comparison of static and dynamic allocation schemes was generated for two competing bandwidth classes. Figure 14 compares the performance of static and dynamic resource allocation for a SATCOM system with 100 total servers and two competing traffic classes with different bandwidth requirements: one that requests 1 server and the other that requests 10 servers. The arrival rates of the small and large bandwidth classes were assumed to be equal (λ_1/10λ_10 = 1). The static system pre-allocated 50 servers for each bandwidth class, while the dynamic system was able to allocate resources for both classes dynamically. Total satisfaction (defined as 1 minus the blocking probability)

and server utilization were plotted vs. traffic intensity in Figures 14(a) and (b), respectively.

Figure 14: Comparison of Static and Dynamic Resource Allocation for Two Competing Bandwidths (Total Performance)

Results show that the dynamic resource allocation system has slightly better total performance than the static system, with higher user satisfaction and server utilization at all traffic intensities.

The performance for small and large bandwidth users is shown in Figure 15 and Figure 16, respectively. Figure 15 plots user satisfaction and server utilization for small bandwidth users as a function of traffic intensity for both the dynamic allocation system (shown in green) and the static allocation system (shown in blue).

Figure 15: Comparison of Static and Dynamic Resource Allocation for Two Competing Bandwidths (Small Bandwidth Users)

Results show that dynamic resource allocation has much better performance at the higher traffic intensities. As traffic intensity approaches 1, dynamic resource allocation allows small bandwidth users to utilize more than 50% of the bandwidth resources, resulting in greater user satisfaction and server utilization. The static resource allocation system only allocates 50 servers to these users, which greatly reduces their satisfaction and utilization at the higher traffic intensities. This improvement in performance for small bandwidth users at high traffic intensities comes at the expense of the large bandwidth users (as shown in Figure 16). Figure 16 plots user satisfaction and server utilization for large bandwidth users as a function of traffic intensity for both the dynamic allocation system (shown in green) and the static allocation system (shown in blue).
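The pooling gain underlying these comparisons can be illustrated with the Erlang B formula alone. The sketch below is a deliberate simplification of the paper's model (it ignores preemption and unequal bandwidths, treating both classes as identical single-server users; function names are hypothetical): it compares blocking when 100 servers are split into two reserved 50-server pools against one fully shared pool carrying the same total offered load.

```python
def erlang_b(a, m):
    """Erlang B blocking probability for offered load a (Erlangs) and m
    servers, via the numerically stable recurrence on 1/B."""
    inv_b = 1.0
    for k in range(1, m + 1):
        inv_b = 1.0 + k / a * inv_b
    return 1.0 / inv_b

def compare(total_load, m=100):
    """Blocking under a static 50/50 split vs. a fully shared pool."""
    static = erlang_b(total_load / 2, m // 2)   # each class confined to 50 servers
    dynamic = erlang_b(total_load, m)           # both classes share all 100 servers
    return static, dynamic
```

At a total offered load of 90 Erlangs, for instance, the shared pool blocks markedly fewer calls than the partitioned configuration; this is the statistical multiplexing gain that Figures 12 and 14 quantify for the full model.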

Figure 16: Comparison of Static and Dynamic Resource Allocation for Two Competing Bandwidths (Large Bandwidth Users)

Results show that at the higher traffic intensities, the performance of the large bandwidth users is degraded considerably by the dynamic resource allocation scheme. This is because at the higher traffic intensities, dynamic resource allocation allows the small bandwidth users to take away resources from the large bandwidth users, reducing their satisfaction and utilization. The static allocation system, on the other hand, pre-allocates 50 servers for just the large bandwidth users, resulting in much better performance. Since large bandwidth users are so disadvantaged, it may be necessary to fence off resources for them if dynamic allocation systems are eventually implemented in a SATCOM system.

5. CONCLUSIONS AND FUTURE WORK

An analytical model has been generated to measure user satisfaction (or blocking/preemption probability) and resource utilization for dynamically-allocated SATCOM systems that have users with competing priorities and bandwidths. Results show that users who request a smaller fraction of the total bandwidth resources have better performance (less blocking and higher utilization) than higher bandwidth users. For competing priorities, high priority traffic only competes with itself, and its performance is determined by the high priority traffic intensity. Lower priority users are highly dependent on the amount of high-priority traffic; as the ratio of high priority jobs increases, more low priority jobs are preempted and their bandwidth utilization is degraded. For competing bandwidths, total performance is upper-bounded by the smaller bandwidth traffic class, and small bandwidth jobs can more easily block large bandwidth jobs from getting service. Systems with large and small bandwidth jobs arriving at similar rates perform marginally worse than if the large bandwidth jobs were only competing with themselves. Dynamic allocation schemes provide better overall performance than comparable static allocation schemes; however, at the higher traffic intensities they provide worse performance for low priority and large bandwidth users.

Future work will consider more sophisticated models of satellite resources and traffic load, including the effects of frequency channels, time slots, antenna coverage, beam pointing, and requested circuits with duty cycles. Additional model enhancements may also include allowing more than two competing priority and bandwidth classes, re-entry procedures for preempted jobs, and improvements to the dynamic allocation algorithm. It will also be important to model some of the real-world consequences of implementing a Demand Assigned Multiple Access (DAMA) scheme for SATCOM, including the effects of protocol delays and network management.

REFERENCES

[1] S. M. Ross, Introduction to Probability Models, Sixth Edition. Academic Press, 1997.
[2] E. Çınlar, Introduction to Stochastic Processes. Prentice-Hall, 1997.
[3] A. Gilat, MATLAB: An Introduction with Applications. John Wiley & Sons, 2005.
[4] S. M. Ross, Simulation. Academic Press, 2002.