Performance Model of a Distributed Simulation Middleware CS672 Project Report May 5, 2004 Roger Wuerfel Yuhui Wang
Introduction

Distributed simulation is increasingly used in many fields to bring together unique models into one system in order to simulate complex environments and problems. As distributed simulation systems grow, performance becomes a major issue. Most large simulation projects take only a cursory look at performance during development and end up tackling performance issues after the system is built and is not performing as needed. Federation performance, and modeling the performance of a federation, has been examined in several papers [1, 2, 3, 4, 5], but none has produced a model that was shown to produce valid results; these papers only described hypothetical situations in which their models could be used. The ability to create and validate such a model would be very useful to the distributed simulation community.

The Run-Time Infrastructure (RTI) is an implementation of the High Level Architecture (HLA), which was designed as a distributed simulation infrastructure. Appendix A gives a short description of the HLA and the RTI, but in simple terms it can be thought of as a publication and subscription system for interactions and object updates. Interactions can be thought of as events, and objects as persistent representations of entities in the simulation. Each object has a state that can be updated by the owning simulation node. Other nodes that have subscribed to a particular type of object receive messages with the new state information when it is updated. In HLA a simulation node is termed a federate, and a set of federates executing together is termed a federation.

Project Goal

During federation integration, small scenarios that are representative of the goal scenario are used to test the federates for interoperability. Usually the performance is adequate for these small scenarios, and not much is done to prepare for the goal scenario.
This project will attempt to use system measurements of a small-scale scenario to create and validate a performance model of a system using an RTI.

System Description

The system to be measured is defined as the two federates' local RTI components (LRCs) and the network, as shown in the following diagram.
[Diagram: Federate 1 with arrival rate λ1 into its LRC, through the Network, to the LRC of Federate 2 with throughput χ2; the two LRCs and the Network form the system under study]

The metrics under study are the throughput and response time of messages from one federate to the other. Throughput is designated as the minimum of the sender and receiver throughput [6]. Sender throughput is measured in calls to send messages per second, and receiver throughput is measured in the number of messages received per second. The response time for the system under study is the total time from the start of sending a block of updates until the last update is received by the receiving federate.

Test Program

The test program is a benchmark program designed to measure the throughput of the RTI with one sending and one receiving federate. The program creates a given number of objects and then runs cycles of updating each object as fast as possible. Throughput is measured for each cycle by timing the execution of all the updates and the receptions. At the end of each cycle the receiving federate sends a report interaction to the sending federate containing the number of messages received and the time taken to receive them. The sending federate then prints out the updates per second that it sent along with the reflections per second seen by the receiving federate. The sending federate sends a simple start interaction to the receiver, and the cycle starts again. All network message traffic is sent using TCP/IP.

Project Plan

The test program will be used to measure system resource utilization for different-sized loads in order to create a workload characterization and performance model. The performance model will be validated against the measured response time of the system. The performance model will then be used to predict how the system will respond when the loading is increased. The system will be measured with the increased load and compared against the predictions, and the predictions will then be assessed against the collected data.

Equipment

- The NG-Pro V1.0.1 implementation of the High Level Architecture v1.3
- Two Windows laptop computers and a 100 Mbps switch
- Sender: Windows XP, 1.2 GHz processor, 1 GB of RAM
- Receiver: Windows XP, 700 MHz processor, 384 MB of RAM
- Custom software using the Windows performance counters to record CPU and disk usage at the sending and receiving sides

Workload Characterization

Basic Components and Characterizing Parameters

The basic components of the workload model are object updates, object reflections, sent interactions, and received interactions. The system throughput is measured only for the object updates, because the number of interactions is low enough compared with the number of object updates that they can be ignored; for the current tests there are two interactions sent for every 10,000 object updates. Therefore, this study concentrates on the object update loading and takes measurements accordingly. The characterizing parameters for the object updates are the number of attributes in each update, the size of the attributes, and the frequency of updates. For this test each object has only one attribute, and the main parameter to be varied is the size of that attribute. The benchmark updates the objects as fast as possible, so the frequency is a measured quantity.

Sender Side Workload

The sending program runs in a tight loop for the input number of updates, calling the RTI updateAttributeValues call on each iteration of the loop. By default, the RTI itself bundles these updates to improve performance. The bundling parameters initially used for this experiment were a timeout of 5 ms and a size limit of … bytes. To characterize the sender side workload for use in a queuing network model, it is necessary to find the model's input parameter values in terms of the rate of the messages input to the sender's LRC. Trying to model the bundling became a problem, because at the application level the throughput is measured in terms of individual updates per second, while at the network level the throughput is measured in terms of bundles per second.
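The unit mismatch can be illustrated with a small calculation. The update rate and bundle size limit below are illustrative placeholders (the configured size limit did not survive transcription); the 36 bytes of RTI per-message headers come from the message size table later in this report.

```python
# Illustration of the bundling unit mismatch: the application-level
# arrival rate is X updates/sec, but with bundling enabled the network
# sees only Y bundles/sec, with Y < X. All numbers are illustrative.

updates_per_sec = 1500        # X: rate into the sender's CPU queue (hypothetical)
update_bytes = 1024 + 36      # 1024-byte attribute plus RTI message/attribute headers
bundle_limit_bytes = 8192     # hypothetical bundle size limit

updates_per_bundle = bundle_limit_bytes // update_bytes
bundles_per_sec = updates_per_sec / updates_per_bundle

print(f"{updates_per_sec} updates/s -> {bundles_per_sec:.0f} bundles/s")
# To an open queueing network, messages appear to be destroyed at the
# sender's CPU queue and recreated at the receiver, which the model
# does not handle.
```

With these placeholder values, seven updates fit in a bundle, so the network queue sees roughly one seventh of the application-level message rate.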
This causes a problem when specifying the queuing model's arrival rate input parameters: X messages go into the first queue, the sender's CPU, and are aggregated into Y messages (where Y < X) going into the network. At the receiving side the Y messages are disaggregated back into X messages to be delivered to the receiver. To the queuing network it would therefore appear that messages were destroyed in one place and created in another, which is not considered in this queuing model. It was decided to simplify the analysis and turn bundling off in the RTI, which forces each update to be sent on the network as a single network message.

Network Workload

The network will be modeled as a queue instead of a plain delay device, because the arrival rates to the network are high enough that the computers may have to buffer the messages. A previous internal investigation of the application overhead of the messages
was used as the basis of the following network message size estimation. The network overhead and service demand for the network queue were calculated using the analysis from Capacity Planning for Web Performance, chapter 3 [7].

Network speed: 100 Mbps

Per-message protocol overhead:
  Segment               Size (bytes)
  Ethernet
  TCP header            20
  IP header             20
  GIOP header           64
  Event Set header      28
  Event Payload header  20
  Total                 112

Each RTI message:
  Segment           Size (bytes)
  Message header    32
  Attribute header  4
  Attribute         (varies)

With no bundling, for each benchmark attribute size the following were computed: message size (bytes), number of datagrams per message, overhead per message (bytes), and service time for one message (sec).

Receiver Side Workload

The receiving program runs in a tight loop waiting to receive the known number of messages, or until it times out. It then sends a report interaction back to the sender stating how many messages it received and how long it took to receive them. Therefore, to characterize the receiver side workload for use in a queuing network model, we need to find the model's input parameter values in terms of the rate at which messages are received and processed. In this benchmark the application does not do any actual processing of the received messages beyond a memcpy of the message contents into a local variable.
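The per-message service-time estimate described above can be reproduced with a short calculation. The 112 bytes of protocol overhead and the 36 bytes of RTI headers come from the tables; for simplicity the sketch assumes each message fits in a single datagram, whereas the report's table also accounts for messages spanning multiple datagrams.

```python
# Sketch of the network service-demand estimate: one unbundled update
# carries the attribute payload, 36 bytes of RTI headers, and 112 bytes
# of protocol overhead, transmitted over a 100 Mbps switch.

NETWORK_BPS = 100e6          # 100 Mbps switch
OVERHEAD = 112               # Ethernet/TCP/IP/GIOP/event headers (bytes)
RTI_HEADERS = 32 + 4         # RTI message header + attribute header (bytes)

def network_service_time(attribute_size):
    """Service time (seconds) for one unbundled update message."""
    total_bytes = attribute_size + RTI_HEADERS + OVERHEAD
    return total_bytes * 8 / NETWORK_BPS

for size in (128, 256, 512, 1024, 2048):
    print(f"{size:5d} bytes -> {network_service_time(size) * 1e3:.4f} ms")
```

For the 128-byte benchmark size this gives about 0.022 ms per message, consistent with the roughly 0.02 ms figure cited later in the report.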
System Measurements

The system measurements are the CPU usage at the sending and receiving LRCs and the system throughput in terms of object updates or receptions per second for each cycle. These measurements allow the service demand of a single update to be determined at the sender and receiver. The network service demand is calculated from knowledge of the network, as previously described. The system does not do any disk I/O, so disk utilization can be ignored. The arrival rate is the rate at which the sending federate updates the objects and is measured as the system-level throughput in updates per second.

The number of updates in each cycle was intentionally chosen to be large (10,000 messages) so that the total elapsed time would be long in comparison to the values we want to measure. It is also large in comparison to the granularity of some of the Windows clocks, which can be ten milliseconds. With 10,000 updates, the elapsed time for the smallest message size is on the order of six seconds.

The total time for all messages to be received is measured as the elapsed time from when the sender starts sending until it receives the report back from the receiver that all messages have been received. This elapsed time can be skewed by several factors. One is the time for the report to be sent from the receiver to the sender; from the network service demand calculations, a 128-byte message takes about 0.02 ms to send, so a small message transmission is negligible compared with the time to be measured, which is on the order of seconds. The second factor is that federates must invoke a method on the LRC to receive callbacks. The benchmark federates use the version of this method that allows a minimum and maximum time to be specified.
The method will spend at least the minimum amount of time, and no more than the maximum amount of time, waiting for and processing network messages and delivering callbacks to the federate. This means that even if the message the federate was waiting on arrived at the start of this window, the method still waits until the minimum time has elapsed before returning and allowing the federate to send an interaction. This potential lengthening of the response time was compensated for by calculating the elapsed time from when the waited-for interaction was actually received by a callback until the receiver sends the report; this elapsed time is included in the report and is subtracted from the sender's elapsed time.

Measurement Techniques

The measurements of the sender's and receiver's CPU usage are taken from the same Windows system library used by the Performance Monitor on Windows computers. For the sender's CPU, the data used is % User Time. On the receiver side, the CPU usage is broken into three steps. Processes receive interrupts from the network interface card and spend time in the interrupt service routines (ISRs); Windows systems also use deferred procedure calls (DPCs) to handle network traffic. Before a process can access the data from the network interface card, these two steps must happen. Thus, the first queue used in the model is measured with % Interrupt Time. The next queue in the model is %
DPC Time; DPCs service devices with interrupts enabled. The third queue is measured with the % User Time of the process on the receiving machine.

Priority level is also factored into the measurements for both the sender and receiver. Interrupts have the highest priority in Windows systems; DPCs have lower priority than interrupts but run with interrupts enabled; and a process's use of the CPU in user mode has lower priority than DPCs. Therefore, an elongation factor is calculated and applied to both the % DPC Time and % User Time values before they are used in the model.

Measurement Results

The benchmark program was executed for ten cycles, with each cycle sending 10,000 updates, for five message sizes: 128, 256, 512, 1024, and 2048 bytes. The STDOUT of the sending and receiving federates was captured to a file and then imported into Excel to analyze the data. The following table is a sample of the data collected for the sending federate with 1024-byte messages.

[Table: sample sender data for 1024-byte messages, with per-cycle rows, a total, and an average; columns #UAV, send time(ms), Total %, %Int, %DPC, Proc %, % Priv, % User, Total Time]

Figure 1: Data from sender with 1024-byte messages

The labels for the sender-side data table are described in the following table.

  Label          Description
  #UAV           Number of updateAttributeValues calls
  send time(ms)  Time to make all the UAV calls
  Total %        Total system % CPU used
  %Int           Total system % CPU used for interrupts
  %DPC           Total system % CPU used for DPCs
  Proc %         % CPU used by the benchmark process
  % Priv         % CPU in privileged mode for the benchmark process
  % User         % CPU in user mode for the benchmark process
  Total Time     Total time to send and receive all the UAVs (sec)

The following table is a sample of the data collected for the receiving federate with 1024-byte messages.

[Table: sample receiver data for 1024-byte messages, with per-cycle rows and an average; columns #RAV, recv time(ms), Total %, %Int, %DPC, Proc %, % Priv, % User]

Figure 2: Data from receiver with 1024-byte messages

The labels for the receiver-side data table are described in the following table.

  Label          Description
  #RAV           Number of reflectAttributeValues calls
  recv time(ms)  Time to receive all the RAV calls
  Total %        Total system % CPU used
  %Int           Total system % CPU used for interrupts
  %DPC           Total system % CPU used for DPCs
  Proc %         % CPU used by the benchmark process
  % Priv         % CPU in privileged mode for the benchmark process
  % User         % CPU in user mode for the benchmark process

The %Int, %DPC, and % User data was used to calculate the service demands for the queues of the model. The following table shows an example of the elongation factor
being calculated and applied to the receiver's CPU measurements for the 1024-byte message test data.

[Table: elongation factor example for the receiver, 1024-byte messages; rows for Interrupt, DPC, and Application, with columns Description, % CPU, Sum higher, Elongation, and the resulting Throughput, Dint, Ddpc, and Dcpu values]

The primary measured value of interest is the total time it took to send and receive all of the data messages, since this is what the model's output will be compared against. This value is listed as Total Time in the sender's data table (Figure 1). The descriptive statistics capability of Excel was used to create the following table.

[Table: descriptive statistics for Total Time; Mean, Standard Error, Median, Mode (#N/A), Standard Deviation, Sample Variance, Kurtosis, Skewness, Range, Minimum, Maximum, Sum, Count (10), Largest(1), Smallest(1), Confidence Level (95.0%)]

This indicates that there is a 95% probability that the sample interval of … to … contains the true mean of the population. This, combined with the coefficient of variation of the mean of the data equaling …, would seem to indicate that we have a statistically meaningful set of Total Time measurements.

Performance Model Development

The system was initially envisioned to be modeled as three queues: a CPU for the sender (CPUsend), the network (NET), and a CPU for the receiver (CPUrecv). After working through the measurement techniques previously described, the five-queue model was
chosen, as shown in the following diagram. CPUsend models the sender's CPU, Network models the 100 Mbps network switch, CPUint models the ISR handling of the incoming message at the receiver, CPUdpc models the CPU used in the DPC, and CPUrecv models the application's use of the CPU to receive the message.

[Diagram: five-queue open model, CPUsend to Network to CPUint to CPUdpc to CPUrecv]

The model is an open queuing network because the input is specified as an arrival rate; the arrival rate in this case is the throughput of the system.

Performance Model Validation

The performance model was validated by running the data collected from the test runs of the 128-, 256-, 512-, 1024-, and 2048-byte message sizes through the OpenQN.xls model supplied with Performance by Design. (OpenQN.xls is the Open Multiclass Queuing Networks workbook that accompanies "Performance by Design," "Capacity Planning for Web Services," and "Scaling for E-Business" by D. A. Menascé and V. A. F. Almeida, Prentice Hall.) The model input for the 1024-byte message case is shown in the following table.

  No. Queues: 5    No. of Classes: 1    Arrival Rate: 745

  Service Demand Matrix (classes across, queues down):
  Queue     Type (LI/D/MPn)
  CPUsend   LI
  Network   LI
  CPUint    LI
  CPUdpc    LI
  CPUrecv   LI

The residence-time output for the 1024-byte message case is shown in the following table.

[Table: residence times for CPUsend, Network, CPUint, CPUdpc, and CPUrecv, and the total response time]

Since the response time is for one message and the Total Time measurement is for 10,000 messages, the response time was multiplied by 10,000 and then plotted against the measured Total Time values. The following plot shows one set of test runs of the Total Time and the OpenQN.xls response time for each of the message sizes.

[Plot: Measured vs. Calculated; system response time (sec) vs. message size (bytes), showing Total Time, Response Time, and a linear trendline fitted to Response Time]

Another run of the data produced the following graph.
[Plot: Measured vs. Calculated, second run; system response time (sec) vs. message size (bytes), showing Total Time, Response Time, and a linear trendline fitted to Response Time]

The graphs show that the calculated values are much higher than the measured values, but they follow the same trend, as shown by the linear regression trendline. There may be several reasons for the discrepancy: inaccuracies in the relationships of the measurements, unknowns about the exact content of the percent-CPU measurements, and unknowns about the relationship between messages and DPCs.

Since the system response time being measured was defined by a start time on one computer and an end time on another, there are inaccuracies in the measurement of the Total Time and in the collection of the system data. The system data is collected on both systems and may include more or less activity than should really be attributed to the processing needed to send all the messages. The sender knows exactly when to start measuring, because it starts the cycle, but does not know exactly when to stop measuring; only the receiver knows that. The receiver has the opposite problem: it knows the exact end time but not the exact start time. Synchronized clocks between the systems might alleviate this concern.

The full relationship between the interrupt and DPC usage of the CPU and the percent-CPU usage reported for the process itself is not fully understood. One reference [8] states that % DPC Time is already included in the % Privileged Time measure. From the collected data this cannot be true, since the average % Privileged Time for the example data is 6% while the % DPC Time is 69%. This leaves in question the relationship between these two values, and which values should be accounted for and how.

The relationship between a single message and a single DPC is also not clear. It may be that several messages are handled by a single DPC, which would affect the performance model since it relies on this relationship.
This is supported by the fact that for the
collected data the descriptive statistics show that for the sender the coefficient of variation of the % CPU for Interrupts is 0.56 and for the % CPU DPC is …; for the receiver the value for the % CPU for Interrupts is 0.64 and for the % CPU DPC is …. One other oddity of the data is that for the 128- and 256-byte messages several of the % CPU for Interrupts samples are zero. This could be caused by the granularity of the system clock that measures this value. To further investigate this issue, a longer run of 100 iterations with the 128-byte message size was executed to see whether the coefficient of variation would improve with a larger sample size; the coefficients of variation turned out to be about the same. The % CPU for Interrupts was then plotted against the % CPU DPC for this longer run, producing the following charts for the sender and receiver.

[Chart: sender, % CPU DPC vs. % CPU Int]
[Chart: receiver, % CPU DPC vs. % CPU Int]

These graphs clearly show clustering of the data and the need to break these measurements into more classes in the model. More research should be done to understand the reasons for this clustering so that the appropriate classes can be differentiated and any relationship between the sender-side and receiver-side clusters can be properly accounted for. Another possibility is that the underlying Windows implementation of TCP/IP may be doing some bundling of messages. This could cause the same kind of difficulties in using the model as explained previously for the RTI's default bundling of its messages.

Summary and Future Work

We did not quite meet our original goal of predicting the performance of different loads, because we spent a lot of time initially trying to model the bundling of messages done by the RTI, and later on validation. Validation of the model took considerable time while we worked out the issues involving the interrupts and DPCs and their effect on the model. In spite of the difference in the actual values, the fact that the modeled data follows the trend of the real data is encouraging. As stated in the introduction, there has been some work on creating performance models of federations, but none has actually been validated. The need to break the % CPU DPC and % CPU Interrupt measurements into classes looks like a possible reason that the model values were higher than the measured values; this should be the first follow-on work performed.
After improving the validation of the model, it can be applied to other classes of federates. In particular, most federates send out a variety of sizes of data. A prediction of performance for federates that send different-sized messages can be explored using linear regression from the baseline values for the basic message sizes. The model can also be extended to take into account interactions and other services that the RTI provides, such as time management and data distribution management.
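For reference, the open queueing-network calculation performed by OpenQN.xls reduces, for a single class of load-independent queues, to a few lines. The service demands below are illustrative placeholders, not the measured demands; only the arrival rate (745 updates/sec for the 1024-byte case), the five queue names, and the factor of 10,000 come from this report.

```python
# Single-class open queueing network with load-independent queues:
# utilization U_i = lambda * D_i, residence time R_i = D_i / (1 - U_i),
# response time R = sum of R_i. Service demands are placeholders.

arrival_rate = 745.0  # updates/sec (1024-byte case, from the report)

demands = {            # seconds per update (hypothetical values)
    "CPUsend": 0.00050,
    "Network": 0.00009,
    "CPUint":  0.00020,
    "CPUdpc":  0.00060,
    "CPUrecv": 0.00030,
}

response_time = 0.0
for queue, d in demands.items():
    u = arrival_rate * d                  # utilization; must be < 1
    assert u < 1, f"{queue} is saturated"
    r = d / (1 - u)                       # residence time at this queue
    response_time += r
    print(f"{queue}: U={u:.2f}  R={r * 1e3:.3f} ms")

# Total time for a cycle of 10,000 updates, as compared in the validation
print(f"predicted total time: {response_time * 10_000:.2f} s")
```

The per-message response time is multiplied by 10,000 exactly as in the validation, so the printed total can be compared directly against a measured Total Time.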
Appendix A: Description of the High Level Architecture and the Associated Runtime Infrastructure

The HLA is a set of standards that allows the creation of large simulations by combining smaller simulations into a cooperating group called a federation. Each simulation in a federation is called a federate and contributes at least one function necessary to accomplish the goals of the federation. In this regard, the HLA is a component-based architecture in which the components are federates.

The HLA is designed as a publish/subscribe system. Federates declare that they will publish certain data and that they are interested in certain data; each federate will therefore potentially be sending data, receiving data, or both. Besides the exchange of simulation data, the HLA also provides the following categories of simulation services.

1. Federation Management
2. Declaration Management
3. Object Management
4. Ownership Management
5. Time Management
6. Data Distribution Management

Discussion of all of these services is beyond the scope of this report; the interested reader is referred to Creating Computer Simulation Systems [9] for more information on all of the HLA services. The four services used in this performance study are Federation Management, Declaration Management, Object Management, and Time Management. Federation Management services are focused on creating the logical federation and allowing a particular federate to join it. Declaration Management is the declarative component of the publish/subscribe paradigm; these services allow the federate to state what data it will publish and what data it wants to subscribe to. These declarations are based on the concepts of objects and interactions as the data. Objects are persistent entities with state that may or may not change during the execution of the federation.
Federates create objects with the Object Management services and then update the object state information when necessary. These updates are delivered by the infrastructure to federates that have subscribed to those types of objects; this delivery of updates is termed reflecting the update. Each object has a unique handle that the owner and receiving federates can use for identification. Interactions are single events with no persistence. They are sent by a federate and received by subscribing federates; because they are one-time events, they do not have a unique handle.

The Time Management services are concerned with maintaining a causally correct order of simulation events, where events are defined as object updates and sent interactions. Each federate attaches a logical time to each event when it is generated. Logical time is not necessarily tied to wall-clock time or to any particular representation or unit of time; it is merely a means to order events. The HLA infrastructure will then
deliver these events to subscribers in logical time order. In order to guarantee that federates do not receive events with logical times less than the federate's current logical time, all federates must coordinate their advance of logical time with the entire federation.

A federation is created when a stakeholder has a need to satisfy with simulation and the participant simulations are identified. The federate representatives negotiate a shared data representation, using the HLA Object Model Template, that will be the basis of their interactions. Each simulation must then be modified to use that shared data representation in its interaction with the software that realizes the HLA, the runtime infrastructure (RTI). The RTI software implements the HLA application programmer's interface (API) specification.

The Runtime Infrastructure Next Generation (RTI-NG) is one implementation of an RTI, developed by SAIC under the sponsorship of DMSO. From the network architecture point of view, the RTI-NG is a hybrid of a centralized and a decentralized distributed system, with each node designated as a federate and the entire system as a federation. Certain HLA services require a centralized repository of information, such as the need to ensure that object names and handles are unique in the federation and the coordinated advancement of logical time. The system is also decentralized, because each federate maintains a connection to all other federates in order to exchange simulation data. The following figure shows the interconnections of the federates within the RTI-NG in detail. Each federate maintains a TCP/IP connection to all the other federates and to the centralized process, the Federation Executive (FedExec). Exchange of simulation data between the federates is accomplished through these direct connections. The FedExec process is the centralized coordination process for services such as time management.
To implement the Time Management services, all of the reliable events with time stamps are counted at each federate as they are sent and received. When all the federates are ready to advance their logical time, the FedExec process requests the counts of sent and received time-stamped events from each federate. The federates are only allowed to advance in logical time when the number of sent events equals the number of received events.
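The count-based coordination described above can be sketched in a few lines. The class and function names are illustrative, not the RTI-NG's actual interfaces.

```python
# Sketch of the RTI-NG time-advance check: the FedExec collects each
# federate's counts of sent and received time-stamped events and grants
# the advance only when every sent event has been received somewhere in
# the federation. Names are illustrative only.

from dataclasses import dataclass

@dataclass
class FederateCounts:
    name: str
    sent: int       # time-stamped events sent by this federate
    received: int   # time-stamped events received by this federate

def can_advance(federates):
    """Grant a logical-time advance when total sent == total received."""
    total_sent = sum(f.sent for f in federates)
    total_received = sum(f.received for f in federates)
    return total_sent == total_received

feds = [FederateCounts("F1", sent=10, received=4),
        FederateCounts("F2", sent=2, received=8)]
print(can_advance(feds))   # all 12 sent events delivered -> True
```

Note that the check is over federation-wide totals: an individual federate's sent and received counts need not balance, as in the example above.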
[Diagram 1: RTI-NG connection architecture; Federates 1 through 4 fully interconnected, each also connected to the Federate Executive]

References

1. Kolek, S., Boswell, S., and Wolfson, H., "Toward Predictive Models of Federation Performance: Essential Instrumentation," Fall 2000 Simulation Interoperability Workshop, 00F-SIW-085, September 2000.
2. Miller, D. and Boswell, S., "A General Framework for Modeling Federation Performance," Spring 2001 Simulation Interoperability Workshop, 01S-SIW-070, March 2001.
3. Boswell, S. and Miller, D., "Techniques for Measuring and Modeling Federation Performance," European 2001 Simulation Interoperability Workshop, 01E-SIW-063, June 2001.
4. Bers, J., Carlson, L., and Boswell, S., "Monitoring, Measuring and Analyzing Federation Performance," Spring 2002 Simulation Interoperability Workshop, 02S-SIW-038, March 2002.
5. Wuerfel, R. and Olszewsky, J., "An RTI Performance Testing Framework," Fall 1999 Simulation Interoperability Workshop, 99F-SIW-127, September 1999.
6. Wuerfel, R. and Olszewsky, J., "Defining RTI Performance," Spring 1999 Simulation Interoperability Workshop, 99S-SIW-100, March 1999.
7. Menascé, D. A. and Almeida, V. A. F., Capacity Planning for Web Performance: Metrics, Models, & Methods, Prentice Hall PTR.
8. Friedman, M. and Pentakalos, O., Windows 2000 Performance Guide, O'Reilly & Associates, Cambridge, Mass.
9. Kuhl, F., Weatherly, R., and Dahmann, J., Creating Computer Simulation Systems, Prentice Hall PTR, 1999.
EqualLogic Storage and Non-Stacking Switches Sizing and Configuration THIS WHITE PAPER IS FOR INFORMATIONAL PURPOSES ONLY, AND MAY CONTAIN TYPOGRAPHICAL ERRORS AND TECHNICAL INACCURACIES. THE CONTENT IS
More informationProcess- Concept &Process Scheduling OPERATING SYSTEMS
OPERATING SYSTEMS Prescribed Text Book Operating System Principles, Seventh Edition By Abraham Silberschatz, Peter Baer Galvin and Greg Gagne PROCESS MANAGEMENT Current day computer systems allow multiple
More informationCPU scheduling. Alternating sequence of CPU and I/O bursts. P a g e 31
CPU scheduling CPU scheduling is the basis of multiprogrammed operating systems. By switching the CPU among processes, the operating system can make the computer more productive. In a single-processor
More informationReal-Time Protocol (RTP)
Real-Time Protocol (RTP) Provides standard packet format for real-time application Typically runs over UDP Specifies header fields below Payload Type: 7 bits, providing 128 possible different types of
More informationLixia Zhang M. I. T. Laboratory for Computer Science December 1985
Network Working Group Request for Comments: 969 David D. Clark Mark L. Lambert Lixia Zhang M. I. T. Laboratory for Computer Science December 1985 1. STATUS OF THIS MEMO This RFC suggests a proposed protocol
More informationPerformance Characterization of the Dell Flexible Computing On-Demand Desktop Streaming Solution
Performance Characterization of the Dell Flexible Computing On-Demand Desktop Streaming Solution Product Group Dell White Paper February 28 Contents Contents Introduction... 3 Solution Components... 4
More informationConfiguring Cisco IOS IP SLAs Operations
CHAPTER 50 This chapter describes how to use Cisco IOS IP Service Level Agreements (SLAs) on the switch. Cisco IP SLAs is a part of Cisco IOS software that allows Cisco customers to analyze IP service
More informationTable of Contents. Copyright Pivotal Software Inc,
Table of Contents Table of Contents Greenplum Command Center User Guide Dashboard Query Monitor Host Metrics Cluster Metrics Monitoring Multiple Greenplum Database Clusters Historical Queries & Metrics
More informationECE 650 Systems Programming & Engineering. Spring 2018
ECE 650 Systems Programming & Engineering Spring 2018 Networking Transport Layer Tyler Bletsch Duke University Slides are adapted from Brian Rogers (Duke) TCP/IP Model 2 Transport Layer Problem solved:
More informationNetwork Design Considerations for Grid Computing
Network Design Considerations for Grid Computing Engineering Systems How Bandwidth, Latency, and Packet Size Impact Grid Job Performance by Erik Burrows, Engineering Systems Analyst, Principal, Broadcom
More informationConfiguring Cisco IOS IP SLA Operations
CHAPTER 58 This chapter describes how to use Cisco IOS IP Service Level Agreements (SLA) on the switch. Cisco IP SLA is a part of Cisco IOS software that allows Cisco customers to analyze IP service levels
More informationRaw Data Formatting: The RDR Formatter and NetFlow Exporting
CHAPTER 9 Raw Data Formatting: The RDR Formatter and NetFlow Exporting Revised: September 27, 2012, Introduction Cisco Service Control is able to deliver gathered reporting data to an external application
More informationMidterm II December 4 th, 2006 CS162: Operating Systems and Systems Programming
Fall 2006 University of California, Berkeley College of Engineering Computer Science Division EECS John Kubiatowicz Midterm II December 4 th, 2006 CS162: Operating Systems and Systems Programming Your
More informationSOFT 437. Software Performance Analysis. Ch 7&8:Software Measurement and Instrumentation
SOFT 437 Software Performance Analysis Ch 7&8: Why do we need data? Data is required to calculate: Software execution model System execution model We assumed that we have required data to calculate these
More informationLies, Damn Lies and Performance Metrics. PRESENTATION TITLE GOES HERE Barry Cooks Virtual Instruments
Lies, Damn Lies and Performance Metrics PRESENTATION TITLE GOES HERE Barry Cooks Virtual Instruments Goal for This Talk Take away a sense of how to make the move from: Improving your mean time to innocence
More informationAnalytic Performance Models for Bounded Queueing Systems
Analytic Performance Models for Bounded Queueing Systems Praveen Krishnamurthy Roger D. Chamberlain Praveen Krishnamurthy and Roger D. Chamberlain, Analytic Performance Models for Bounded Queueing Systems,
More informationptop: A Process-level Power Profiling Tool
ptop: A Process-level Power Profiling Tool Thanh Do, Suhib Rawshdeh, and Weisong Shi Wayne State University {thanh, suhib, weisong}@wayne.edu ABSTRACT We solve the problem of estimating the amount of energy
More informationPCnet-FAST Buffer Performance White Paper
PCnet-FAST Buffer Performance White Paper The PCnet-FAST controller is designed with a flexible FIFO-SRAM buffer architecture to handle traffic in half-duplex and full-duplex 1-Mbps Ethernet networks.
More informationInital Starting Point Analysis for K-Means Clustering: A Case Study
lemson University TigerPrints Publications School of omputing 3-26 Inital Starting Point Analysis for K-Means lustering: A ase Study Amy Apon lemson University, aapon@clemson.edu Frank Robinson Vanderbilt
More informationChapter 6: CPU Scheduling. Operating System Concepts 9 th Edition
Chapter 6: CPU Scheduling Silberschatz, Galvin and Gagne 2013 Chapter 6: CPU Scheduling Basic Concepts Scheduling Criteria Scheduling Algorithms Thread Scheduling Multiple-Processor Scheduling Real-Time
More informationIn examining performance Interested in several things Exact times if computable Bounded times if exact not computable Can be measured
System Performance Analysis Introduction Performance Means many things to many people Important in any design Critical in real time systems 1 ns can mean the difference between system Doing job expected
More informationData & Computer Communication
Basic Networking Concepts A network is a system of computers and other devices (such as printers and modems) that are connected in such a way that they can exchange data. A bridge is a device that connects
More informationBenchmarking results of SMIP project software components
Benchmarking results of SMIP project software components NAILabs September 15, 23 1 Introduction As packets are processed by high-speed security gateways and firewall devices, it is critical that system
More informationValidation of Router Models in OPNET
Validation of Router Models in OPNET B. Van den Broeck, P. Leys, J. Potemans 1, J. Theunis, E. Van Lil, A. Van de Capelle Katholieke Universiteit Leuven (K.U.Leuven) Department of Electrical Engineering
More informationTutorial Parallel & Distributed Simulation Systems and the High Level Architecture
Tutorial Parallel & Distributed Simulation Systems and the High Level Architecture Kalyan Perumalla, Ph.D. Research Faculty Member College of Computing & MSREC Georgia Institute of Technology Atlanta,
More informationNET0183 Networks and Communications
Lectures 7 and 8 Measured performance of an Ethernet Ethernet is a CSMA/CD network. Carrier Sense Multiple Access with Collision Detection 1 Historical Case Study http://portal.acm.org/beta/citation.cfm?id=359044
More informationLecture 4: Introduction to Computer Network Design
Lecture 4: Introduction to Computer Network Design Instructor: Hussein Al Osman Based on Slides by: Prof. Shervin Shirmohammadi Hussein Al Osman CEG4190 4-1 Computer Networks Hussein Al Osman CEG4190 4-2
More informationSmall verse Large. The Performance Tester Paradox. Copyright 1202Performance
Small verse Large The Performance Tester Paradox The Paradox Why do people want performance testing? To stop performance problems in production How do we ensure this? Performance test with Realistic workload
More informationDesigning High Performance IEC61499 Applications on Top of DDS
ETFA2013 4th 4DIAC Users Workshop Designing High Performance IEC61499 Applications on Top of DDS Industrial communications Complex Different solutions at the different layers Fieldbus at bottom layers:
More informationAdapting Mixed Workloads to Meet SLOs in Autonomic DBMSs
Adapting Mixed Workloads to Meet SLOs in Autonomic DBMSs Baoning Niu, Patrick Martin, Wendy Powley School of Computing, Queen s University Kingston, Ontario, Canada, K7L 3N6 {niu martin wendy}@cs.queensu.ca
More informationPerformance of UMTS Radio Link Control
Performance of UMTS Radio Link Control Qinqing Zhang, Hsuan-Jung Su Bell Laboratories, Lucent Technologies Holmdel, NJ 77 Abstract- The Radio Link Control (RLC) protocol in Universal Mobile Telecommunication
More informationUNIT 2 TRANSPORT LAYER
Network, Transport and Application UNIT 2 TRANSPORT LAYER Structure Page No. 2.0 Introduction 34 2.1 Objective 34 2.2 Addressing 35 2.3 Reliable delivery 35 2.4 Flow control 38 2.5 Connection Management
More informationThe Network Layer and Routers
The Network Layer and Routers Daniel Zappala CS 460 Computer Networking Brigham Young University 2/18 Network Layer deliver packets from sending host to receiving host must be on every host, router in
More informationOn Network Dimensioning Approach for the Internet
On Dimensioning Approach for the Internet Masayuki Murata ed Environment Division Cybermedia Center, (also, Graduate School of Engineering Science, ) e-mail: murata@ics.es.osaka-u.ac.jp http://www-ana.ics.es.osaka-u.ac.jp/
More informationSequence Number. Acknowledgment Number. Data
CS 455 TCP, Page 1 Transport Layer, Part II Transmission Control Protocol These slides are created by Dr. Yih Huang of George Mason University. Students registered in Dr. Huang's courses at GMU can make
More informationTechnical Brief: Specifying a PC for Mascot
Technical Brief: Specifying a PC for Mascot Matrix Science 8 Wyndham Place London W1H 1PP United Kingdom Tel: +44 (0)20 7723 2142 Fax: +44 (0)20 7725 9360 info@matrixscience.com http://www.matrixscience.com
More informationBest Practices for Deploying a Mixed 1Gb/10Gb Ethernet SAN using Dell EqualLogic Storage Arrays
Dell EqualLogic Best Practices Series Best Practices for Deploying a Mixed 1Gb/10Gb Ethernet SAN using Dell EqualLogic Storage Arrays A Dell Technical Whitepaper Jerry Daugherty Storage Infrastructure
More informationContinuous Real Time Data Transfer with UDP/IP
Continuous Real Time Data Transfer with UDP/IP 1 Emil Farkas and 2 Iuliu Szekely 1 Wiener Strasse 27 Leopoldsdorf I. M., A-2285, Austria, farkas_emil@yahoo.com 2 Transilvania University of Brasov, Eroilor
More informationPerformance Objects and Counters for the System
APPENDIXA Performance Objects and for the System May 19, 2009 This appendix provides information on system-related objects and counters. Cisco Tomcat Connector, page 2 Cisco Tomcat JVM, page 4 Cisco Tomcat
More informationFuxiSort. Jiamang Wang, Yongjun Wu, Hua Cai, Zhipeng Tang, Zhiqiang Lv, Bin Lu, Yangyu Tao, Chao Li, Jingren Zhou, Hong Tang Alibaba Group Inc
Fuxi Jiamang Wang, Yongjun Wu, Hua Cai, Zhipeng Tang, Zhiqiang Lv, Bin Lu, Yangyu Tao, Chao Li, Jingren Zhou, Hong Tang Alibaba Group Inc {jiamang.wang, yongjun.wyj, hua.caihua, zhipeng.tzp, zhiqiang.lv,
More informationHomework 1. Question 1 - Layering. CSCI 1680 Computer Networks Fonseca
CSCI 1680 Computer Networks Fonseca Homework 1 Due: 27 September 2012, 4pm Question 1 - Layering a. Why are networked systems layered? What are the advantages of layering? Are there any disadvantages?
More informationExtracting Performance and Scalability Metrics From TCP. Baron Schwartz Postgres Open September 16, 2011
Extracting Performance and Scalability Metrics From TCP Baron Schwartz Postgres Open September 16, 2011 Consulting Support Training Development For MySQL October 24-25, London /live Agenda Capturing TCP
More informationCase Study II: A Web Server
Case Study II: A Web Server Prof. Daniel A. Menascé Department of Computer Science George Mason University www.cs.gmu.edu/faculty/menasce.html 1 Copyright Notice Most of the figures in this set of slides
More informationLoad Dynamix Enterprise 5.2
DATASHEET Load Dynamix Enterprise 5.2 Storage performance analytics for comprehensive workload insight Load DynamiX Enterprise software is the industry s only automated workload acquisition, workload analysis,
More informationJoe Wingbermuehle, (A paper written under the guidance of Prof. Raj Jain)
1 of 11 5/4/2011 4:49 PM Joe Wingbermuehle, wingbej@wustl.edu (A paper written under the guidance of Prof. Raj Jain) Download The Auto-Pipe system allows one to evaluate various resource mappings and topologies
More informationGustavo Alonso, ETH Zürich. Web services: Concepts, Architectures and Applications - Chapter 1 2
Chapter 1: Distributed Information Systems Gustavo Alonso Computer Science Department Swiss Federal Institute of Technology (ETHZ) alonso@inf.ethz.ch http://www.iks.inf.ethz.ch/ Contents - Chapter 1 Design
More informationSysGauge SYSTEM MONITOR. User Manual. Version 3.8. Oct Flexense Ltd.
SysGauge SYSTEM MONITOR User Manual Version 3.8 Oct 2017 www.sysgauge.com info@flexense.com 1 1 SysGauge Product Overview SysGauge is a system and performance monitoring utility allowing one to monitor
More informationVMWARE VREALIZE OPERATIONS MANAGEMENT PACK FOR. NetApp Storage. User Guide
VMWARE VREALIZE OPERATIONS MANAGEMENT PACK FOR User Guide TABLE OF CONTENTS 1. Purpose... 3 2. Introduction to the Management Pack... 3 2.1 Understanding NetApp Integration... 3 2.2 How the Management
More informationCloud Monitoring as a Service. Built On Machine Learning
Cloud Monitoring as a Service Built On Machine Learning Table of Contents 1 2 3 4 5 6 7 8 9 10 Why Machine Learning Who Cares Four Dimensions to Cloud Monitoring Data Aggregation Anomaly Detection Algorithms
More informationMiddleware and Interprocess Communication
Middleware and Interprocess Communication Reading Coulouris (5 th Edition): 41 4.1, 42 4.2, 46 4.6 Tanenbaum (2 nd Edition): 4.3 Spring 2015 CS432: Distributed Systems 2 Middleware Outline Introduction
More informationOPC UA Configuration Manager Help 2010 Kepware Technologies
OPC UA Configuration Manager Help 2010 Kepware Technologies 1 OPC UA Configuration Manager Help Table of Contents 1 Getting Started... 2 Help Contents... 2 Overview... 2 Server Settings... 2 2 OPC UA Configuration...
More informationRicardo Rocha. Department of Computer Science Faculty of Sciences University of Porto
Ricardo Rocha Department of Computer Science Faculty of Sciences University of Porto Slides based on the book Operating System Concepts, 9th Edition, Abraham Silberschatz, Peter B. Galvin and Greg Gagne,
More informationIP SLAs Overview. Finding Feature Information. Information About IP SLAs. IP SLAs Technology Overview
This module describes IP Service Level Agreements (SLAs). IP SLAs allows Cisco customers to analyze IP service levels for IP applications and services, to increase productivity, to lower operational costs,
More informationDISTRIBUTED COMPUTER SYSTEMS
DISTRIBUTED COMPUTER SYSTEMS MESSAGE ORIENTED COMMUNICATIONS Dr. Jack Lange Computer Science Department University of Pittsburgh Fall 2015 Outline Message Oriented Communication Sockets and Socket API
More informationWorkload Characterization Techniques
Workload Characterization Techniques Raj Jain Washington University in Saint Louis Saint Louis, MO 63130 Jain@cse.wustl.edu These slides are available on-line at: http://www.cse.wustl.edu/~jain/cse567-08/
More informationAppendix B. Standards-Track TCP Evaluation
215 Appendix B Standards-Track TCP Evaluation In this appendix, I present the results of a study of standards-track TCP error recovery and queue management mechanisms. I consider standards-track TCP error
More informationPerformance Best Practices Paper for IBM Tivoli Directory Integrator v6.1 and v6.1.1
Performance Best Practices Paper for IBM Tivoli Directory Integrator v6.1 and v6.1.1 version 1.0 July, 2007 Table of Contents 1. Introduction...3 2. Best practices...3 2.1 Preparing the solution environment...3
More informationPLEASE READ CAREFULLY BEFORE YOU START
Page 1 of 20 MIDTERM EXAMINATION #1 - B COMPUTER NETWORKS : 03-60-367-01 U N I V E R S I T Y O F W I N D S O R S C H O O L O F C O M P U T E R S C I E N C E Fall 2008-75 minutes This examination document
More informationPLEASE READ CAREFULLY BEFORE YOU START
Page 1 of 20 MIDTERM EXAMINATION #1 - A COMPUTER NETWORKS : 03-60-367-01 U N I V E R S I T Y O F W I N D S O R S C H O O L O F C O M P U T E R S C I E N C E Fall 2008-75 minutes This examination document
More informationDetermining the Number of CPUs for Query Processing
Determining the Number of CPUs for Query Processing Fatemah Panahi Elizabeth Soechting CS747 Advanced Computer Systems Analysis Techniques The University of Wisconsin-Madison fatemeh@cs.wisc.edu, eas@cs.wisc.edu
More informationDesign and Evaluation of a Socket Emulator for Publish/Subscribe Networks
PUBLISHED IN: PROCEEDINGS OF THE FUTURE INTERNET SYMPOSIUM 2010 1 Design and Evaluation of a for Publish/Subscribe Networks George Xylomenos, Blerim Cici Mobile Multimedia Laboratory & Department of Informatics
More informationMethod-Level Phase Behavior in Java Workloads
Method-Level Phase Behavior in Java Workloads Andy Georges, Dries Buytaert, Lieven Eeckhout and Koen De Bosschere Ghent University Presented by Bruno Dufour dufour@cs.rutgers.edu Rutgers University DCS
More informationDistributed Systems. Pre-Exam 1 Review. Paul Krzyzanowski. Rutgers University. Fall 2015
Distributed Systems Pre-Exam 1 Review Paul Krzyzanowski Rutgers University Fall 2015 October 2, 2015 CS 417 - Paul Krzyzanowski 1 Selected Questions From Past Exams October 2, 2015 CS 417 - Paul Krzyzanowski
More informationPerformance of Virtual Desktops in a VMware Infrastructure 3 Environment VMware ESX 3.5 Update 2
Performance Study Performance of Virtual Desktops in a VMware Infrastructure 3 Environment VMware ESX 3.5 Update 2 Workload The benefits of virtualization for enterprise servers have been well documented.
More informationVerification and Validation of X-Sim: A Trace-Based Simulator
http://www.cse.wustl.edu/~jain/cse567-06/ftp/xsim/index.html 1 of 11 Verification and Validation of X-Sim: A Trace-Based Simulator Saurabh Gayen, sg3@wustl.edu Abstract X-Sim is a trace-based simulator
More informationChapter 4 Network Layer: The Data Plane
Chapter 4 Network Layer: The Data Plane A note on the use of these Powerpoint slides: We re making these slides freely available to all (faculty, students, readers). They re in PowerPoint form so you see
More informationQuality of Service in US Air Force Information Management Systems
Quality of Service in US Air Force Information Management Systems Co-hosted by: Dr. Joseph P. Loyall BBN Technologies Sponsored by: 12/11/2009 Material Approved for Public Release. Quality of Service is
More informationChapter 8 Virtual Memory
Operating Systems: Internals and Design Principles Chapter 8 Virtual Memory Seventh Edition William Stallings Operating Systems: Internals and Design Principles You re gonna need a bigger boat. Steven
More informationCS 162 Operating Systems and Systems Programming Professor: Anthony D. Joseph Spring Lecture 21: Network Protocols (and 2 Phase Commit)
CS 162 Operating Systems and Systems Programming Professor: Anthony D. Joseph Spring 2003 Lecture 21: Network Protocols (and 2 Phase Commit) 21.0 Main Point Protocol: agreement between two parties as to
More informationPLEASE READ CAREFULLY BEFORE YOU START
Page 1 of 11 MIDTERM EXAMINATION #1 OCT. 16, 2013 COMPUTER NETWORKS : 03-60-367-01 U N I V E R S I T Y O F W I N D S O R S C H O O L O F C O M P U T E R S C I E N C E Fall 2013-75 minutes This examination
More informationReview. EECS 252 Graduate Computer Architecture. Lec 18 Storage. Introduction to Queueing Theory. Deriving Little s Law
EECS 252 Graduate Computer Architecture Lec 18 Storage David Patterson Electrical Engineering and Computer Sciences University of California, Berkeley Review Disks: Arial Density now 30%/yr vs. 100%/yr
More informationRAPIDIO USAGE IN A BIG DATA ENVIRONMENT
RAPIDIO USAGE IN A BIG DATA ENVIRONMENT September 2015 Author: Jorge Costa Supervisor(s): Olof Barring PROJECT SPECIFICATION RapidIO (http://rapidio.org/) technology is a package-switched high-performance
More informationQuality of Service (QoS) Enabled Dissemination of Managed Information Objects in a Publish-Subscribe-Query
Quality of Service (QoS) Enabled Dissemination of Managed Information Objects in a Publish-Subscribe-Query Information Broker Dr. Joe Loyall BBN Technologies The Boeing Company Florida Institute for Human
More informationSegregating Data Within Databases for Performance Prepared by Bill Hulsizer
Segregating Data Within Databases for Performance Prepared by Bill Hulsizer When designing databases, segregating data within tables is usually important and sometimes very important. The higher the volume
More informationAnalysis of a Multiple Content Variant Extension of the Multimedia Broadcast/Multicast Service
PUBLISHED IN: PROCEEDINGS OF THE EUROPEAN WIRELESS 2006 CONFERENCE 1 Analysis of a Multiple Content Variant Extension of the Multimedia Broadcast/Multicast Service George Xylomenos, Konstantinos Katsaros
More informationSOFT 437 Quiz #2 February 26, 2015
SOFT 437 Quiz #2 February 26, 2015 Do not turn this page until the quiz officially begins. STUDENT NUMBER Please do not write your name anywhere on this quiz. I recommend writing your student number at
More informationMean Value Analysis and Related Techniques
Mean Value Analysis and Related Techniques 34-1 Overview 1. Analysis of Open Queueing Networks 2. Mean-Value Analysis 3. Approximate MVA 4. Balanced Job Bounds 34-2 Analysis of Open Queueing Networks Used
More informationMemory Addressing, Binary, and Hexadecimal Review
C++ By A EXAMPLE Memory Addressing, Binary, and Hexadecimal Review You do not have to understand the concepts in this appendix to become well-versed in C++. You can master C++, however, only if you spend
More informationOVERHEADS ENHANCEMENT IN MUTIPLE PROCESSING SYSTEMS BY ANURAG REDDY GANKAT KARTHIK REDDY AKKATI
CMPE 655- MULTIPLE PROCESSOR SYSTEMS OVERHEADS ENHANCEMENT IN MUTIPLE PROCESSING SYSTEMS BY ANURAG REDDY GANKAT KARTHIK REDDY AKKATI What is MULTI PROCESSING?? Multiprocessing is the coordinated processing
More informationJULIA ENABLED COMPUTATION OF MOLECULAR LIBRARY COMPLEXITY IN DNA SEQUENCING
JULIA ENABLED COMPUTATION OF MOLECULAR LIBRARY COMPLEXITY IN DNA SEQUENCING Larson Hogstrom, Mukarram Tahir, Andres Hasfura Massachusetts Institute of Technology, Cambridge, Massachusetts, USA 18.337/6.338
More informationKent State University
CS 4/54201 Computer Communication Network Kent State University Dept. of Computer Science www.mcs.kent.edu/~javed/class-net06f/ 1 A Course on Networking and Computer Communication LECT-11, S-2 Congestion
More informationECE519 Advanced Operating Systems
IT 540 Operating Systems ECE519 Advanced Operating Systems Prof. Dr. Hasan Hüseyin BALIK (8 th Week) (Advanced) Operating Systems 8. Virtual Memory 8. Outline Hardware and Control Structures Operating
More informationOPC UA Configuration Manager PTC Inc. All Rights Reserved.
2017 PTC Inc. All Rights Reserved. 2 Table of Contents 1 Table of Contents 2 4 Overview 4 5 Project Properties - OPC UA 5 Server Endpoints 7 Trusted Clients 9 Discovery Servers 10 Trusted Servers 11 Instance
More informationThe Role of Performance
Orange Coast College Business Division Computer Science Department CS 116- Computer Architecture The Role of Performance What is performance? A set of metrics that allow us to compare two different hardware
More information