Hardware Assisted Recursive Packet Classification Module for IPv6 Networks

Shivvasangari Subramani
Department of Computer Science and Electrical Engineering
University of Maryland Baltimore County, USA

ABSTRACT

In modern computer networks, implementing internet functions such as traffic policing, Quality of Service, firewall processing, and ordinary unicast and multicast forwarding requires classifying packets on their multi-field headers, which is computationally expensive. An estimated 14% of the processing load is spent on packet classification; a hardware assisted packet classification scheme can therefore both reduce the work load placed on the network processor and potentially speed up the entire classification process. We propose an architecture based on multiple SIMD processors to take advantage of the parallel processing paradigm offered by decomposition algorithms. Our design is composed of seven identical SIMD processors that simultaneously handle data chunks drawn from the various IP header fields. We used a single-threaded C program to simulate the classification process that occurs on a network processor, with additional logging of individual instruction execution time and frequency. We analyzed our packet classifier code under various cache structures and sizes, and also compared the performance of single-threaded and multithreaded implementations of the classifier.

1. INTRODUCTION

Packet classification is the process of comparing an incoming packet flow against a set of pre-established flow characteristics (known as filters or rules) to determine its identity. In the current generation of the Internet Protocol, IPv4, packet classification is primarily used for enhancing security, monitoring the network, and differentiating quality of service.
With the emergence of the new internet protocol, IPv6, packet classification has expanded its role to include matching flow characteristics against various flow profiles to determine a packet's best header compression method (these Robust Header Compression profiles, defined in RFC 3095 and RFC 3096, serve as the basis for different compression schemes targeting various packet contents). At its core, packet classification is a multiple-field search problem: finding the best matching filter based on exact or wildcard patterns. Since filters can have overlapping properties, a search often yields multiple matching filters for a single flow; filter priorities are therefore added to help eliminate non-exclusive search results. In the context of Robust Header Compression used for IPv6 in a wireless network, packet classification, header encoding and decoding, and CRC computation are the three most computationally intensive components on a network processor [ref 2]. Our design goal is motivated by the idea of forming three linked hardware modules that work in conjunction to reduce the computational overhead imposed on the network processor (see Figure 1). Although this paper focuses only on the packet classification module, our overall design model remains intact, and Figure 1 illustrates the placement of the packet classification module relative to the overall structure.

Our packet classification module design is derived from a class of algorithms categorized as decomposition methods. Compared with the other classes of classification algorithms - decision tree, exhaustive search and tuple space - decomposition stands out as a natural candidate for a hardware implementation. A decomposition method breaks the multiple-field search into independent searches on single fields and then combines the search results, a pattern that fits naturally onto a modern parallel SIMD processing architecture. Upon further research, we narrowed our choice to Recursive Flow Classification (RFC) due to its efficiency and speed. RFC treats packet classification as a bit-string reduction problem, in which a string must be reduced from its original S-bit length to a new T-bit length such that T << S [ref 1]. The S bits represent the total bits of all the header fields in a packet, and the T bits represent a set of matching filters found for those header fields. The unique T-bit pattern used to represent the set of matching filters for a packet is known as an equivalence class identifier (eqid) [ref 1]. RFC carries out this bit reduction over multiple phases: each phase combines the eqids returned by previous lookup phases and re-applies the same reduction, yielding successively more concise eqid classifications with smaller total bit length. The last phase of this successive merging and reduction yields a final eqid that specifies the flow action for the packet. The basic idea of RFC is illustrated in Figure 2 [ref 1].

We divide the remainder of this paper into four major sections: Architecture Model, Architecture Simulation, Data Analysis and Conclusions. In section two, Architecture Model, we dive into the details of our RFC hardware architecture, explaining its various components and their construction. In section three,

Architecture Simulation, we describe the simulation methods used to experiment with our design and the parameters employed for the data inputs. In section four, Data Analysis, we present our simulation results and our analysis of the design's performance impact. Finally, we conclude in section five with the key findings of our simulation and discuss some limitations of our experiment and their implications.

2. ARCHITECTURE MODEL

The RFC packet classification architecture, depicted in Figure 3, can be summarized into four major parts: Buffer Units, a Dispatching Unit (also the control unit), SIMD Units, and Cache & Memory Units. The Buffer Units, described in section 2.1, store the input/output data streams; these streams include the original packet header fields and the intermediate filter search results from the different RFC phases. The Buffer Units feed the data streams to the Dispatching Unit. The Dispatching Unit, described in section 2.2, acts as a load balancer for all the SIMD units; its major function is to issue the same instructions to all SIMD units, and to divide the header fields or eqids into multiple data chunks and distribute them. The SIMD units (described in section 2.3) are the centerpiece of our design; they perform the necessary RFC bit-reduction process and forward their search results (eqids) to the Output Buffer Unit. The last group is the Cache & Memory Units (described in section 2.4), which serve as storage for instructions and classification filters.

2.1 BUFFER UNIT

The RFC packet classification architecture encompasses two separate buffer units: an Input Data Buffer and an Output Data Buffer. The primary role of the Input Data Buffer is to capture the incoming packet header fields and, if necessary, queue them before passing the data down to the Dispatching Unit. The Input Buffer size is targeted at 512 KB, allowing it to queue a large number of IPv6 packet headers (each IPv6 header contains 352 bits of header fields). The Input Buffer Unit is the equivalent of the original 2^S data source for the initial phase 0 shown in Figure 2. The Output Buffer uses the same hardware, but its capacity is halved, since each reduction phase produces data with a much shorter bit length. The purpose of the Output Data Buffer is to provide intermediate storage for the search results produced by the different RFC phases. Using the example of Figure 2, the Output Data Buffer stores the 2^64, 2^24 and 2^12 search results from phases 1, 2 and 3. Once the Dispatching Unit detects that the current incoming eqid was generated by a final RFC bit-reduction phase, it redirects the 2^T output to the output data link of our hardware module (see Figure 3).

2.2 DISPATCHING UNIT

The Dispatching Unit is the central coordinator for distributing data and issuing instructions. It takes both the original packet header fields from the Input Data Buffer and the intermediate search results from the Output Buffer; these two streams represent the entire data feed for the SIMD architecture. The Dispatching Unit further divides the raw data streams into chunks and sends them to the SIMD units for processing. It can detect the processing condition of each unit and stalls if necessary when all SIMD units reach their capacity. Instruction fetch is done through a link to the Instruction Cache Unit; the Dispatching Unit loads and issues the same instruction to all the SIMD units in the same execution cycle. In our model, instructions are issued in order and must be completed in order. The Dispatching Unit also ensures that the current packet header has been completely processed before loading the next packet header from the Input Buffer. The final phase of a bit reduction is detected when only one eqid arrives from the Output Buffer Unit; once this condition occurs, the Dispatching Unit redirects the

final output to the output data link of the RFC hardware module for the next stage of processing by the network processor.

2.3 SIMD UNITS

The SIMD units are the main work engine of our design; they carry out the filter search operation for each RFC bit-reduction phase and send their results to the Output Buffer Unit. We use seven SIMD units to accommodate the seven header fields of an IPv6 packet header. Since each RFC phase reduces the output data length, there is less input to feed back into the SIMD units, so some of them may become idle after several phases. We considered allowing the idle units to fetch the next packet header in the queue; however, this complicates matters because the Output Buffer Unit would need to be shared, and the Dispatching Unit would also have to discern eqids belonging to different packet headers. An apparent solution is to attach an additional bit identifier to the data chunks, effectively labeling each packet header, but this increases the data length and introduces the extra processing overhead of stripping the labels during filter search; we therefore decided not to address this issue in this design. In our design, all the SIMD units connect to the memory unit and the Output Buffer Unit via a common memory bus. The memory bus is 64 bits wide and operates at a frequency of 200 MHz. It is capable of 2 transfers per cycle, offering a total bandwidth of 3200 megabytes per second. Each SIMD processor is targeted as a 32-bit processor with a clock rate of 100 MHz, capable of processing 4 bytes per cycle. Assuming each packet header is exactly 352 bits (44 bytes), this configuration provides a processing rate of about 63 million packet headers per second, with a total throughput of 2800 megabytes per second, well under the capacity limit of the memory bus. The remaining memory bus bandwidth can be used for loading the necessary filter data set from the main memory unit.
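The arithmetic behind these figures can be checked with a few lines of C. The constants are the ones stated above; megabytes are decimal (10^6 bytes), and the function names are ours, not the paper's.

```c
#include <stdint.h>

/* Bus: 64 bits wide, 200 MHz, 2 transfers per cycle. */
uint64_t bus_bandwidth_mb(void)
{
    return (64 / 8) * 200ULL * 2;        /* 8 B x 200M cycles x 2 = 3200 MB/s */
}

/* SIMD side: 7 units, each moving 4 bytes per cycle at 100 MHz. */
uint64_t simd_throughput_mb(void)
{
    return 7 * 4 * 100ULL;               /* 2800 MB/s */
}

/* Headers per second, assuming a 352-bit (44-byte) IPv6 header. */
uint64_t headers_per_second(void)
{
    return simd_throughput_mb() * 1000000ULL / 44;  /* ~63.6 million */
}
```

The SIMD aggregate (2800 MB/s) stays below the bus limit (3200 MB/s), leaving headroom for filter-table traffic, which is the point made in the text.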

2.4 CACHE AND MEMORY UNITS

The RFC hardware module is equipped with two cache units and a main memory unit that provide instruction buffering and data fetching. The two cache units consist of an instruction cache and a victim cache. Both adopt a fully associative cache structure aimed at reducing the miss rate. Since the instruction stream is shared across all the SIMD units, we believe a common shared instruction cache is the preferred option in this setup. The instruction cache is also linked to a victim cache to add a second level of instruction buffering. The victim cache stores entries purged from the instruction cache due to conflict or capacity misses. The case for adding a victim cache comes from the fact that RFC recursively uses the same routines in multiple phases, so there is a high probability that a recently purged instruction will be used again in the next phase. The intended configuration sets the cache sizes to 8 KB for the i-cache and 4 KB for the v-cache. Data transfer between the instruction cache unit and the Dispatching Unit is done via a memory bus similar to the one linking the SIMD units and the buffer units; however, this bus is enhanced to run at a frequency of 400 MHz, double the previous bus speed. All data transfer from the cache units is facilitated through the Dispatching Unit; there are no direct interactions between the SIMD units and the cache. This is a sharp contrast with the main memory unit, which is directly connected to all the SIMD units via the shared memory bus. The memory unit is engineered this way because it stores all the filter data sets relevant to the search operations performed inside each SIMD unit. Since filter data sets tend to be large - approximately 4 megabytes after compression [ref 1] - a direct connection will likely reduce overhead and improve transfer speed. In our initial design, we contemplated attaching an individual memory unit to each SIMD unit and splitting the large filter data set among the individual memory units; however, this approach would increase the complexity of the Dispatching Unit, which would take on the added responsibility of managing memory exchanges between SIMD units, and would also likely increase the cost of the design. We therefore decided to use a shared memory model for this architecture.

3. ARCHITECTURE SIMULATION

3.1 RULES FRAMING

Because the rules are framed on various fields of the header, we first give a short description of the IPv6 header [ref 3].

IPv6 HEADER

Figure 4: IPv6 Header

The IPv6 header consists of 40 bytes, as follows:

1. Version - IP version, always 6 (4 bits).
2. Traffic class - packet priority (8 bits). Priority values 0-7 are low priority; 8-15 are high priority.
3. Flow label - QoS management (20 bits).
4. Payload length - payload length in bytes (16 bits).
5. Next header - specifies the next encapsulated protocol; the values are compatible with those of the IPv4 protocol field (8 bits).
6. Hop limit - equivalent to the time-to-live field of IPv4 (8 bits).
7. Source and destination addresses - 128 bits each.

To classify a packet based on its header, we must determine which fields to consider and their possible values. Since we are dealing with IPv6 packets, the Version field is always the same, and the Hop limit and Payload length fields have no bearing on packet classification, so these fields need not be considered. The Traffic class, Flow label, Next header and address fields, however, influence the type of packet as per their respective definitions. The values we considered for the various header fields when classifying packets are listed in Table 1.

Table 1: Rule Table

Address Type   Destination address    Flow Label   Protocol   Traffic class   Rule No./Priority
Multicast      FF00 **** **** ****    IntServ      TCP        Low             29
Unicast        4000 **** **** ****    IntServ      TCP        Low             30
Site Local     FEC0 **** **** ****    IntServ      TCP        Low             31
Link local     FE80 **** **** ****    IntServ      TCP        Low             32
Multicast      FF00 **** **** ****    DiffServ     TCP        Low             25
Unicast        4000 **** **** ****    DiffServ     TCP        Low             26
Site Local     FEC0 **** **** ****    DiffServ     TCP        Low             27
Link local     FE80 **** **** ****    DiffServ     TCP        Low             28
Multicast      FF00 **** **** ****    IntServ      TCP        High            13
Unicast        4000 **** **** ****    IntServ      TCP        High            14
Site Local     FEC0 **** **** ****    IntServ      TCP        High            15
Link local     FE80 **** **** ****    IntServ      TCP        High            16
Multicast      FF00 **** **** ****    DiffServ     TCP        High            9
Unicast        4000 **** **** ****    DiffServ     TCP        High            10
Site Local     FEC0 **** **** ****    DiffServ     TCP        High            11
Link local     FE80 **** **** ****    DiffServ     TCP        High            12
Multicast      FF00 **** **** ****    IntServ      UDP        Low             21
Unicast        4000 **** **** ****    IntServ      UDP        Low             22
Site Local     FEC0 **** **** ****    IntServ      UDP        Low             23
Link local     FE80 **** **** ****    IntServ      UDP        Low             24
Multicast      FF00 **** **** ****    DiffServ     UDP        Low             17
Unicast        4000 **** **** ****    DiffServ     UDP        Low             18
Site Local     FEC0 **** **** ****    DiffServ     UDP        Low             19
Link local     FE80 **** **** ****    DiffServ     UDP        Low             20
Multicast      FF00 **** **** ****    IntServ      UDP        High            5
Unicast        4000 **** **** ****    IntServ      UDP        High            6
Site Local     FEC0 **** **** ****    IntServ      UDP        High            7
Link local     FE80 **** **** ****    IntServ      UDP        High            8
Multicast      FF00 **** **** ****    DiffServ     UDP        High            1
Unicast        4000 **** **** ****    DiffServ     UDP        High            2
Site Local     FEC0 **** **** ****    DiffServ     UDP        High            3
Link local     FE80 **** **** ****    DiffServ     UDP        High            4

3.2 SYSTEM ARCHITECTURE

The task of packet classification is accomplished by mapping the S bits of the packet header to the T bits of a CLASS ID. This mapping involves the three phases discussed below.

PHASE 0

The header fields that are valid for packet classification are divided into chunks and supplied as input to phase 0. For example, the first two bytes of the destination address identify the type of the packet; i.e. the packet can be classified into one of four categories: multicast, site

local, link local, and unicast. Hence the first byte of the destination address is supplied to chunk #1 and the second byte of the destination address to chunk #2. Similarly, chunk #3 is supplied with the bits of the Traffic class field, chunk #4 with the flow label, and chunk #5 with the next header. The mapping of the actual input to an equivalence ID (EqID) is done as shown in the figure.

PHASE 1

The outputs of the first two chunks of phase 0 are given as input to chunk #6 to determine the address type of the packet. The outputs of the other three chunks of phase 0 are given as input to chunk #7 of phase 1. The mapping of these inputs to the corresponding EqIDs is done as shown in the figure.

PHASE 2

This is the final stage of the classifier, where the EqIDs of chunks #6 and #7 are combined to produce the 5-bit CLASS ID that identifies the type of packet. The 40-byte packet header has thus been reduced to a CLASS ID of 5 bits.

Figure 5: Implementation of the packet classifier
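The three phases above can be sketched in C. This is an illustrative sketch, not the paper's implementation: the table sizes, the 4-bit eqid widths, the truncation of the flow label to one byte, and all identifier names are our assumptions. The tables would be precomputed from the rule table; here they start zeroed and are filled by the caller.

```c
#include <stdint.h>

/* Fixed 40-byte IPv6 header; field layout per the list in Section 3.1. */
struct ipv6_hdr {
    uint32_t ver_tc_flow;   /* version(4) | traffic class(8) | flow label(20) */
    uint16_t payload_len;
    uint8_t  next_header;
    uint8_t  hop_limit;
    uint8_t  src[16];
    uint8_t  dst[16];
};

static uint8_t t1[256], t2[256];   /* phase 0: dst bytes 0 and 1          */
static uint8_t t3[256];            /* phase 0: traffic class              */
static uint8_t t4[256];            /* phase 0: flow label (truncated)     */
static uint8_t t5[256];            /* phase 0: next header                */
static uint8_t t6[256];            /* phase 1, chunk #6: (eq1, eq2)       */
static uint8_t t7[4096];           /* phase 1, chunk #7: (eq3, eq4, eq5)  */
static uint8_t t8[256];            /* phase 2: (eq6, eq7) -> CLASS ID     */

/* Assumes every intermediate eqid fits in 4 bits for indexing. */
uint8_t classify(const struct ipv6_hdr *h)
{
    uint8_t tc = (h->ver_tc_flow >> 20) & 0xff;     /* traffic class field  */
    uint8_t fl = h->ver_tc_flow & 0xff;             /* low flow-label byte  */
    uint8_t eq1 = t1[h->dst[0]], eq2 = t2[h->dst[1]];         /* phase 0 */
    uint8_t eq3 = t3[tc], eq4 = t4[fl], eq5 = t5[h->next_header];
    uint8_t eq6 = t6[(eq1 << 4) | eq2];                       /* phase 1 */
    uint8_t eq7 = t7[(eq3 << 8) | (eq4 << 4) | eq5];
    return (uint8_t)(t8[(eq6 << 4) | eq7] & 0x1f);  /* phase 2: 5-bit CLASS ID */
}
```

Each phase is one table lookup indexed by the concatenated eqids of the previous phase, which is the bit-reduction pattern of Figure 2.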

3.3 CACHE SETUP

Performance analysis requires varying the environment. To analyze the performance of the packet classifier, we created a cache program with three classes. The first class represents an I-cache, the second a V-cache, and the third an I-cache connected to a V-cache. The first and second classes work in the conventional way: if the instruction is found in the I-cache it is a hit; otherwise it is marked as a miss and goes to the victim cache. In the third class, where the I-cache is connected to the V-cache, if the instruction is not found in the I-cache then the V-cache is checked; if it exists there, it still counts as a hit, and only if it is in neither cache does it count as a miss. Whenever the cache is full, the oldest entry is stored into the V-cache. We analyzed our packet classifier code for various cache sizes. The V-cache is always adjusted to be half the size of the I-cache and supports the same replacement strategies as the I-cache (FIFO and random).

4. DATA ANALYSIS

From the C program simulation of the packet classifier module we observed results that helped us evaluate the performance of our proposed architecture. A single-threaded program performing this kind of packet classification was run on the SimpleScalar simulator, and the total simulation time was found to be 112 seconds. A similar C program that accepts the packet header as input, separates it into different chunks of data according to the specifications, and produces the phase 0 output took a total simulation time of 83 seconds. These two steps - segregating the incoming packet into chunks and generating the EqIDs (the output of phase 0) - are the most expensive part of the program's execution.
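The third cache class described in Section 3.3 can be modeled roughly as below. This is a minimal sketch under our own assumptions: fully associative lookup, FIFO replacement only, caller-chosen sizes, and hypothetical names; the paper's simulation code is not reproduced here.

```c
#include <stdint.h>

#define MAX_ENTRIES 512

/* A fully associative cache with FIFO replacement. */
struct fifo_cache {
    uint32_t tag[MAX_ENTRIES];
    int      valid[MAX_ENTRIES];
    int      size, head;            /* head: next FIFO replacement slot */
};

static int cache_find(const struct fifo_cache *c, uint32_t addr)
{
    for (int i = 0; i < c->size; i++)
        if (c->valid[i] && c->tag[i] == addr) return 1;
    return 0;
}

/* Insert addr; any overwritten (evicted) tag is reported via *victim. */
static void cache_insert(struct fifo_cache *c, uint32_t addr, int64_t *victim)
{
    *victim = c->valid[c->head] ? (int64_t)c->tag[c->head] : -1;
    c->tag[c->head] = addr;
    c->valid[c->head] = 1;
    c->head = (c->head + 1) % c->size;
}

/* Third class: a hit in either cache counts as a hit; on a miss the
 * line enters the I-cache and the FIFO victim, if any, falls into the
 * victim cache. */
int icache_access(struct fifo_cache *ic, struct fifo_cache *vc, uint32_t addr)
{
    if (cache_find(ic, addr) || cache_find(vc, addr))
        return 1;                               /* hit */
    int64_t evicted, dummy;
    cache_insert(ic, addr, &evicted);
    if (evicted >= 0)
        cache_insert(vc, (uint32_t)evicted, &dummy);
    return 0;                                   /* miss */
}
```

With the V-cache set to half the I-cache size, an instruction recently pushed out of the I-cache still hits, which is exactly the RFC reuse pattern the victim cache is meant to exploit.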
In a multithreaded implementation, phase 0 takes the most time, since it involves comparing the input header with the required parameters and generating the intermediate EqIDs. Because the total execution time of a multithreaded program equals the execution time of its longest phase, the execution time of a multithreaded implementation of the same program will be approximately 83 seconds. However, the overhead of switching between threads reduces the performance of the entire system: although the execution time is lower in this case, overall system performance is brought down. Our single-thread and multi-thread timing measurements gave us a clue as to how an RFC hardware scheme will perform in a SIMD environment. We can observe that executing the searches in parallel across SIMD hardware is clearly beneficial to RFC-based packet classification. In each phase, a single-threaded program can only process one chunk of data at a time, and the total time is the aggregate of the time spent processing all the headers. This is drastically different from SIMD execution, where for each phase the worst case is the longest execution time on any one SIMD unit, with the other units waiting for the last executing SIMD to finish; the total execution time per phase is thus only the longest running time on one particular unit. The amount of improvement depends on how the data chunks are split among the SIMD units; similar to a pipelining effect, we anticipate the time to be roughly 1/N of the non-SIMD implementation, assuming N header fields.
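The serial-versus-SIMD timing argument above reduces to "sum versus max" per phase, which a toy model makes concrete (cycle counts here are made up; with equal chunks the SIMD time is exactly 1/N of the serial time):

```c
#include <stdint.h>

/* Serially, a phase costs the sum of the per-chunk processing times. */
uint32_t serial_phase_cycles(const uint32_t *chunk_cycles, int n)
{
    uint32_t total = 0;
    for (int i = 0; i < n; i++)
        total += chunk_cycles[i];
    return total;
}

/* On N SIMD units, the phase costs only the slowest chunk's time. */
uint32_t simd_phase_cycles(const uint32_t *chunk_cycles, int n)
{
    uint32_t worst = 0;
    for (int i = 0; i < n; i++)
        if (chunk_cycles[i] > worst)
            worst = chunk_cycles[i];
    return worst;
}
```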

An analysis of the miss rate for various cache sizes and types is tabulated as follows:

I-cache without v-cache (random)
No. of entries   Miss rate
64 entries       61%
128 entries      24%
256 entries      12%
512 entries      12%

I-cache with v-cache (random)
No. of entries   Miss rate
64 entries       33%
128 entries      12%
256 entries      12%
512 entries      12%

I-cache without v-cache (FIFO)
No. of entries   Miss rate
64 entries       41%
128 entries      37%
256 entries      12%
512 entries      12%

I-cache with v-cache (FIFO)
No. of entries   Miss rate
64 entries       38%
128 entries      12%
256 entries      12%
512 entries      12%

Figure 7: Graph showing the impact on miss rate as cache size increases, with varying replacement policy

From the tabulated data we find that the miss rate falls as cache size increases, and that the v-cache plays a smaller and smaller role in bringing down the miss rate as the cache size approaches 4 KB and 8 KB, the sizes we targeted for the i-cache. Our results indicate that the v-cache plays a much smaller role than we originally anticipated, and in all likelihood it can be safely removed from the overall design without any impact on performance if we choose a cache size around 4 KB to 8 KB. Based on our observations, we can speculate that the RFC algorithm uses a medium-sized body of instructions: an I-cache of 4 KB or 8 KB is sufficient to accommodate its cache needs, and the addition of a v-cache brings no further benefit. However, if a 4 KB or 8 KB cache would significantly increase the overall cost of the design and we must work with a small amount of cache hardware, then the addition of a v-cache is clearly justified and will certainly improve the cache miss rate. One interesting note from the experiment is that the miss rates in phase 0 (the initial stage) are much higher than those of phases 1 and 2. This illustrates that when the cache starts out empty, phase 0 suffers more misses, but as we proceed to the subsequent phases the cache contents get filled and the miss rate improves significantly.

5. CONCLUSIONS

Although this project has helped us gauge the performance improvement from the various simulation results, this remains speculative since we did not simulate a fully functioning version of the SIMD model. We did not take into consideration the cost of constructing a 7-unit SIMD design; in practice this cost might be too high to construct such a unit.
We did not experiment with a large filter table, which would show the benefit of parallel processing more clearly. A pipelined implementation of this architectural design could potentially yield a significant further increase in performance. In sum, the data analysis above suggests that the SIMD approach to this type of packet classification in IPv6 networks compares favorably with the other approaches, and we are hopeful that our proposed model will work well in a real-world implementation scenario.

References

[1] Pankaj Gupta and Nick McKeown, "Algorithms for Packet Classification", IEEE Network Special Issue, March/April 2001, vol. 15, no. 2.

[2] David E. Taylor, Andreas Herkersdorf, Andreas Doring and Gero Dittmann, "Robust Header Compression (ROHC) in Next-Generation Network Processors", IEEE/ACM Transactions on Networking, August 2005, vol. 13, no. 4.

[3] RFC, Internet Protocol, Version 6 (IPv6) Specification.


More information

Lixia Zhang M. I. T. Laboratory for Computer Science December 1985

Lixia Zhang M. I. T. Laboratory for Computer Science December 1985 Network Working Group Request for Comments: 969 David D. Clark Mark L. Lambert Lixia Zhang M. I. T. Laboratory for Computer Science December 1985 1. STATUS OF THIS MEMO This RFC suggests a proposed protocol

More information

CS6401- Operating System UNIT-III STORAGE MANAGEMENT

CS6401- Operating System UNIT-III STORAGE MANAGEMENT UNIT-III STORAGE MANAGEMENT Memory Management: Background In general, to rum a program, it must be brought into memory. Input queue collection of processes on the disk that are waiting to be brought into

More information

Topics. Computer Organization CS Improving Performance. Opportunity for (Easy) Points. Three Generic Data Hazards

Topics. Computer Organization CS Improving Performance. Opportunity for (Easy) Points. Three Generic Data Hazards Computer Organization CS 231-01 Improving Performance Dr. William H. Robinson November 8, 2004 Topics Money's only important when you don't have any. Sting Cache Scoreboarding http://eecs.vanderbilt.edu/courses/cs231/

More information

Router Design: Table Lookups and Packet Scheduling EECS 122: Lecture 13

Router Design: Table Lookups and Packet Scheduling EECS 122: Lecture 13 Router Design: Table Lookups and Packet Scheduling EECS 122: Lecture 13 Department of Electrical Engineering and Computer Sciences University of California Berkeley Review: Switch Architectures Input Queued

More information

Quality of Service in the Internet

Quality of Service in the Internet Quality of Service in the Internet Problem today: IP is packet switched, therefore no guarantees on a transmission is given (throughput, transmission delay, ): the Internet transmits data Best Effort But:

More information

Quality of Service in the Internet. QoS Parameters. Keeping the QoS. Leaky Bucket Algorithm

Quality of Service in the Internet. QoS Parameters. Keeping the QoS. Leaky Bucket Algorithm Quality of Service in the Internet Problem today: IP is packet switched, therefore no guarantees on a transmission is given (throughput, transmission delay, ): the Internet transmits data Best Effort But:

More information

Foundations of Python

Foundations of Python Foundations of Python Network Programming The comprehensive guide to building network applications with Python Second Edition Brandon Rhodes John Goerzen Apress Contents Contents at a Glance About the

More information

CS519: Computer Networks. Lecture 5, Part 5: Mar 31, 2004 Queuing and QoS

CS519: Computer Networks. Lecture 5, Part 5: Mar 31, 2004 Queuing and QoS : Computer Networks Lecture 5, Part 5: Mar 31, 2004 Queuing and QoS Ways to deal with congestion Host-centric versus router-centric Reservation-based versus feedback-based Window-based versus rate-based

More information

Lec 11 How to improve cache performance

Lec 11 How to improve cache performance Lec 11 How to improve cache performance How to Improve Cache Performance? AMAT = HitTime + MissRate MissPenalty 1. Reduce the time to hit in the cache.--4 small and simple caches, avoiding address translation,

More information

IPv6: An Introduction

IPv6: An Introduction Outline IPv6: An Introduction Dheeraj Sanghi Department of Computer Science and Engineering Indian Institute of Technology Kanpur dheeraj@iitk.ac.in http://www.cse.iitk.ac.in/users/dheeraj Problems with

More information

Operating Systems Memory Management. Mathieu Delalandre University of Tours, Tours city, France

Operating Systems Memory Management. Mathieu Delalandre University of Tours, Tours city, France Operating Systems Memory Management Mathieu Delalandre University of Tours, Tours city, France mathieu.delalandre@univ-tours.fr 1 Operating Systems Memory Management 1. Introduction 2. Contiguous memory

More information

Memory Hierarchy 3 Cs and 6 Ways to Reduce Misses

Memory Hierarchy 3 Cs and 6 Ways to Reduce Misses Memory Hierarchy 3 Cs and 6 Ways to Reduce Misses Soner Onder Michigan Technological University Randy Katz & David A. Patterson University of California, Berkeley Four Questions for Memory Hierarchy Designers

More information

Problem Statement. Algorithm MinDPQ (contd.) Algorithm MinDPQ. Summary of Algorithm MinDPQ. Algorithm MinDPQ: Experimental Results.

Problem Statement. Algorithm MinDPQ (contd.) Algorithm MinDPQ. Summary of Algorithm MinDPQ. Algorithm MinDPQ: Experimental Results. Algorithms for Routing Lookups and Packet Classification October 3, 2000 High Level Outline Part I. Routing Lookups - Two lookup algorithms Part II. Packet Classification - One classification algorithm

More information

Portland State University ECE 588/688. Cray-1 and Cray T3E

Portland State University ECE 588/688. Cray-1 and Cray T3E Portland State University ECE 588/688 Cray-1 and Cray T3E Copyright by Alaa Alameldeen 2014 Cray-1 A successful Vector processor from the 1970s Vector instructions are examples of SIMD Contains vector

More information

Configuring WMT Streaming Media Services on Standalone Content Engines

Configuring WMT Streaming Media Services on Standalone Content Engines CHAPTER 9 Configuring WMT Streaming Media Services on Standalone Content Engines This chapter provides an overview of the Windows Media Technologies (WMT) streaming and caching services, and describes

More information

Network Performance: Queuing

Network Performance: Queuing Network Performance: Queuing EE 122: Intro to Communication Networks Fall 2007 (WF 4-5:30 in Cory 277) Vern Paxson TAs: Lisa Fowler, Daniel Killebrew & Jorge Ortiz http://inst.eecs.berkeley.edu/~ee122/

More information

Bridging and Switching Basics

Bridging and Switching Basics CHAPTER 4 Bridging and Switching Basics This chapter introduces the technologies employed in devices loosely referred to as bridges and switches. Topics summarized here include general link-layer device

More information

Configuring QoS. Understanding QoS CHAPTER

Configuring QoS. Understanding QoS CHAPTER 29 CHAPTER This chapter describes how to configure quality of service (QoS) by using automatic QoS (auto-qos) commands or by using standard QoS commands on the Catalyst 3750 switch. With QoS, you can provide

More information

Differentiated Services

Differentiated Services 1 Differentiated Services QoS Problem Diffserv Architecture Per hop behaviors 2 Problem: QoS Need a mechanism for QoS in the Internet Issues to be resolved: Indication of desired service Definition of

More information

HP 3600 v2 Switch Series

HP 3600 v2 Switch Series HP 3600 v2 Switch Series ACL and QoS Configuration Guide Part number: 5998-2354 Software version: Release 2101 Document version: 6W101-20130930 Legal and notice information Copyright 2013 Hewlett-Packard

More information

FPX Architecture for a Dynamically Extensible Router

FPX Architecture for a Dynamically Extensible Router FPX Architecture for a Dynamically Extensible Router Alex Chandra, Yuhua Chen, John Lockwood, Sarang Dharmapurikar, Wenjing Tang, David Taylor, Jon Turner http://www.arl.wustl.edu/arl Dynamically Extensible

More information

What is an L3 Master Device?

What is an L3 Master Device? What is an L3 Master Device? David Ahern Cumulus Networks Mountain View, CA, USA dsa@cumulusnetworks.com Abstract The L3 Master Device (l3mdev) concept was introduced to the Linux networking stack in v4.4.

More information

EITF20: Computer Architecture Part4.1.1: Cache - 2

EITF20: Computer Architecture Part4.1.1: Cache - 2 EITF20: Computer Architecture Part4.1.1: Cache - 2 Liang Liu liang.liu@eit.lth.se 1 Outline Reiteration Cache performance optimization Bandwidth increase Reduce hit time Reduce miss penalty Reduce miss

More information

Grandstream Networks, Inc. GWN7000 QoS - VoIP Traffic Management

Grandstream Networks, Inc. GWN7000 QoS - VoIP Traffic Management Grandstream Networks, Inc. GWN7000 QoS - VoIP Traffic Management Table of Contents INTRODUCTION... 4 DSCP CLASSIFICATION... 5 QUALITY OF SERVICE ON GWN7000... 6 USING QOS TO PRIORITIZE VOIP TRAFFIC...

More information

Information about Network Security with ACLs

Information about Network Security with ACLs This chapter describes how to configure network security on the switch by using access control lists (ACLs), which in commands and tables are also referred to as access lists. Finding Feature Information,

More information

EE 122: Differentiated Services

EE 122: Differentiated Services What is the Problem? EE 122: Differentiated Services Ion Stoica Nov 18, 2002 Goal: provide support for wide variety of applications: - Interactive TV, IP telephony, on-line gamming (distributed simulations),

More information

Sections Describing Standard Software Features

Sections Describing Standard Software Features 27 CHAPTER This chapter describes how to configure quality of service (QoS) by using automatic-qos (auto-qos) commands or by using standard QoS commands. With QoS, you can give preferential treatment to

More information

Case study: Performance-efficient Implementation of Robust Header Compression (ROHC) using an Application-Specific Processor

Case study: Performance-efficient Implementation of Robust Header Compression (ROHC) using an Application-Specific Processor Case study: Performance-efficient Implementation of Robust Header Compression (ROHC) using an Application-Specific Processor Gert Goossens, Patrick Verbist, Erik Brockmeyer, Luc De Coster Synopsys 1 Agenda

More information

Managing and Securing Computer Networks. Guy Leduc. Chapter 2: Software-Defined Networks (SDN) Chapter 2. Chapter goals:

Managing and Securing Computer Networks. Guy Leduc. Chapter 2: Software-Defined Networks (SDN) Chapter 2. Chapter goals: Managing and Securing Computer Networks Guy Leduc Chapter 2: Software-Defined Networks (SDN) Mainly based on: Computer Networks and Internets, 6 th Edition Douglas E. Comer Pearson Education, 2015 (Chapter

More information

Differentiated Services

Differentiated Services Diff-Serv 1 Differentiated Services QoS Problem Diffserv Architecture Per hop behaviors Diff-Serv 2 Problem: QoS Need a mechanism for QoS in the Internet Issues to be resolved: Indication of desired service

More information

Configuring RTP Header Compression

Configuring RTP Header Compression Header compression is a mechanism that compresses the IP header in a packet before the packet is transmitted. Header compression reduces network overhead and speeds up the transmission of either Real-Time

More information

DiffServ over MPLS: Tuning QOS parameters for Converged Traffic using Linux Traffic Control

DiffServ over MPLS: Tuning QOS parameters for Converged Traffic using Linux Traffic Control 1 DiffServ over MPLS: Tuning QOS parameters for Converged Traffic using Linux Traffic Control Sundeep.B.Singh and Girish.P.Saraph Indian Institute of Technology Bombay, Powai, Mumbai-400076, India Abstract

More information

Managing Caching Performance and Differentiated Services

Managing Caching Performance and Differentiated Services CHAPTER 10 Managing Caching Performance and Differentiated Services This chapter explains how to configure TCP stack parameters for increased performance ant throughput and how to configure Type of Service

More information

CS433 Homework 2 (Chapter 3)

CS433 Homework 2 (Chapter 3) CS433 Homework 2 (Chapter 3) Assigned on 9/19/2017 Due in class on 10/5/2017 Instructions: 1. Please write your name and NetID clearly on the first page. 2. Refer to the course fact sheet for policies

More information

MULTIPROCESSORS AND THREAD-LEVEL PARALLELISM. B649 Parallel Architectures and Programming

MULTIPROCESSORS AND THREAD-LEVEL PARALLELISM. B649 Parallel Architectures and Programming MULTIPROCESSORS AND THREAD-LEVEL PARALLELISM B649 Parallel Architectures and Programming Motivation behind Multiprocessors Limitations of ILP (as already discussed) Growing interest in servers and server-performance

More information

Network Performance: Queuing

Network Performance: Queuing Network Performance: Queuing EE 122: Intro to Communication Networks Fall 2006 (MW 4-5:30 in Donner 155) Vern Paxson TAs: Dilip Antony Joseph and Sukun Kim http://inst.eecs.berkeley.edu/~ee122/ Materials

More information

Implementation of a leaky bucket module for simulations in NS-3

Implementation of a leaky bucket module for simulations in NS-3 Implementation of a leaky bucket module for simulations in NS-3 P. Baltzis 2, C. Bouras 1,2, K. Stamos 1,2,3, G. Zaoudis 1,2 1 Computer Technology Institute and Press Diophantus Patra, Greece 2 Computer

More information

Decision Forest: A Scalable Architecture for Flexible Flow Matching on FPGA

Decision Forest: A Scalable Architecture for Flexible Flow Matching on FPGA Decision Forest: A Scalable Architecture for Flexible Flow Matching on FPGA Weirong Jiang, Viktor K. Prasanna University of Southern California Norio Yamagaki NEC Corporation September 1, 2010 Outline

More information

Improving QOS in IP Networks. Principles for QOS Guarantees

Improving QOS in IP Networks. Principles for QOS Guarantees Improving QOS in IP Networks Thus far: making the best of best effort Future: next generation Internet with QoS guarantees RSVP: signaling for resource reservations Differentiated Services: differential

More information

Before configuring standard QoS, you must have a thorough understanding of these items:

Before configuring standard QoS, you must have a thorough understanding of these items: Finding Feature Information, page 1 Prerequisites for QoS, page 1 QoS Components, page 2 QoS Terminology, page 3 Information About QoS, page 3 Restrictions for QoS on Wired Targets, page 41 Restrictions

More information

RISC Principles. Introduction

RISC Principles. Introduction 3 RISC Principles In the last chapter, we presented many details on the processor design space as well as the CISC and RISC architectures. It is time we consolidated our discussion to give details of RISC

More information

CCNA Exploration1 Chapter 7: OSI Data Link Layer

CCNA Exploration1 Chapter 7: OSI Data Link Layer CCNA Exploration1 Chapter 7: OSI Data Link Layer LOCAL CISCO ACADEMY ELSYS TU INSTRUCTOR: STELA STEFANOVA 1 Explain the role of Data Link layer protocols in data transmission; Objectives Describe how the

More information

Networking: Network layer

Networking: Network layer control Networking: Network layer Comp Sci 3600 Security Outline control 1 2 control 3 4 5 Network layer control Outline control 1 2 control 3 4 5 Network layer purpose: control Role of the network layer

More information

A Comparative Performance Evaluation of Different Application Domains on Server Processor Architectures

A Comparative Performance Evaluation of Different Application Domains on Server Processor Architectures A Comparative Performance Evaluation of Different Application Domains on Server Processor Architectures W.M. Roshan Weerasuriya and D.N. Ranasinghe University of Colombo School of Computing A Comparative

More information

EXAM 1 SOLUTIONS. Midterm Exam. ECE 741 Advanced Computer Architecture, Spring Instructor: Onur Mutlu

EXAM 1 SOLUTIONS. Midterm Exam. ECE 741 Advanced Computer Architecture, Spring Instructor: Onur Mutlu Midterm Exam ECE 741 Advanced Computer Architecture, Spring 2009 Instructor: Onur Mutlu TAs: Michael Papamichael, Theodoros Strigkos, Evangelos Vlachos February 25, 2009 EXAM 1 SOLUTIONS Problem Points

More information

- Hubs vs. Switches vs. Routers -

- Hubs vs. Switches vs. Routers - 1 Layered Communication - Hubs vs. Switches vs. Routers - Network communication models are generally organized into layers. The OSI model specifically consists of seven layers, with each layer representing

More information

The Memory Hierarchy & Cache Review of Memory Hierarchy & Cache Basics (from 350):

The Memory Hierarchy & Cache Review of Memory Hierarchy & Cache Basics (from 350): The Memory Hierarchy & Cache Review of Memory Hierarchy & Cache Basics (from 350): Motivation for The Memory Hierarchy: { CPU/Memory Performance Gap The Principle Of Locality Cache $$$$$ Cache Basics:

More information

6 - Main Memory EECE 315 (101) ECE UBC 2013 W2

6 - Main Memory EECE 315 (101) ECE UBC 2013 W2 6 - Main Memory EECE 315 (101) ECE UBC 2013 W2 Acknowledgement: This set of slides is partly based on the PPTs provided by the Wiley s companion website (including textbook images, when not explicitly

More information

Evaluation of OpenROUTE Networks Bandwidth Reservation System and Priority Queueing

Evaluation of OpenROUTE Networks Bandwidth Reservation System and Priority Queueing Naval Research Laboratory Washington, DC 20375-5320 Evaluation of OpenROUTE Networks Bandwidth Reservation System and Priority Queueing VINCENT D. PARK JOSEPH P. MACKER STEVE J. SOLLON Protocol Engineering

More information

Classification Steady-State Cache Misses: Techniques To Improve Cache Performance:

Classification Steady-State Cache Misses: Techniques To Improve Cache Performance: #1 Lec # 9 Winter 2003 1-21-2004 Classification Steady-State Cache Misses: The Three C s of cache Misses: Compulsory Misses Capacity Misses Conflict Misses Techniques To Improve Cache Performance: Reduce

More information

QoS on Low Bandwidth High Delay Links. Prakash Shende Planning & Engg. Team Data Network Reliance Infocomm

QoS on Low Bandwidth High Delay Links. Prakash Shende Planning & Engg. Team Data Network Reliance Infocomm QoS on Low Bandwidth High Delay Links Prakash Shende Planning & Engg. Team Data Network Reliance Infocomm Agenda QoS Some Basics What are the characteristics of High Delay Low Bandwidth link What factors

More information

Operating Systems Unit 6. Memory Management

Operating Systems Unit 6. Memory Management Unit 6 Memory Management Structure 6.1 Introduction Objectives 6.2 Logical versus Physical Address Space 6.3 Swapping 6.4 Contiguous Allocation Single partition Allocation Multiple Partition Allocation

More information

Chapter 8: Virtual Memory. Operating System Concepts

Chapter 8: Virtual Memory. Operating System Concepts Chapter 8: Virtual Memory Silberschatz, Galvin and Gagne 2009 Chapter 8: Virtual Memory Background Demand Paging Copy-on-Write Page Replacement Allocation of Frames Thrashing Memory-Mapped Files Allocating

More information

Creating an Encoding Independent Music Genre. Classifier

Creating an Encoding Independent Music Genre. Classifier Creating an Encoding Independent Music Genre Classifier John-Ashton Allen, Sam Oluwalana December 16, 2011 Abstract The field of machine learning is characterized by complex problems that are solved by

More information

Reducing Miss Penalty: Read Priority over Write on Miss. Improving Cache Performance. Non-blocking Caches to reduce stalls on misses

Reducing Miss Penalty: Read Priority over Write on Miss. Improving Cache Performance. Non-blocking Caches to reduce stalls on misses Improving Cache Performance 1. Reduce the miss rate, 2. Reduce the miss penalty, or 3. Reduce the time to hit in the. Reducing Miss Penalty: Read Priority over Write on Miss Write buffers may offer RAW

More information

Optimizing Bandwidth Utilization in Packet Based Telemetry Systems

Optimizing Bandwidth Utilization in Packet Based Telemetry Systems Optimizing Bandwidth Utilization in Packet Based Telemetry Systems Jeffrey R. Kalibjian Lawrence Livermore National Laboratory Keywords bandwidth utilization, intelligent telemetry processing Abstract

More information

Chapter 8 & Chapter 9 Main Memory & Virtual Memory

Chapter 8 & Chapter 9 Main Memory & Virtual Memory Chapter 8 & Chapter 9 Main Memory & Virtual Memory 1. Various ways of organizing memory hardware. 2. Memory-management techniques: 1. Paging 2. Segmentation. Introduction Memory consists of a large array

More information

Internet Protocol version 6

Internet Protocol version 6 Internet Protocol version 6 Claudio Cicconetti International Master on Communication Networks Engineering 2006/2007 IP version 6 The Internet is growing extremely rapidly. The

More information

LARGE SCALE IP ROUTING LECTURE BY SEBASTIAN GRAF

LARGE SCALE IP ROUTING LECTURE BY SEBASTIAN GRAF LARGE SCALE IP ROUTING LECTURE BY SEBASTIAN GRAF MODULE 05 MULTIPROTOCOL LABEL SWITCHING (MPLS) AND LABEL DISTRIBUTION PROTOCOL (LDP) 1 by Xantaro IP Routing In IP networks, each router makes an independent

More information

Advanced Computer Architecture

Advanced Computer Architecture Advanced Computer Architecture Chapter 1 Introduction into the Sequential and Pipeline Instruction Execution Martin Milata What is a Processors Architecture Instruction Set Architecture (ISA) Describes

More information

Packet Switching - Asynchronous Transfer Mode. Introduction. Areas for Discussion. 3.3 Cell Switching (ATM) ATM - Introduction

Packet Switching - Asynchronous Transfer Mode. Introduction. Areas for Discussion. 3.3 Cell Switching (ATM) ATM - Introduction Areas for Discussion Packet Switching - Asynchronous Transfer Mode 3.3 Cell Switching (ATM) Introduction Cells Joseph Spring School of Computer Science BSc - Computer Network Protocols & Arch s Based on

More information

Multi-gigabit Switching and Routing

Multi-gigabit Switching and Routing Multi-gigabit Switching and Routing Gignet 97 Europe: June 12, 1997. Nick McKeown Assistant Professor of Electrical Engineering and Computer Science nickm@ee.stanford.edu http://ee.stanford.edu/~nickm

More information

Sections Describing Standard Software Features

Sections Describing Standard Software Features 30 CHAPTER This chapter describes how to configure quality of service (QoS) by using automatic-qos (auto-qos) commands or by using standard QoS commands. With QoS, you can give preferential treatment to

More information

UDP-Lite Enhancement Through Checksum Protection

UDP-Lite Enhancement Through Checksum Protection IOP Conference Series: Materials Science and Engineering PAPER OPEN ACCESS UDP-Lite Enhancement Through Checksum Protection To cite this article: Suherman et al 2017 IOP Conf. Ser.: Mater. Sci. Eng. 180

More information

Implementation and Evaluation of Prefetching in the Intel Paragon Parallel File System

Implementation and Evaluation of Prefetching in the Intel Paragon Parallel File System Implementation and Evaluation of Prefetching in the Intel Paragon Parallel File System Meenakshi Arunachalam Alok Choudhary Brad Rullman y ECE and CIS Link Hall Syracuse University Syracuse, NY 344 E-mail:

More information

CC-SCTP: Chunk Checksum of SCTP for Enhancement of Throughput in Wireless Network Environments

CC-SCTP: Chunk Checksum of SCTP for Enhancement of Throughput in Wireless Network Environments CC-SCTP: Chunk Checksum of SCTP for Enhancement of Throughput in Wireless Network Environments Stream Control Transmission Protocol (SCTP) uses the 32-bit checksum in the common header, by which a corrupted

More information

II. Principles of Computer Communications Network and Transport Layer

II. Principles of Computer Communications Network and Transport Layer II. Principles of Computer Communications Network and Transport Layer A. Internet Protocol (IP) IPv4 Header An IP datagram consists of a header part and a text part. The header has a 20-byte fixed part

More information

Toward a Reliable Data Transport Architecture for Optical Burst-Switched Networks

Toward a Reliable Data Transport Architecture for Optical Burst-Switched Networks Toward a Reliable Data Transport Architecture for Optical Burst-Switched Networks Dr. Vinod Vokkarane Assistant Professor, Computer and Information Science Co-Director, Advanced Computer Networks Lab University

More information

COMP211 Chapter 4 Network Layer: The Data Plane

COMP211 Chapter 4 Network Layer: The Data Plane COMP211 Chapter 4 Network Layer: The Data Plane All material copyright 1996-2016 J.F Kurose and K.W. Ross, All Rights Reserved Computer Networking: A Top Down Approach 7 th edition Jim Kurose, Keith Ross

More information

Table of Contents. Cisco How NAT Works

Table of Contents. Cisco How NAT Works Table of Contents How NAT Works...1 This document contains Flash animation...1 Introduction...1 Behind the Mask...2 Dynamic NAT and Overloading Examples...5 Security and Administration...7 Multi Homing...9

More information

Configuring QoS CHAPTER

Configuring QoS CHAPTER CHAPTER 36 This chapter describes how to configure quality of service (QoS) by using automatic QoS (auto-qos) commands or by using standard QoS commands on the Catalyst 3750 switch. With QoS, you can provide

More information

Virtual Memory. CSCI 315 Operating Systems Design Department of Computer Science

Virtual Memory. CSCI 315 Operating Systems Design Department of Computer Science Virtual Memory CSCI 315 Operating Systems Design Department of Computer Science Notice: The slides for this lecture have been largely based on those from an earlier edition of the course text Operating

More information

Configuring IP Multicast Routing

Configuring IP Multicast Routing 34 CHAPTER This chapter describes how to configure IP multicast routing on the Cisco ME 3400 Ethernet Access switch. IP multicasting is a more efficient way to use network resources, especially for bandwidth-intensive

More information

Networking Quality of service

Networking Quality of service System i Networking Quality of service Version 6 Release 1 System i Networking Quality of service Version 6 Release 1 Note Before using this information and the product it supports, read the information

More information

INSTITUTO SUPERIOR TÉCNICO. Architectures for Embedded Computing

INSTITUTO SUPERIOR TÉCNICO. Architectures for Embedded Computing UNIVERSIDADE TÉCNICA DE LISBOA INSTITUTO SUPERIOR TÉCNICO Departamento de Engenharia Informática Architectures for Embedded Computing MEIC-A, MEIC-T, MERC Lecture Slides Version 3.0 - English Lecture 14

More information

Measuring MPLS overhead

Measuring MPLS overhead Measuring MPLS overhead A. Pescapè +*, S. P. Romano +, M. Esposito +*, S. Avallone +, G. Ventre +* * ITEM - Laboratorio Nazionale CINI per l Informatica e la Telematica Multimediali Via Diocleziano, 328

More information

Network Working Group. Category: Standards Track BBN September 1997

Network Working Group. Category: Standards Track BBN September 1997 Network Working Group Request for Comments: 2207 Category: Standards Track L. Berger FORE Systems T. O Malley BBN September 1997 RSVP Extensions for IPSEC Data Flows Status of this Memo This document specifies

More information