On the Efficient Implementation of Pipelined Heaps for Network Processing


Hao Wang, Bill Lin
University of California, San Diego, La Jolla, CA
{wanghao,billlin}@ucsd.edu

Abstract—Priority queues are used in many network processing applications, including sophisticated per-flow scheduling for providing advanced Quality-of-Service (QoS) guarantees, fast packet buffer memory management, and exact maintenance of statistics counters for real-time network measurements. In all these applications, the priority queues must operate at very high speeds, e.g. at 40 Gbps rates and beyond. One widely used data structure for implementing priority queues is the heap. However, the logarithmic time complexity of heap operations is often too slow for increasingly fast line rates. To achieve constant time complexity, the pipelined heap structure has been proposed. In this paper, we describe new architectural techniques for the efficient implementation of pipelined heaps. In particular, we focus on aggressive memory management and pipelining techniques.

I. INTRODUCTION

Priority queues are used in many network processing applications. For example, per-flow queueing with sophisticated scheduling based on Weighted Fair Queueing (WFQ) service disciplines has been proposed as an approach for providing advanced Quality-of-Service (QoS) guarantees [1], [2]. In advanced WFQ scheduling algorithms, packets are timestamped with a required departure time necessary to meet QoS guarantees. Priority queues are used in these algorithms to ensure that packets are serviced in the order of their departure deadlines. Another application is in the management of per-flow queues using DRAMs. To achieve aggressive line rates, random access to per-flow queues is necessary at high speeds. Therefore, portions of the queues, namely their heads and tails, must be maintained in SRAM to achieve line rate [3].
However, to provide bulk storage, the middle portion of the queues must be maintained in external DRAMs. One approach to managing the migration of packets between SRAMs and DRAMs is based on replenishing the heads of the queues that are shortest. Priority queues are again used to provide this real-time sorting. Finally, priority queues have been used for the maintenance of exact statistics counters for real-time network measurements [4], [5]. In all these applications, the priority queues must operate at very high speeds, e.g. at 40 Gbps rates and beyond. One widely used data structure for implementing priority queues is the heap. However, operations on a heap require O(log N) time, which is still too slow for fast line rates. To achieve O(1) time complexity, we proposed in our earlier work the pipelined heap structure [6]. In the pipelined heap, each layer of the heap corresponds to a pipeline stage. Unlike a conventional binary heap, the operations on a pipelined heap all work from the top of the heap downwards, progressing through the pipeline stages in one direction. Each pipeline stage has an associated memory structure for storing the nodes in the corresponding layer of the heap. However, since the number of nodes in a layer of the heap is twice the number of nodes in the immediately preceding layer, there is an enormous disparity between the memory sizes at the top layers of the heap and those at the bottom layers, making it difficult to provide any regularity in the memory structures. Moreover, the top layers of the heap are too small for structured memory. In this paper, we explore more efficient memory organizations for pipelined heaps. In addition, we explore implementation details of the pipelined heap to achieve more aggressive pipelining. The remainder of this paper is organized as follows.
In Section II, we provide an overview of the pipelined heap structure and its operations. In Section III, we show how the pipelined heap can be implemented so that a new operation can be initiated on every clock cycle to support very fast line rates. In Section IV, we present our ideas on efficient memory organization and management of pipelined heaps. We first consider the case of a forest of pipelined heaps. We then show how a similar idea can be applied to the case of a single pipelined heap by partitioning the heap into sub-heaps. Finally, we conclude in Section V.

II. THE PIPELINED HEAP

In this section, we summarize how a pipelined heap works. The reader is referred to [6] for more details. The basic pipelined heap data structure is shown in Figure 1. We assume here a minimum heap, where the smallest value of the heap is at the top. Our pipelined heap is essentially the same as a conventional binary heap, except that every node has an additional capacity field that indicates the number of unused locations in the sub-heap rooted at the node. The implementation architecture of the pipelined heap is depicted in Figure 2. In the implementation architecture, each pipeline stage corresponds to a layer in the heap.

Fig. 1. The pipelined heap structure (each node holds a value and a capacity field).

Fig. 2. Implementation architecture of the pipelined heap (stages 1-4 with memories M1-M4).

Our pipelined heap supports the operations ENQUEUE, DEQUEUE, and ENQUEUE-DEQUEUE. For ENQUEUE, a new entry is inserted into the heap. For DEQUEUE, the minimum value of the heap is deleted from the heap and returned. In the ENQUEUE-DEQUEUE operation, the minimum value of the heap is replaced with a new entry. For each stage i, the specified operation op_i, the operation position in the heap p_i, and a data argument arg_i are transferred from the previous stage. For the first stage, the operation is specified externally, and the operation position is always the root. The data argument is the value we want to put into the heap, if the operation is ENQUEUE or ENQUEUE-DEQUEUE; the argument is void for a DEQUEUE operation. Our pipelined heap differs from a traditional heap in that it always works in a top-down fashion.

For an ENQUEUE operation, at stage i, position p_j, we compare the value of the argument arg_i with the values in the memory at this stage in positions p_j and p_j+1. We find the first node (say in position p_k, where p_k is either p_j or p_j+1) that is greater than arg_i and has capacity greater than 0, exchange those two values, and decrease the capacity of node p_k by one. If there is no value greater than arg_i, then we find the first node with capacity greater than zero. Then, we pass arg_i, the operation op_i, and p_k to the next stage. If there is no node with capacity greater than 0, then we report that the heap is full and terminate the operation. Overall, the ENQUEUE operation takes up to three cycles for each stage: 1) read the corresponding memory in this stage, 2) compare the value to be inserted with the values read, and 3) possibly exchange the value in one position in the memory with the one to be inserted.

For a DEQUEUE operation, at stage i, position p_j, we compare the value of the argument arg_i with the values in the memory locations at the next stage that correspond to the children of node p_j, from left to right.

This full text paper was peer reviewed at the direction of IEEE Communications Society subject matter experts for publication in the IEEE GLOBECOM 2006 proceedings. Authorized licensed use limited to Univ of Calif San Diego. Downloaded on January 13, 2010 at 19:16 from IEEE Xplore. Restrictions apply.
Let p_k be the position of the smaller of the two children. We then move the value at node p_k to node p_j, and we delete the value stored at node p_k. After this, we increase the capacity of node p_k by one. Finally, we pass arg_i, op_i, and p_k to the next stage. Overall, the DEQUEUE operation takes three cycles for each stage: 1) read the children of node p_j from the memory of the next stage, 2) compare those values and find the minimum, and 3) remove this value and put it into node p_j.

For the ENQUEUE-DEQUEUE operation, there is a new entry waiting to be inserted into the heap. At stage 1, we use this new entry to replace the root. Then the children of the root are accessed. We find the minimum of those two entries (in position p_j) and exchange it with the entry at the root. The name of this operation (i.e. ENQUEUE-DEQUEUE) and the position of this operation p_j are sent down to stage 2. All the other stages follow the same procedure. At stage i, position p_j, we compare the entry of node p_j with the entries of the children of node p_j in the memory at the next stage, and find the minimum of those entries. If the minimum entry is at position p_j, the operation is done. Otherwise (assuming it is in position p_k), we exchange the entry at position p_j with the one at position p_k. Then arg_i, op_i, and p_k are sent down to the following stage. Overall, the ENQUEUE-DEQUEUE operation takes up to three cycles per stage: 1) read the children of node p_j from the memory of the next stage, 2) compare the value in the current position p_j with the values read and find the minimum, and 3) possibly exchange the value in one position in the memory with the one in the current position.

From the above, we can see that all three operations can be divided into three cycles per stage, so the pipeline can already advance at a rate of one stage every three cycles.
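The per-stage behavior described above can be summarized in a short software model. The sketch below is our own illustration, not the hardware design: it models a min-heap stored as an array whose nodes carry the capacity counters described above, with ENQUEUE and DEQUEUE walking strictly top-down, one level at a time (one level per pipeline stage in hardware). The class and method names are ours, and the model captures data movement only, not cycle timing.

```python
# Software sketch of top-down pipelined-heap operations (min-heap with
# per-node "capacity" counters). Each iteration of the while loops below
# corresponds to one pipeline stage in the hardware description.

class PipelinedHeap:
    def __init__(self, levels):
        self.n = 2 ** levels - 1
        self.val = [None] * self.n          # stored values (None = empty slot)
        # cap[i] = number of empty slots in the subtree rooted at node i
        self.cap = [0] * self.n
        for i in range(self.n - 1, -1, -1):
            l, r = 2 * i + 1, 2 * i + 2
            self.cap[i] = 1 + (self.cap[l] if l < self.n else 0) \
                            + (self.cap[r] if r < self.n else 0)

    def enqueue(self, v):
        if self.cap[0] == 0:
            raise OverflowError("heap full")
        i = 0
        while True:
            self.cap[i] -= 1                # one more slot used in this subtree
            if self.val[i] is None:         # empty node: value comes to rest
                self.val[i] = v
                return
            if self.val[i] > v:             # stored value is larger: swap and
                self.val[i], v = v, self.val[i]   # carry the larger value down
            l, r = 2 * i + 1, 2 * i + 2     # descend into a child with room
            i = l if (l < self.n and self.cap[l] > 0) else r

    def dequeue(self):
        if self.val[0] is None:
            return None
        out, i = self.val[0], 0
        while True:
            self.cap[i] += 1                # a slot frees up along this path
            l, r = 2 * i + 1, 2 * i + 2
            kids = [j for j in (l, r) if j < self.n and self.val[j] is not None]
            if not kids:                    # bottom of the occupied region
                self.val[i] = None
                return out
            k = min(kids, key=lambda j: self.val[j])
            self.val[i] = self.val[k]       # pull the smaller child up
            i = k
```

For example, enqueueing 5, 3, 4, 1, 2 and then dequeueing five times returns the values in sorted order, with the capacity counters tracking the free slots along each insertion and deletion path.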
We will now show that, in this pipelined heap structure, all operations can be pipelined at a rate of one stage per cycle. That is, we can admit one new operation into the system every cycle, as discussed in the following section.

III. SINGLE-CYCLE OPERATION

The problem with initiating one new operation each clock cycle is that the current operation would have to wait at least two cycles for the previous operation to determine the correct path down the heap. So, in order to achieve a higher processing rate, in each stage, more entries need to be read from the memory and more memories need to be considered. In the following, we outline how this can be achieved for each of the pipelined heap operations.

Fig. 3. Part of a pipelined heap (root A with children B and C, grandchildren D-G, and great-grandchildren H-O).

Fig. 4. Improved pipelined ENQUEUE operation.

Fig. 5. Improved pipelined DEQUEUE operation.

Fig. 6. Improved pipelined ENQUEUE-DEQUEUE operation.

ENQUEUE: Consider the partial pipelined heap shown in Figure 3. Suppose we want to insert A to replace B, then B to replace D, and so on. To achieve one stage per cycle, we can follow the pipelined operations depicted in Figure 4. In clock cycle 1, we have to start reading data. However, at this time, we are not sure whether we should use B or C, so both of them are read. In cycle 2, the result of the comparison between A and B is known, so we only need to compare B and D. In the meantime, D and E are read from the memory one level further down, since we do not yet have the result of the comparison of B with D and E. In this way, we achieve one stage per cycle. The requirement on the memory is a read throughput of 2 values per cycle and a write throughput of 1 value per cycle in each memory.

DEQUEUE: Consider again the partial pipelined heap shown in Figure 3; now suppose we want to delete A, then use B to replace A, and then use D to replace B. Figure 5 illustrates the operation pipeline for achieving one stage per cycle. In clock cycle 1, we have to start reading data. However, at this time, we are not sure whether B or C is smaller, so the children of both B and C are read. In cycle 2, the minimum of B and C is known, so we only need to consider the children of B. In the meantime, H, I, J, and K are read from the memory one level further down, since we do not yet have the result of the comparison of D and E. In this way, we achieve one stage per cycle. The requirement on the memory is a read throughput of 4 values per cycle and a write throughput of 1 value per cycle in each memory.

ENQUEUE-DEQUEUE: Following the same procedure, now suppose we want to use A to replace B, then use B to replace D, and so on.
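A way to see where the read throughput of 4 values per cycle comes from in the improved DEQUEUE pipeline: before the comparison between the two candidate siblings resolves, the next stage must fetch the children of both candidates and discard half once the winner is known. The sketch below only generates the addresses involved, assuming a standard 0-indexed array layout (children of node i at 2i+1 and 2i+2); the function names are ours.

```python
# Hypothetical helpers illustrating the speculative reads of the improved
# DEQUEUE pipeline. Array layout: node i has children 2*i+1 and 2*i+2.

def speculative_reads(cand_a, cand_b):
    """Children of BOTH candidate siblings, fetched before the winner of
    the comparison is known: 4 reads from one stage's memory."""
    return [2 * cand_a + 1, 2 * cand_a + 2, 2 * cand_b + 1, 2 * cand_b + 2]

def resolved_reads(winner):
    """Once the comparison resolves, only the winner's children matter."""
    return [2 * winner + 1, 2 * winner + 2]

# Candidates B and C at positions 1 and 2: positions 3-6 are fetched
# speculatively, of which only 3 and 4 are kept if position 1 wins.
print(speculative_reads(1, 2))  # -> [3, 4, 5, 6]
print(resolved_reads(1))        # -> [3, 4]
```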

The operation pipeline is illustrated in Figure 6. In clock cycle 1, we have to start reading data. However, at this time, we do not yet know which of A, B, and C is the minimum entry, so the children of both B and C are read. In cycle 2, the minimum of A, B, and C is known, so we only need to compare B, D, and E. At the same time, H, I, J, and K are read from the memory one level further down, since we do not yet have the result of the comparison of B, D, and E. In this way, we achieve one stage per cycle. The requirement on the memory is a read throughput of 4 values per cycle and a write throughput of 1 value per cycle in each memory.

Inter-Operation Management: If all three operations are supported in the same heap, inter-operation management must be considered. We can treat a DEQUEUE operation as an ENQUEUE-DEQUEUE operation that inserts an entry with a void value, so the inter-operation management for DEQUEUE and ENQUEUE-DEQUEUE is the same. Hence, there are four cases to consider: ENQUEUE followed by ENQUEUE, ENQUEUE followed by DEQUEUE (or ENQUEUE-DEQUEUE), DEQUEUE (or ENQUEUE-DEQUEUE) followed by ENQUEUE, and DEQUEUE (or ENQUEUE-DEQUEUE) followed by DEQUEUE (or ENQUEUE-DEQUEUE). For the first three cases, once one operation advances to the next stage, we can admit another one; since each stage finishes in one cycle, operations can be issued at a rate of one per cycle. For the last case, where one DEQUEUE operation is followed by another DEQUEUE operation, we can only admit a new operation every two stages: the next DEQUEUE needs to access the memory of the next stage, so it has to wait until the previous one finishes writing to the memory of that stage. Overall, operations can then be issued at a rate of one every two cycles.

IV. EFFICIENT MEMORY MANAGEMENT

A. Memory management for a forest of pipelined heaps

In the applications of pipelined heaps, multiple heap structures are frequently used.
For conventional heaps, all nodes are normally stored in an array in a single memory structure. In the case of a pipelined heap, however, the nodes are partitioned across log N memories, each corresponding to one of the log N pipeline stages. These memories are of different sizes: for stage 1, the size is 1; for stage 2, it is 2; and for stage k, it is 2^(k-1). For a large heap, the disparity between the memory sizes at the different layers can be enormous, making it difficult to maintain much regularity in the memory structures.

Instead, we can treat the memories for a forest of pipelined heaps together. For pipelined heaps with 6 stages, the memory sizes at each of the 6 stages are 1, 2, 4, 8, 16, and 32, respectively. Assume we have 10 heaps all together; then the memory sizes at each of the 6 stages for this forest of heaps would be 10, 20, 40, 80, 160, and 320, respectively, if we were to implement the forest in a straightforward manner. There would still be a substantial disparity between the first stage and the last stage.

Fig. 7. Simplified forest of heaps.

Fig. 8. Example of memories of heaps.

Alternatively, we can organize the forest of heaps as shown in Figure 7. In this scheme, half of the heaps are inverted and then interleaved with the other half. This way, the large storage requirements of the bottom layers of one heap are offset by the small storage requirements of the top layers of another heap. Consider again a forest of 10 heaps, each with 6 stages. By inverting half of the heaps in an alternating manner, the memory requirements at each of the 6 stages become 165, 90, 60, 60, 90, and 165, respectively, which shows much less disparity between stages. Consider another simple example shown in Figure 8.
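The per-stage totals quoted above (10, 20, 40, 80, 160, 320 for the straightforward layout versus 165, 90, 60, 60, 90, 165 for the interleaved one) can be reproduced with a few lines. This is only a check of the arithmetic; the function name and signature are ours.

```python
def forest_stage_sizes(num_heaps, num_stages, interleave=False):
    """Memory entries needed at each stage for a forest of equal heaps.

    With interleave=True, half of the heaps are inverted, so stage s of an
    upright heap shares a memory with stage (num_stages + 1 - s) of an
    inverted heap, offsetting large layers against small ones."""
    sizes = []
    for s in range(1, num_stages + 1):
        if interleave:
            half = num_heaps // 2
            sizes.append(half * 2 ** (s - 1)
                         + (num_heaps - half) * 2 ** (num_stages - s))
        else:
            sizes.append(num_heaps * 2 ** (s - 1))
    return sizes

print(forest_stage_sizes(10, 6))                   # -> [10, 20, 40, 80, 160, 320]
print(forest_stage_sizes(10, 6, interleave=True))  # -> [165, 90, 60, 60, 90, 165]
```

Note that interleaving only redistributes storage across stages; the total number of entries (630 here) is unchanged.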
By inverting half of the heaps in an alternating manner, the memory for this forest of heaps can be organized as shown in Figure 9. This forest of pipelined heaps needs to work in both directions: for pipelined heaps that are not inverted, the memory works from top to bottom; for pipelined heaps that are inverted, the memory works from bottom to top. However, in the implementation architecture for the pipeline stages, it is more logical to have the pipeline stages move in only one direction. In the case of a single pipelined heap, each pipeline stage needs to access the memory structure at its own stage as well as at the next stage, as depicted in Figure 2.

Fig. 9. Memory of a forest of pipelined heaps.

Fig. 10. Implementation structure for a forest of pipelined heaps.

Fig. 11. Memory management for one stage.

Fig. 12. Single pipelined heap arrangement.

To support inverted heaps that alternate with non-inverted heaps in a forest, we extend the implementation architecture to allow each pipeline stage to access the memory structures of the corresponding inverted memory layers, as shown in Figure 10. This way, the logical pipeline can still proceed in one physical direction. To hide these details from the control logic that implements the pipelined heap operations, a level of memory mapping can be inserted in hardware, as depicted in Figure 11. However, to implement this scheme, each memory structure needs to support potentially twice as many memory operations. Therefore, to support one stage per cycle, each memory structure needs to support a read throughput of 8 and a write throughput of 2 per cycle.

B. Memory management for a single pipelined heap

Even for a single pipelined heap, there is a better memory organization that can achieve better performance. For the heap shown in Figure 12, the memory chips for the left-hand side are 1, 2, 4, and 8; the memory chips for the right-hand side are 7 and 8. For a heap with 6 stages, the memories of the pipelined heap are 1, 2, 4, 8, 16, and 32; after the rearrangement, the memories are 21, 18, and 24. After the rearrangement, the memory sizes are more evenly distributed, which makes them easier to realize using SRAM. However, a price is paid in more complex memory control logic. Since at any one time there can be only one operation entering the triangle on the top and only one operation entering all the triangles on the bottom, the memory is required to have a read throughput of 8 and a write throughput of 2.

V. CONCLUSION

In this paper, we presented new memory management techniques to enable a more efficient implementation of pipelined heaps. The memory management techniques are aimed at resolving the disparity in storage requirements between the top and bottom layers of a pipelined heap. In addition, we explored efficient implementation techniques to enable more aggressive pipelining. These techniques enable a faster priority queue implementation, which has important applications in network processing, including advanced QoS-based scheduling, fast packet buffer management, and network measurement.

REFERENCES

[1] A. K. Parekh and R. G. Gallager, "A generalized processor sharing approach to flow control in integrated services networks: The single-node case," IEEE/ACM Transactions on Networking, vol. 1, 1993.
[2] A. Demers, S. Keshav, and S. Shenker, "Analysis and simulation of a fair queueing algorithm," Proceedings of SIGCOMM '89, pp. 1-12, Austin, TX, Sept. 1989.
[3] S. Iyer and N. McKeown, "Designing buffers for router line cards," Stanford University HPNG Technical Report TR02-HPNG, Stanford, CA, Mar. 2002.
[4] D. Shah, "Analysis of a Statistics Counter Architecture," Proc. IEEE Hot Interconnects 9, IEEE CS Press, Los Alamitos, CA, 2001.
[5] M. Roeder and B. Lin, "Maintaining Exact Statistics Counters with a Multi-Level Counter Memory," IEEE Global Communications Conference (Globecom '04), Dallas, Texas, vol. 2, Nov. 2004.
[6] R. Bhagwan and B. Lin, "Fast and scalable priority queue architecture for high-speed network switches," Proceedings of INFOCOM 2000, Tel Aviv, Israel, 2000.
[7] T. H. Cormen, C. E. Leiserson, and R. L. Rivest, Introduction to Algorithms, McGraw-Hill Book Company.


More information

Presentation for use with the textbook, Algorithm Design and Applications, by M. T. Goodrich and R. Tamassia, Wiley, 2015

Presentation for use with the textbook, Algorithm Design and Applications, by M. T. Goodrich and R. Tamassia, Wiley, 2015 Presentation for use with the textbook, Algorithm Design and Applications, by M. T. Goodrich and R. Tamassia, Wiley, 2015 Merge Sort 2015 Goodrich and Tamassia Merge Sort 1 Application: Internet Search

More information

EP2210 Scheduling. Lecture material:

EP2210 Scheduling. Lecture material: EP2210 Scheduling Lecture material: Bertsekas, Gallager, 6.1.2. MIT OpenCourseWare, 6.829 A. Parekh, R. Gallager, A generalized Processor Sharing Approach to Flow Control - The Single Node Case, IEEE Infocom

More information

Long-Haul TCP vs. Cascaded TCP

Long-Haul TCP vs. Cascaded TCP Long-Haul TP vs. ascaded TP W. Feng 1 Introduction In this work, we investigate the bandwidth and transfer time of long-haul TP versus cascaded TP [5]. First, we discuss the models for TP throughput. For

More information

CSC 401 Data and Computer Communications Networks

CSC 401 Data and Computer Communications Networks CSC 401 Data and Computer Communications Networks Network Layer Overview, Router Design, IP Sec 4.1. 4.2 and 4.3 Prof. Lina Battestilli Fall 2017 Chapter 4: Network Layer, Data Plane chapter goals: understand

More information

Investigating the Use of Synchronized Clocks in TCP Congestion Control

Investigating the Use of Synchronized Clocks in TCP Congestion Control Investigating the Use of Synchronized Clocks in TCP Congestion Control Michele Weigle (UNC-CH) November 16-17, 2001 Univ. of Maryland Symposium The Problem TCP Reno congestion control reacts only to packet

More information

CS 344/444 Computer Network Fundamentals Final Exam Solutions Spring 2007

CS 344/444 Computer Network Fundamentals Final Exam Solutions Spring 2007 CS 344/444 Computer Network Fundamentals Final Exam Solutions Spring 2007 Question 344 Points 444 Points Score 1 10 10 2 10 10 3 20 20 4 20 10 5 20 20 6 20 10 7-20 Total: 100 100 Instructions: 1. Question

More information

Crossbar - example. Crossbar. Crossbar. Combination: Time-space switching. Simple space-division switch Crosspoints can be turned on or off

Crossbar - example. Crossbar. Crossbar. Combination: Time-space switching. Simple space-division switch Crosspoints can be turned on or off Crossbar Crossbar - example Simple space-division switch Crosspoints can be turned on or off i n p u t s sessions: (,) (,) (,) (,) outputs Crossbar Advantages: simple to implement simple control flexible

More information

Topic 4b: QoS Principles. Chapter 9 Multimedia Networking. Computer Networking: A Top Down Approach

Topic 4b: QoS Principles. Chapter 9 Multimedia Networking. Computer Networking: A Top Down Approach Topic 4b: QoS Principles Chapter 9 Computer Networking: A Top Down Approach 7 th edition Jim Kurose, Keith Ross Pearson/Addison Wesley April 2016 9-1 Providing multiple classes of service thus far: making

More information

White Paper Enabling Quality of Service With Customizable Traffic Managers

White Paper Enabling Quality of Service With Customizable Traffic Managers White Paper Enabling Quality of Service With Customizable Traffic s Introduction Communications networks are changing dramatically as lines blur between traditional telecom, wireless, and cable networks.

More information

Lecture 8 13 March, 2012

Lecture 8 13 March, 2012 6.851: Advanced Data Structures Spring 2012 Prof. Erik Demaine Lecture 8 13 March, 2012 1 From Last Lectures... In the previous lecture, we discussed the External Memory and Cache Oblivious memory models.

More information

Master Course Computer Networks IN2097

Master Course Computer Networks IN2097 Chair for Network Architectures and Services Prof. Carle Department for Computer Science TU München Chair for Network Architectures and Services Prof. Carle Department for Computer Science TU München Master

More information

Power Efficient IP Lookup with Supernode Caching

Power Efficient IP Lookup with Supernode Caching Power Efficient IP Lookup with Supernode Caching Lu Peng, Wencheng Lu * and Lide Duan Department of Electrical & Computer Engineering Louisiana State University Baton Rouge, LA 73 {lpeng, lduan1}@lsu.edu

More information

Parallel Databases C H A P T E R18. Practice Exercises

Parallel Databases C H A P T E R18. Practice Exercises C H A P T E R18 Parallel Databases Practice Exercises 181 In a range selection on a range-partitioned attribute, it is possible that only one disk may need to be accessed Describe the benefits and drawbacks

More information

Operations on Heap Tree The major operations required to be performed on a heap tree are Insertion, Deletion, and Merging.

Operations on Heap Tree The major operations required to be performed on a heap tree are Insertion, Deletion, and Merging. Priority Queue, Heap and Heap Sort In this time, we will study Priority queue, heap and heap sort. Heap is a data structure, which permits one to insert elements into a set and also to find the largest

More information

Question 7.11 Show how heapsort processes the input:

Question 7.11 Show how heapsort processes the input: Question 7.11 Show how heapsort processes the input: 142, 543, 123, 65, 453, 879, 572, 434, 111, 242, 811, 102. Solution. Step 1 Build the heap. 1.1 Place all the data into a complete binary tree in the

More information

Mark Sandstrom ThroughPuter, Inc.

Mark Sandstrom ThroughPuter, Inc. Hardware Implemented Scheduler, Placer, Inter-Task Communications and IO System Functions for Many Processors Dynamically Shared among Multiple Applications Mark Sandstrom ThroughPuter, Inc mark@throughputercom

More information

Implementations. Priority Queues. Heaps and Heap Order. The Insert Operation CS206 CS206

Implementations. Priority Queues. Heaps and Heap Order. The Insert Operation CS206 CS206 Priority Queues An internet router receives data packets, and forwards them in the direction of their destination. When the line is busy, packets need to be queued. Some data packets have higher priority

More information

Generic Architecture. EECS 122: Introduction to Computer Networks Switch and Router Architectures. Shared Memory (1 st Generation) Today s Lecture

Generic Architecture. EECS 122: Introduction to Computer Networks Switch and Router Architectures. Shared Memory (1 st Generation) Today s Lecture Generic Architecture EECS : Introduction to Computer Networks Switch and Router Architectures Computer Science Division Department of Electrical Engineering and Computer Sciences University of California,

More information

Mohammad Hossein Manshaei 1393

Mohammad Hossein Manshaei 1393 Mohammad Hossein Manshaei manshaei@gmail.com 1393 Voice and Video over IP Slides derived from those available on the Web site of the book Computer Networking, by Kurose and Ross, PEARSON 2 Multimedia networking:

More information

Networking Acronym Smorgasbord: , DVMRP, CBT, WFQ

Networking Acronym Smorgasbord: , DVMRP, CBT, WFQ Networking Acronym Smorgasbord: 802.11, DVMRP, CBT, WFQ EE122 Fall 2011 Scott Shenker http://inst.eecs.berkeley.edu/~ee122/ Materials with thanks to Jennifer Rexford, Ion Stoica, Vern Paxson and other

More information

CS 3330 Final Exam Spring 2016 Computing ID:

CS 3330 Final Exam Spring 2016 Computing ID: S 3330 Spring 2016 Final xam Variant O page 1 of 10 mail I: S 3330 Final xam Spring 2016 Name: omputing I: Letters go in the boxes unless otherwise specified (e.g., for 8 write not 8 ). Write Letters clearly:

More information

Heaps. Heaps Priority Queue Revisit HeapSort

Heaps. Heaps Priority Queue Revisit HeapSort Heaps Heaps Priority Queue Revisit HeapSort Heaps A heap is a complete binary tree in which the nodes are organized based on their data values. For each non- leaf node V, max- heap: the value in V is greater

More information

THERE are a growing number of Internet-based applications

THERE are a growing number of Internet-based applications 1362 IEEE/ACM TRANSACTIONS ON NETWORKING, VOL. 14, NO. 6, DECEMBER 2006 The Stratified Round Robin Scheduler: Design, Analysis and Implementation Sriram Ramabhadran and Joseph Pasquale Abstract Stratified

More information

Tree-Based Minimization of TCAM Entries for Packet Classification

Tree-Based Minimization of TCAM Entries for Packet Classification Tree-Based Minimization of TCAM Entries for Packet Classification YanSunandMinSikKim School of Electrical Engineering and Computer Science Washington State University Pullman, Washington 99164-2752, U.S.A.

More information

Routers & Routing : Computer Networking. Binary Search on Ranges. Speeding up Prefix Match - Alternatives

Routers & Routing : Computer Networking. Binary Search on Ranges. Speeding up Prefix Match - Alternatives Routers & Routing -44: omputer Networking High-speed router architecture Intro to routing protocols ssigned reading [McK9] Fast Switched ackplane for a Gigabit Switched Router Know RIP/OSPF L-4 Intra-omain

More information

System of Systems Complexity Identification and Control

System of Systems Complexity Identification and Control System of Systems Identification and ontrol Joseph J. Simpson Systems oncepts nd ve NW # Seattle, W jjs-sbw@eskimo.com Mary J. Simpson Systems oncepts nd ve NW # Seattle, W mjs-sbw@eskimo.com bstract System

More information

Scalable Schedulers for High-Performance Switches

Scalable Schedulers for High-Performance Switches Scalable Schedulers for High-Performance Switches Chuanjun Li and S Q Zheng Mei Yang Department of Computer Science Department of Computer Science University of Texas at Dallas Columbus State University

More information

Comparison Sorts. Chapter 9.4, 12.1, 12.2

Comparison Sorts. Chapter 9.4, 12.1, 12.2 Comparison Sorts Chapter 9.4, 12.1, 12.2 Sorting We have seen the advantage of sorted data representations for a number of applications Sparse vectors Maps Dictionaries Here we consider the problem of

More information

FUTURE communication networks are expected to support

FUTURE communication networks are expected to support 1146 IEEE/ACM TRANSACTIONS ON NETWORKING, VOL 13, NO 5, OCTOBER 2005 A Scalable Approach to the Partition of QoS Requirements in Unicast and Multicast Ariel Orda, Senior Member, IEEE, and Alexander Sprintson,

More information

MUD: Send me your top 1 3 questions on this lecture

MUD: Send me your top 1 3 questions on this lecture Administrivia Review 1 due tomorrow Email your reviews to me Office hours on Thursdays 10 12 MUD: Send me your top 1 3 questions on this lecture Guest lectures next week by Prof. Richard Martin Class slides

More information

Dynamic Scheduling Algorithm for input-queued crossbar switches

Dynamic Scheduling Algorithm for input-queued crossbar switches Dynamic Scheduling Algorithm for input-queued crossbar switches Mihir V. Shah, Mehul C. Patel, Dinesh J. Sharma, Ajay I. Trivedi Abstract Crossbars are main components of communication switches used to

More information

1 Interlude: Is keeping the data sorted worth it? 2 Tree Heap and Priority queue

1 Interlude: Is keeping the data sorted worth it? 2 Tree Heap and Priority queue TIE-0106 1 1 Interlude: Is keeping the data sorted worth it? When a sorted range is needed, one idea that comes to mind is to keep the data stored in the sorted order as more data comes into the structure

More information

Virtual Memory. Chapter 8

Virtual Memory. Chapter 8 Chapter 8 Virtual Memory What are common with paging and segmentation are that all memory addresses within a process are logical ones that can be dynamically translated into physical addresses at run time.

More information

lecture notes September 2, How to sort?

lecture notes September 2, How to sort? .30 lecture notes September 2, 203 How to sort? Lecturer: Michel Goemans The task of sorting. Setup Suppose we have n objects that we need to sort according to some ordering. These could be integers or

More information

Heap: A binary heap is a complete binary tree in which each, node other than root is smaller than its parent. Heap example: Fig 1. NPTEL IIT Guwahati

Heap: A binary heap is a complete binary tree in which each, node other than root is smaller than its parent. Heap example: Fig 1. NPTEL IIT Guwahati Heap sort is an efficient sorting algorithm with average and worst case time complexities are in O(n log n). Heap sort does not use any extra array, like merge sort. This method is based on a data structure

More information

Efficient pebbling for list traversal synopses

Efficient pebbling for list traversal synopses Efficient pebbling for list traversal synopses Yossi Matias Ely Porat Tel Aviv University Bar-Ilan University & Tel Aviv University Abstract 1 Introduction 1.1 Applications Consider a program P running

More information

COMP3121/3821/9101/ s1 Assignment 1

COMP3121/3821/9101/ s1 Assignment 1 Sample solutions to assignment 1 1. (a) Describe an O(n log n) algorithm (in the sense of the worst case performance) that, given an array S of n integers and another integer x, determines whether or not

More information

Range Queries. Kuba Karpierz, Bruno Vacherot. March 4, 2016

Range Queries. Kuba Karpierz, Bruno Vacherot. March 4, 2016 Range Queries Kuba Karpierz, Bruno Vacherot March 4, 2016 Range query problems are of the following form: Given an array of length n, I will ask q queries. Queries may ask some form of question about a

More information

Algorithm Analysis Advanced Data Structure. Chung-Ang University, Jaesung Lee

Algorithm Analysis Advanced Data Structure. Chung-Ang University, Jaesung Lee Algorithm Analysis Advanced Data Structure Chung-Ang University, Jaesung Lee Priority Queue, Heap and Heap Sort 2 Max Heap data structure 3 Representation of Heap Tree 4 Representation of Heap Tree 5 Representation

More information

EECS 122: Introduction to Computer Networks Switch and Router Architectures. Today s Lecture

EECS 122: Introduction to Computer Networks Switch and Router Architectures. Today s Lecture EECS : Introduction to Computer Networks Switch and Router Architectures Computer Science Division Department of Electrical Engineering and Computer Sciences University of California, Berkeley Berkeley,

More information

Master Course Computer Networks IN2097

Master Course Computer Networks IN2097 Chair for Network Architectures and Services Prof. Carle Department for Computer Science TU München Master Course Computer Networks IN2097 Prof. Dr.-Ing. Georg Carle Christian Grothoff, Ph.D. Chair for

More information

Lecture 16: Network Layer Overview, Internet Protocol

Lecture 16: Network Layer Overview, Internet Protocol Lecture 16: Network Layer Overview, Internet Protocol COMP 332, Spring 2018 Victoria Manfredi Acknowledgements: materials adapted from Computer Networking: A Top Down Approach 7 th edition: 1996-2016,

More information

PRINCIPLES OF COMPILER DESIGN UNIT I INTRODUCTION TO COMPILERS

PRINCIPLES OF COMPILER DESIGN UNIT I INTRODUCTION TO COMPILERS Objective PRINCIPLES OF COMPILER DESIGN UNIT I INTRODUCTION TO COMPILERS Explain what is meant by compiler. Explain how the compiler works. Describe various analysis of the source program. Describe the

More information

Analysis of Algorithms

Analysis of Algorithms Algorithm An algorithm is a procedure or formula for solving a problem, based on conducting a sequence of specified actions. A computer program can be viewed as an elaborate algorithm. In mathematics and

More information

LECTURE NOTES OF ALGORITHMS: DESIGN TECHNIQUES AND ANALYSIS

LECTURE NOTES OF ALGORITHMS: DESIGN TECHNIQUES AND ANALYSIS Department of Computer Science University of Babylon LECTURE NOTES OF ALGORITHMS: DESIGN TECHNIQUES AND ANALYSIS By Faculty of Science for Women( SCIW), University of Babylon, Iraq Samaher@uobabylon.edu.iq

More information

Lecture 19 Sorting Goodrich, Tamassia

Lecture 19 Sorting Goodrich, Tamassia Lecture 19 Sorting 7 2 9 4 2 4 7 9 7 2 2 7 9 4 4 9 7 7 2 2 9 9 4 4 2004 Goodrich, Tamassia Outline Review 3 simple sorting algorithms: 1. selection Sort (in previous course) 2. insertion Sort (in previous

More information

Technical University of Denmark

Technical University of Denmark Technical University of Denmark Written examination, May 7, 27. Course name: Algorithms and Data Structures Course number: 2326 Aids: Written aids. It is not permitted to bring a calculator. Duration:

More information

Performance of Multihop Communications Using Logical Topologies on Optical Torus Networks

Performance of Multihop Communications Using Logical Topologies on Optical Torus Networks Performance of Multihop Communications Using Logical Topologies on Optical Torus Networks X. Yuan, R. Melhem and R. Gupta Department of Computer Science University of Pittsburgh Pittsburgh, PA 156 fxyuan,

More information

OPTIMAL MULTI-CHANNEL ASSIGNMENTS IN VEHICULAR AD-HOC NETWORKS

OPTIMAL MULTI-CHANNEL ASSIGNMENTS IN VEHICULAR AD-HOC NETWORKS Chapter 2 OPTIMAL MULTI-CHANNEL ASSIGNMENTS IN VEHICULAR AD-HOC NETWORKS Hanan Luss and Wai Chen Telcordia Technologies, Piscataway, New Jersey 08854 hluss@telcordia.com, wchen@research.telcordia.com Abstract:

More information

Hierarchically Aggregated Fair Queueing (HAFQ) for Per-flow Fair Bandwidth Allocation in High Speed Networks

Hierarchically Aggregated Fair Queueing (HAFQ) for Per-flow Fair Bandwidth Allocation in High Speed Networks Hierarchically Aggregated Fair Queueing () for Per-flow Fair Bandwidth Allocation in High Speed Networks Ichinoshin Maki, Hideyuki Shimonishi, Tutomu Murase, Masayuki Murata, Hideo Miyahara Graduate School

More information

Designing High-Speed ATM Switch Fabrics by Using Actel FPGAs

Designing High-Speed ATM Switch Fabrics by Using Actel FPGAs pplication Note C105 esigning High-Speed TM Switch Fabrics by Using ctel FPGs The recent upsurge of interest in synchronous Transfer Mode (TM) is based on the recognition that it represents a new level

More information

A Proposal for a High Speed Multicast Switch Fabric Design

A Proposal for a High Speed Multicast Switch Fabric Design A Proposal for a High Speed Multicast Switch Fabric Design Cheng Li, R.Venkatesan and H.M.Heys Faculty of Engineering and Applied Science Memorial University of Newfoundland St. John s, NF, Canada AB X

More information

Lecture Notes on Priority Queues

Lecture Notes on Priority Queues Lecture Notes on Priority Queues 15-122: Principles of Imperative Computation Frank Pfenning Lecture 16 October 18, 2012 1 Introduction In this lecture we will look at priority queues as an abstract type

More information

Counter Braids: A novel counter architecture

Counter Braids: A novel counter architecture Counter Braids: A novel counter architecture Balaji Prabhakar Balaji Prabhakar Stanford University Joint work with: Yi Lu, Andrea Montanari, Sarang Dharmapurikar and Abdul Kabbani Overview Counter Braids

More information

CSCI2100B Data Structures Heaps

CSCI2100B Data Structures Heaps CSCI2100B Data Structures Heaps Irwin King king@cse.cuhk.edu.hk http://www.cse.cuhk.edu.hk/~king Department of Computer Science & Engineering The Chinese University of Hong Kong Introduction In some applications,

More information

Pi-PIFO: A Scalable Pipelined PIFO Memory Management Architecture

Pi-PIFO: A Scalable Pipelined PIFO Memory Management Architecture Pi-PIFO: A Scalable Pipelined PIFO Memory Management Architecture Steven Young, Student Member, IEEE, Itamar Arel, Senior Member, IEEE, Ortal Arazi, Member, IEEE Networking Research Group Electrical Engineering

More information

CSE 214 Computer Science II Heaps and Priority Queues

CSE 214 Computer Science II Heaps and Priority Queues CSE 214 Computer Science II Heaps and Priority Queues Spring 2018 Stony Brook University Instructor: Shebuti Rayana shebuti.rayana@stonybrook.edu http://www3.cs.stonybrook.edu/~cse214/sec02/ Introduction

More information

UNIT 2 TRANSPORT LAYER

UNIT 2 TRANSPORT LAYER Network, Transport and Application UNIT 2 TRANSPORT LAYER Structure Page No. 2.0 Introduction 34 2.1 Objective 34 2.2 Addressing 35 2.3 Reliable delivery 35 2.4 Flow control 38 2.5 Connection Management

More information

Adaptive Multimodule Routers

Adaptive Multimodule Routers daptive Multimodule Routers Rajendra V Boppana Computer Science Division The Univ of Texas at San ntonio San ntonio, TX 78249-0667 boppana@csutsaedu Suresh Chalasani ECE Department University of Wisconsin-Madison

More information

Improving QOS in IP Networks. Principles for QOS Guarantees

Improving QOS in IP Networks. Principles for QOS Guarantees Improving QOS in IP Networks Thus far: making the best of best effort Future: next generation Internet with QoS guarantees RSVP: signaling for resource reservations Differentiated Services: differential

More information

The Controlled Delay (CoDel) AQM Approach to fighting bufferbloat

The Controlled Delay (CoDel) AQM Approach to fighting bufferbloat The Controlled Delay (CoDel) AQM Approach to fighting bufferbloat BITAG TWG Boulder, CO February 27, 2013 Kathleen Nichols Van Jacobson Background The persistently full buffer problem, now called bufferbloat,

More information

Simulation of a Scheduling Algorithm Based on LFVC (Leap Forward Virtual Clock) Algorithm

Simulation of a Scheduling Algorithm Based on LFVC (Leap Forward Virtual Clock) Algorithm Simulation of a Scheduling Algorithm Based on LFVC (Leap Forward Virtual Clock) Algorithm CHAN-SOO YOON*, YOUNG-CHOONG PARK*, KWANG-MO JUNG*, WE-DUKE CHO** *Ubiquitous Computing Research Center, ** Electronic

More information

1 Hazards COMP2611 Fall 2015 Pipelined Processor

1 Hazards COMP2611 Fall 2015 Pipelined Processor 1 Hazards Dependences in Programs 2 Data dependence Example: lw $1, 200($2) add $3, $4, $1 add can t do ID (i.e., read register $1) until lw updates $1 Control dependence Example: bne $1, $2, target add

More information

Design of a Weighted Fair Queueing Cell Scheduler for ATM Networks

Design of a Weighted Fair Queueing Cell Scheduler for ATM Networks Design of a Weighted Fair Queueing Cell Scheduler for ATM Networks Yuhua Chen Jonathan S. Turner Department of Electrical Engineering Department of Computer Science Washington University Washington University

More information

Priority Queues and Heaps. Heaps of fun, for everyone!

Priority Queues and Heaps. Heaps of fun, for everyone! Priority Queues and Heaps Heaps of fun, for everyone! Learning Goals After this unit, you should be able to... Provide examples of appropriate applications for priority queues and heaps Manipulate data

More information

COMP 250 Midterm #2 March 11 th 2013

COMP 250 Midterm #2 March 11 th 2013 NAME: STUDENT ID: COMP 250 Midterm #2 March 11 th 2013 - This exam has 6 pages - This is an open book and open notes exam. No electronic equipment is allowed. 1) Questions with short answers (28 points;

More information

Efficient Rectilinear Steiner Tree Construction with Rectangular Obstacles

Efficient Rectilinear Steiner Tree Construction with Rectangular Obstacles Proceedings of the th WSES Int. onf. on IRUITS, SYSTEMS, ELETRONIS, ONTROL & SIGNL PROESSING, allas, US, November 1-3, 2006 204 Efficient Rectilinear Steiner Tree onstruction with Rectangular Obstacles

More information