
Bachelor Informatica
Informatica, Universiteit van Amsterdam

Distributed Load-Balancing of Network Flows using Multi-Path Routing

Kevin Ouwehand
September 20, 2015

Supervisor(s): Stavros Konstantaros, Benno Overeinder (NLnetLabs)

Signed:


Abstract

With the growth of the internet, networks are becoming larger and data flows keep increasing. To keep up with this increase, new technologies such as SDN (explained in chapter 2) have been invented to manage these networks and to load-balance them in order to maintain network performance. This management and load-balancing has so far been done with a central OpenFlow controller, which presents a network bottleneck and a single point of failure. Therefore, we present a unified approach that uses multiple controllers, combining the advantages of OpenFlow with those of a distributed approach. Each controller makes routing decisions based on an intelligent algorithm that uses the estimated bandwidth usage of flows and the estimated amount of free bandwidth a path has. Various parameters for proper bandwidth estimation and their effect on the performance of the system are tested, as well as the stability of the system with regard to oscillation of flows between multiple paths. We observe that, with good parameter values, an accurate estimation of the bandwidth usage of flows leads to good network performance and optimal results. However, the detection of congestion on links carrying flows that use relatively little bandwidth compared to the link capacity is not sufficient and will have to be improved. Finally, the system is stable and shows no signs of path oscillation.


Contents

1 Introduction
  1.1 Research Questions
  1.2 Related Work
  1.3 Thesis Outline
2 Background
  2.1 Current load-balancing techniques
    2.1.1 TRILL
    2.1.2 SPB and ECMP
  2.2 Software Defined Networks (SDN)
    2.2.1 Flow-based forwarding using OpenFlow
3 Approach
  3.1 Strategy description
  3.2 Intelligent algorithm
  3.3 Retrieving information
    3.3.1 Network topology
    3.3.2 Bandwidth estimation for flows and paths
  3.4 Other controller functionality
    3.4.1 Packet-ins for new flows
    3.4.2 Algorithm and problematic flows
  3.5 Configurable parameters
4 Implementation
  4.1 Controller to controller communication
  4.2 Topology handling
  4.3 Implementation constraints
    4.3.1 Manageability
    4.3.2 Flow issues
5 Experimental results
  5.1 Test-bed configuration
    5.1.1 First scenario
    5.1.2 Second scenario
  5.2 Performance measurements
    5.2.1 Functionality Test
    5.2.2 Stability Test
6 Discussion
7 Conclusion
8 Future work
9 References

CHAPTER 1
Introduction

Since its creation, the internet has kept growing, both in size and in usage. Networks become bigger, and more datacenters are being built, expanding in physical size as well as in capacity [1]. This growth is needed to keep up with data flows that also keep getting bigger. According to a recent article published by Cisco [2], annual global IP traffic is now five times larger than it was five years ago. Yearly IP traffic will pass the zettabyte barrier in 2016, and is expected to reach 2 zettabytes by 2019. Also, by 2019, Content Delivery Networks (CDNs, which are responsible for delivering content like video-streams to end-users) will carry more than half of the internet's traffic. With the increasing popularity of video-streaming services like Netflix, this should be no surprise, especially with new technologies like 4K that offer video-streams at a higher resolution and thus require even more bandwidth. All this traffic is delivered to end-users via high-speed networks, and with this increase in traffic, proper load-balancing of these networks becomes important to maintain high speeds.

Using classical Ethernet switches to build these networks presents a few problems. Although they are easy to set up and require little or no maintenance, they offer no form of load-balancing by dynamically using multiple paths. In large networks, multiple paths are often available, but they are not always used, which leads to network congestion and inefficient network usage. There are other technologies, like TRILL and ECMP, which are discussed in the next chapter. TRILL provides some form of load-balancing, but each switch keeps state about all other switches to make its routing decisions [3]. ECMP is a method for making routing decisions, but it only considers paths of equal cost and does not load-balance over all possible paths [4]. With networks ever increasing in size, these methods could create serious scalability problems.

Another technology that enables load-balancing is Software Defined Networking (SDN). A classical switch handles both the forwarding of packets (data plane) and the routing (control plane) on its own. With SDN, these two planes are separated, and the switch is left only with the data plane [5]. The complex control plane is moved to a different system, usually a commodity server, which can be programmed to make the routing decisions for the switch. A well-known example of SDN is OpenFlow, which is explained in more detail in the next chapter. While this method allows for great manageability, and thus load-balancing of the network via the central controller, it also presents a few problems. The most notable one is that, with the aforementioned growth of networks and data flows, the central approach does not scale and thus becomes a bottleneck. Furthermore, it is a single point of failure: if the controller stops functioning for any reason and the switches lose their connection with it, they delete all the installed rules [6]. This means that the network is completely unusable until the controller functions properly again. To mitigate this problem, OpenFlow allows network engineers to configure a backup controller. Also, the controller can install emergency rules that are activated when the switch loses its connection with the controller. However, these rules are static and provide no proper (dynamic) load-balancing of the network.

1.1 Research Questions

Our approach combines SDN with a distributed approach, to get the best of both worlds. Multiple, independent controllers are used to control the switches, and multiple paths are used to do dynamic load-balancing. Controllers can communicate with each other to exchange information that enhances their routing decisions. In our research, we look at the performance of this approach. Also, when multiple paths are used, oscillation of paths can take place, so the stability of the solution is tested as well. Finally, the controllers communicate with each other to exchange free bandwidth information about their own links, since there is no central controller anymore. An interesting question is whether local information is enough to achieve high throughput, or how the use of non-local information changes the results. Our research questions are thus:

- What is the performance of the system?
- How stable is the system?
- How does communication between controllers influence the network throughput?

1.2 Related Work

Various other studies have also analyzed the effect on performance of load-balancing traffic flows over multiple paths, but so far they all depended on a central entity to handle the load-splitting. All of these studies showed an improvement in response time and/or throughput, depending on their optimization goals. A thorough study by Konstantaras et al. [7] found that for larger file transfers, splitting traffic flows (instead of using the same path) yielded a maximum increase in throughput of 45 percent. Also, Sridharan et al. [8] presented and tested a near-optimal solution for traffic engineering that preserves the existing infrastructure, by using traffic knowledge to reroute traffic over multiple paths. Finally, Jarschel et al. [9] researched several methods for load-balancing traffic. They tested Round-Robin, Bandwidth-based, DPI (Deep Packet Inspection), and Application-Aware path selection. DPI outperformed all other path selection methods, with the exception of the Application-Aware method, at the cost of a much less efficient use of network resources. The Application-Aware method yielded results equal to DPI, but with a more efficient use of network resources, comparable to the non-DPI methods.

1.3 Thesis Outline

In the next chapter, the relevant background is explained, including the concepts of switching in general, its limitations, and a few other techniques for load-balancing network loads, such as the aforementioned TRILL and ECMP. The general concepts of SDN are also presented; most notably, OpenFlow is explained in more depth, as the solution presented in this thesis uses OpenFlow. In chapter 3, the presented solution is explained, covering the design and the algorithm. In chapter 4, the implementation and the overall constraints of that design are explained. Then, the results of the experiments are shown, covering the configuration of the test-bed, the tested scenarios, and the results. In chapter 6, the results are discussed, and in chapter 7 a conclusion is drawn. Finally, future work is discussed in chapter 8, after which the list of references is presented.

CHAPTER 2
Background

There are various methods for forwarding packets through a network. The most widely used forwarding devices are classical Ethernet switches, which are not without limitations. For instance, dynamic load-balancing of an Ethernet network is not possible. Using multiple paths for packet flows is also not possible, and flooding can be problematic as well. The last issue can be attributed to the fact that Ethernet does not have a Time-To-Live (TTL) value as IP does. If a network contains loops (which is often the case for networks that have multiple paths between destinations), flooded Ethernet packets will keep circulating forever. Protocols like STP (Spanning Tree Protocol) counter this by disabling flooding on certain ports. In IP, the TTL value is decremented at each hop and packets with a TTL value of zero are discarded, which prevents packets from circulating forever.

2.1 Current load-balancing techniques

Although some of these limitations have been overcome by additions to the Ethernet standard (e.g., STP), not all issues are accounted for: dynamic load-balancing of the network is still not possible. A few methods that do address this problem are discussed below.

2.1.1 TRILL

Transparent Interconnection of Lots of Links (TRILL) is an IETF standard that combines bridging and routing by using special TRILL switches (also called RBridges) within an existing Ethernet network [3]. These RBridges broadcast their connectivity to all other RBridges, so that each one knows about all the others and their connectivity (a so-called link-state protocol). A TTL value in the form of a hop count is used to address the issue of flooding in networks with loops. Using the connectivity information, each TRILL switch calculates all the shortest paths from itself to every other TRILL switch. TRILL then selects, based on a certain selection algorithm (e.g., layer 3 techniques), which path(s) are used for which frames. The strength of TRILL is that it can co-exist with an existing Ethernet network and add layer 3 techniques to improve the performance of the network. The downside is that every RBridge keeps connectivity information and paths to all other RBridges, which does not scale well.

2.1.2 SPB and ECMP

While using a spanning tree prevents the use of some links that could otherwise cause packets to loop when flooding, Shortest Path Bridging (SPB) enables all paths that have the same (least) cost. This method is also known as Equal-Cost Multi-Path (ECMP) routing, and can be combined with TRILL as the selection algorithm. ECMP is a per-hop strategy where a router can select which next hop a packet takes, if multiple equal-cost paths are available. If multiple paths are available, but none of them has the same cost as the least-cost path, then only the least-cost path is used. The advantage of ECMP is that load-balancing can be done over paths with equal cost, making sure that higher-cost (and potentially much slower or worse) paths are not used. The downside is the other side of that coin: paths with a slightly higher cost are not used at all, which can lead to congestion on the selected least-cost path(s) while leaving the slightly more expensive paths idle [10].

2.2 Software Defined Networks (SDN)

In a normal router or switch, the data path (forwarding of packets) and the control path (routing) are handled by the same device. The SDN architecture separates these two, leaving the simple data path at the switch and moving the complex control path to another system. This concept aims to be simple, dynamic, and manageable by making the network control programmable [5]. The network control can then be programmed to be very flexible and do all sorts of things, such as dynamic load-balancing, using multiple paths, handling flooding, etc.

2.2.1 Flow-based forwarding using OpenFlow

The OpenFlow protocol is one way of implementing SDN, by providing communication between the data plane and the control plane. With OpenFlow, the OpenFlow-enabled switches have a flow table with rules that determine which packets are output to which ports. A central OpenFlow controller manages these rules for all the switches, and they are communicated via the OpenFlow protocol [6]. This section gives an overview of flow-based forwarding using OpenFlow standard version 1.0, based on the specification [6]. For an even more in-depth explanation, the OpenFlow white paper and specification can be consulted.

Figure 2.1: Structure of a flow entry, showing the 3 parts that make up a flow entry.

At the heart of the OpenFlow approach are flow entries, which are used by OpenFlow switches to handle the data plane. An OpenFlow switch has 1 or more flow tables, and each table has 0 or more flow entries. A flow entry consists of header fields, counters, and a list of actions (see figure 2.1). The header fields are used for matching incoming packets to the flow entry, for example on the link-layer source and destination address. Wild-carding can be used to match, for instance, all flows with a certain TCP source port. Also, if the switch supports it, subnet masks can provide more fine-grained wild-carding. Table 2.1 shows an example of the header fields of a flow entry.

in port | eth src | eth dst           | ip src | ip dst | ip proto | tp src | tp dst
1       | *       | aa:bb:cc:dd:ee:ff | *      |        | 6 (TCP)  |        |

Table 2.1: Example of a flow entry showing some of the more important header fields that are used in this research. Wild-carded options are denoted with a *.

It should be noted that in the descriptions below, Ethernet addresses are also known as MAC addresses.

The header fields of a flow entry are the following:

in port: The port from which the packet entered the switch, also known as the ingress port.
eth src: The Ethernet source address of the packet.
eth dst: The Ethernet destination address of the packet.
ip src: The IP source address of the packet.
ip dst: The IP destination address of the packet.
ip proto: The protocol of the transport layer of the packet. A value of 6 indicates TCP, whereas 17 indicates UDP. Other values are possible and are mentioned in the specification. There is also a header field nw proto, indicating the protocol of the network layer (e.g., IP).
tp src: The transport layer source port, e.g. the incoming TCP port.
tp dst: The transport layer destination port, e.g. the outgoing TCP port.

The counters of a flow entry are used for keeping track of per-flow statistics. These statistics include, among others, the number of packets that have been matched with this flow entry, the number of bytes that have been sent using this flow entry, and more. An example of a flow entry showing some of the more important counters is given in table 2.2.

duration | packet count | byte count | idle timeout | idle age | hard timeout
   s     |              |            |      s       |    2s    |     600s

Table 2.2: A flow entry showing some of the more important counters, of which a few were used in this research. They are explained in detail below.

duration: Shows how long the flow entry has been in the flow tables of the switch.
packet count: Shows how many packets have matched this flow entry.
byte count: Shows how many bytes of packets have been forwarded using this entry.
idle timeout: Tells the switch how long to wait before removing the flow entry if no packets have matched the entry in the specified amount of time.
idle age: Shows how long no packets have matched this flow entry.
hard timeout: Tells the switch how long to wait before removing the flow entry, regardless of whether it is active or not.

The switch removes a flow entry if either the idle age reaches idle timeout seconds, or the duration reaches hard timeout, whichever comes first.

Finally, the actions list is used to handle packets that matched this flow entry. Various (and even multiple) actions can be set, e.g., send all matched packets to the controller, forward all matched packets out of a certain port, etc. Table 2.3 shows some of the possible actions of a flow entry. There are more actions defined, but these are the most important ones, as they are used in this research.

modify: Allows the switch to modify various headers of the actual packets that matched this flow entry. For example, the Ethernet destination address can be modified, VLAN IDs can be changed, or the transport-layer destination port can be changed.
drop: Simply drops the packet.
forward: Tells the switch to forward the packet over one or more of its physical ports. Virtual ports can be used as well; examples of virtual ports are the CONTROLLER, IN PORT and ALL ports. Forwarding can thus be used to send packets to the controller (CONTROLLER), to flood packets (ALL), to send a packet back out of the port it arrived on (IN PORT), or to send it towards its destination using any of the normal, physical ports.
enqueue (optional): Enqueues a packet to a pre-defined queue attached to a port, and is used to provide basic Quality-of-Service (QoS) as configured by the queue. For example, a simple configuration can be used to rate-limit the bandwidth usage of flows. This action is optional because the OpenFlow specification does not require the switch to implement it. All the other actions and items mentioned are required to be implemented by an OpenFlow switch.

Table 2.3: A table showing some of the possible actions that can be used for flow entries.

OpenFlow protocol

The OpenFlow protocol is an application-layer protocol that runs on top of TCP and handles the communication between OpenFlow switches (data plane) and OpenFlow controllers (control plane). It defines various messages that are used by the controller and switch to manage the flow tables. These messages can be used to send and receive network packets from the switch, install flow-table entries to forward flows over a certain path, query the switch for statistics, and much more. The switch and controller can initiate and send messages to each other asynchronously, resulting in communication in both directions (switch-to-controller and controller-to-switch). A list of some of these messages and their descriptions is provided in table 2.4.

Statistics

Various types of statistics can be gathered using OpenFlow. First, there are counters per table, showing the number of active flow entries, the number of packets looked up in the table, and the number of matched packets for the table, as well as how many entries can be stored in that table. Second, there are statistics per individual flow entry, which are the counters of a flow entry as explained in the previous section (table 2.2). Finally, there are statistics per port, comparable to the statistics of flow entries. These include, among others, the number of packets and bytes sent and received via the port. A short list of these port statistics is presented in table 2.5.

packet-in: The switch sends a packet-in message to the controller if it received a packet that did not match any flow entry in its tables. This message is also sent if the packet matches an existing entry whose action specifies that it should be sent to the controller. With this message, the switch sends the first (often 128) bytes of the packet, as well as the ingress port, to the controller. This enables the controller to determine the proper header fields for a potential new flow entry. The controller can then create a new flow entry, or update an existing one, to accommodate this new packet, and communicate that to the switch via a flow-mod message.
send-packet: This message is sent by the controller to send a packet (included in the message) out of a specified port on the switch.
flow-mod: The controller sends a flow-mod message if a new flow entry needs to be added to the tables of the switch. This message is also used to update an existing flow entry, which can be seen as removing and re-adding that entry (effectively resetting any counters for that entry). This message can be sent by the controller as a response to a packet-in event, or as a standalone message. The controller includes the header fields and a list of actions in this message.
setup/configuration: The setup/configuration messages are used to set up the initial OpenFlow connection and to exchange configuration details. These details include, among others, the features supported by the switch and the version of OpenFlow that will be used by both the switch and the controller.
error: The error messages are used by the switch to indicate errors to the controller, such as a failure to modify a flow entry.
read-state: This message is sent by the controller to request various statistics about the flow tables, flow entries, and ports. The switch must respond with the requested information.

Table 2.4: A table showing some of the OpenFlow messages that are defined by the OpenFlow protocol.

rx packets: How many incoming packets have been received on this port.
tx packets: How many outgoing packets have been sent using this port.
rx bytes: How many bytes have been received on this port.
tx bytes: How many bytes have been sent using this port.
drops and errors: How many received and transmitted packets have been dropped, as well as how many receive and transmit errors have occurred. If any packets are dropped or errors occur, these statistics will show that. The switch can also send an error message to the controller with more details.

Table 2.5: A table showing some of the OpenFlow statistics that are reported for OpenFlow ports.
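Our controller implementation (chapter 4) is built on POX. As an illustration of how the flow-mod and read-state messages above map onto controller code, the following sketch builds a flow entry and requests per-flow statistics using POX's OpenFlow 1.0 bindings. It is a minimal sketch and not taken from the implementation described in this thesis; the match fields, output port, and handler wiring are arbitrary examples.

# Minimal POX sketch (OpenFlow 1.0 bindings); values are illustrative only.
import pox.openflow.libopenflow_01 as of
from pox.lib.addresses import EthAddr

def install_example_entry(connection):
    # flow-mod: forward TCP traffic towards a given MAC out of port 2.
    msg = of.ofp_flow_mod()
    msg.match.in_port = 1                               # ingress port
    msg.match.dl_dst = EthAddr("aa:bb:cc:dd:ee:ff")     # eth dst
    msg.match.dl_type = 0x0800                          # IPv4, needed before nw/tp fields
    msg.match.nw_proto = 6                              # ip proto: 6 = TCP
    msg.idle_timeout = 30                               # timeouts as in table 2.2
    msg.hard_timeout = 600
    msg.actions.append(of.ofp_action_output(port=2))    # forward action
    connection.send(msg)

def request_flow_stats(connection):
    # read-state: ask the switch for its per-flow-entry counters.
    connection.send(of.ofp_stats_request(body=of.ofp_flow_stats_request()))

def handle_flow_stats(event):
    # FlowStatsReceived handler: event.stats holds the counters per entry.
    for entry in event.stats:
        print("%s: %d bytes, %d packets, %ds" %
              (entry.match, entry.byte_count, entry.packet_count, entry.duration_sec))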


CHAPTER 3
Approach

There are currently various ways of load-balancing network flows using multiple paths, as explained in the previous chapter. The idea is to use multiple paths to reduce congestion, increase the throughput of the network, and optimally use the network topology. As shown in chapter 2, there are already a few methods that dynamically use multiple paths to route traffic over the network.

There are multiple ways to select a path out of several candidates. For instance, the shortest path can be chosen, or the path with the least congestion. Since network traffic keeps increasing, requiring more and more bandwidth, our approach selects the path with the most free bandwidth. This allows flows to use as much bandwidth as possible and thereby increases the network throughput. However, the path with the most free bandwidth may not always be the shortest path, so it is quite possible that the latency increases. On the other hand, if the shortest path is congested, then the longer path could very well be faster.

3.1 Strategy description

Currently, this load-balancing using OpenFlow is done with a centralized approach; here we present our distributed approach. Our approach uses multiple OpenFlow controllers, each controlling an OpenFlow switch. These controllers can communicate with each other to exchange information about the network status. The approach is distributed in the sense that each controller in the network makes a decision about which packet is forwarded over which link ("next-hop forwarding"). A controller makes this decision based on local information (its own statistics), and may use additional non-local information to improve its decision (statistics communicated by other controllers).

In order to do dynamic load-balancing, multiple paths will be used for routing. The routing decisions made by the controllers are based on the bandwidth usage of network flows, as well as on how much free bandwidth a path has. This is explained later in this chapter. When multiple independent controllers are used, there is no single point of failure, as each controller manages its own switch (or a small group of switches), leading to independent devices (or independent sub-networks, respectively). If one or more controllers were to stop functioning, their corresponding switches would stop working. However, other controllers can detect this and use a different path to work around the problem. The network will still be usable, and will thus be more robust than with the central approach. Finally, the biggest advantage of this approach is scalability: as networks grow and more switches are added, more controllers can be used to handle these new switches.

An overview of the entire architecture of the solution can be seen in figure 3.1. The POX controller [11] provides the basis of our OpenFlow controller, handling the OpenFlow connection with the OpenFlow switch. We extended the POX controller with various components. Most notably, the intelligent algorithm assigns paths to flows, so that each controller makes its own routing decisions. To make good routing decisions, some information is needed, which is retrieved from the other components. They provide network topology information, as well as information about the network status.

The network status is regarded as the amount of bandwidth that flows use, and also how much free bandwidth a path has. As each controller controls only its own switch, each controller can only gather statistics from that switch. Therefore, another component was added to communicate with other controllers, so that local information can be shared among controllers to improve their routing decisions. All these components are explained in more detail in the following sections.

Figure 3.1: Architecture of the presented solution, showing the various components of the system and how they interact with each other. The POX controller [11] and Open vSwitch [12] provide the basis of our architecture, to which we added various components for our method to function.

3.2 Intelligent algorithm

At the heart of the system is the algorithm that makes the routing decisions, and thus decides which flows take which paths. This algorithm is run by every controller, using local information and optionally non-local information. As explained, since the optimization goal is network throughput, decisions need to be made based on bandwidth usage. Algorithm 1 presents the proposed solution in pseudocode, and a flowchart of the algorithm is presented in figure 3.2.

First, for each flow in F, all the paths between the current switch and the destination of that flow are determined. The paths are determined via the Topology manager in figure 3.1. The current switch represents the switch for which the controller runs the algorithm. Optionally, the paths can be sorted to, for instance, prefer shorter paths. All the paths that cannot support the required bandwidth of this flow are filtered out. This makes sure that flows that use a lot of bandwidth do not switch to slower paths that cannot support them. Also, paths that lead back to the source of the flow are removed, to prevent loops.

Now that all the paths for all the flows are known, the flows are grouped based on the number of paths they have. Then, for each group in F_grouped, all the flows are sorted by priority, so that flows with a higher priority are processed first, making sure that they get the paths with more bandwidth first.

After the flows have been grouped and sorted, the algorithm processes each group. For each flow in a group, the list of paths is consulted. A new path is selected only if the gain in bandwidth by switching to that path is more than a given percentage of the current bandwidth usage of the flow. This is to prevent flows from constantly switching paths. If no such path can be found (or no path can support this flow), the flow does not change paths.

The algorithm processes the groups in ascending order of the number of paths that their flows have. This way, flows with only 1 path are handled first, then flows with 2 paths, etc. The idea is that flows with multiple paths can use those multiple paths and adjust to flows with fewer paths. Furthermore, when determining the amount of free bandwidth of a path, local information is always used. Optionally, the amount of free bandwidth is requested from controllers along the path, using at most n edges. If n is 0, then only local information is used. If n is 1 or more, then other controllers are queried for their free bandwidth information along the path. The retrieval of how much free bandwidth a path has is done via the Path bandwidth calculator component in figure 3.1. Finally, after all the flows have been processed, new flow entries are installed for the flows that changed paths.

Data: Graph G = (V, E), current node v, list of flows F, n (hops away)
Result: A list of flows with their assigned paths
Grouped flows F_grouped = None;
for flow in F do
    flow.paths = findAllShortestPaths(v, flow.destination);
    Remove paths that cannot support this flow's bandwidth requirement;
    Remove paths that lead back to the source of this flow (to prevent loops);
    Sort paths based on number of edges, so shorter paths are processed first;
    Add flow, based on flow.paths, to F_grouped;
end
for group in F_grouped do
    Sort flows in group based on priority;
    for flow in group do
        for path in flow.paths do
            Get the amount of free bandwidth b along at most n edges of this path;
            bandwidthGain = b - flow.bandwidthRequirement;
            if bandwidthGain > flow.bandwidthRequirement * pathGainThreshold then
                Assign this flow to path and update the amount of free bandwidth along this path;
                break;
            end
        end
    end
end
Algorithm 1: The algorithm in pseudocode, using flows as objects for simplicity.

Optionally, if not all flows fit (there is a shortage of available bandwidth), Weighted Fair Queuing (WFQ) could be used to allocate bandwidth to flows based on priority.

Figure 3.2: Algorithm Flowchart.

3.3 Retrieving information

In order for our approach to function, various pieces of information are needed. The network topology needs to be known, so that all paths are known and packets can be forwarded correctly. Furthermore, the controllers can communicate with each other to exchange information. The information that they exchange is a simple estimate of how much free bandwidth a certain link has (see paragraph 3.3.2). This non-local information is used in the decision-making process of the algorithm to make better decisions with regard to the network status.
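To make the control flow of algorithm 1 concrete, the following Python sketch implements the grouping and path-selection loop. It is a simplification under assumptions: flows and paths are plain dictionaries and lists, free_bandwidth() stands in for the Path bandwidth calculator of figure 3.1, and all names are invented for illustration; they are not the names used in the actual POX module.

# Sketch of the path-selection loop of algorithm 1 (illustrative names).
from collections import defaultdict

def assign_paths(flows, candidate_paths, free_bandwidth, path_gain_threshold=0.10, n=0):
    # flows: list of dicts with 'id', 'bandwidth' (estimated usage) and 'priority'.
    # candidate_paths(flow): paths already filtered and sorted by length.
    # free_bandwidth(path, n): estimated free bandwidth along at most n edges of path.
    grouped = defaultdict(list)
    for flow in flows:
        flow['paths'] = candidate_paths(flow)
        grouped[len(flow['paths'])].append(flow)

    assignments = {}
    assigned_load = {}                        # bandwidth claimed on a path in this run
    for num_paths in sorted(grouped):         # flows with fewer alternatives go first
        group = sorted(grouped[num_paths], key=lambda f: f['priority'], reverse=True)
        for flow in group:
            for path in flow['paths']:
                key = tuple(path)
                free = free_bandwidth(path, n) - assigned_load.get(key, 0.0)
                gain = free - flow['bandwidth']
                # Switch only if the gain exceeds a fraction of the flow's current usage.
                if gain > flow['bandwidth'] * path_gain_threshold:
                    assignments[flow['id']] = path
                    assigned_load[key] = assigned_load.get(key, 0.0) + flow['bandwidth']
                    break
    return assignments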

3.3.1 Network topology

First, information about the network topology needs to be known to all controllers, so they all know how to route packets over the network. Specifically, we need to know which host is connected to which switch behind which port, as well as all the links (which switch is connected to which other switch behind which port). Furthermore, we need to know the bandwidth capacities of all the links, which are used to calculate how much free bandwidth a link has. Since our approach uses multiple OpenFlow controllers, each controller must also know which controller controls which switch. This information could be learned and changed dynamically, as is common in the centralized approach, by using link discovery methods. Since handling topology issues is not part of our research, in our implementation all of this information is simply read from a file and handled by the Topology manager (see figure 3.1).

3.3.2 Bandwidth estimation for flows and paths

The piece of information that is used in the decision-making process is the amount of bandwidth a flow uses. Each controller uses a statistics handler and a bandwidth estimation handler, as seen in figure 3.1. The statistics handler queries the connected switch for the byte count values of all the flow entries (flows), and the bandwidth estimation handler keeps track of these measurement values. The statistics handler uses a certain query interval to measure the byte count values at regular times, giving the bandwidth estimation handler a good idea of the bandwidth usage of a flow. The controller also keeps track of the time at which the switch responds with this information. For each flow, all byte count measurements, and their respective measurement times, are used to calculate the intermediate bandwidth usages between consecutive measurements. The final estimated bandwidth is then simply the average of these intermediate values. Using this method, the controller can estimate how much bandwidth a flow uses, and use that estimate to make a decision.

Another critical part of the decision-making process is the amount of free bandwidth a path (a list of links) has. This information can be communicated to other controllers, who can use it in their decision-making process. The amount of free bandwidth that a link has is estimated using almost exactly the same process as the one for flows. However, instead of querying the switch for the statistics of all the flow entries, the switch is queried for the statistics of all the ports. The bandwidth usage of a port (and thus a link) is then estimated with the same method as for flows. Since the capacity of the link is known (see paragraph 3.3.1), we then have an estimate of how much free bandwidth a link has.

By increasing the value of the query interval, bandwidth estimates are made over a longer period of time, thereby making the estimations smoother. This also means that the implemented solution is less sensitive to changes in the bandwidth of flows than if a lower value were used. Of course, more complex bandwidth estimation methods could be used, but that is not the goal of this research project.
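A minimal sketch of this estimation scheme is shown below. It assumes timestamped byte-count samples as gathered by the statistics handler; the class and method names are invented for illustration and are not taken from the thesis' code.

# Sketch of the byte-count based bandwidth estimation (illustrative names).
class BandwidthEstimator(object):
    def __init__(self, num_samples=2):
        self.num_samples = max(2, num_samples)   # parameter m: at least 2 samples
        self.samples = []                        # list of (timestamp, byte_count)

    def add_sample(self, timestamp, byte_count):
        self.samples.append((timestamp, byte_count))
        self.samples = self.samples[-self.num_samples:]   # keep only the last m samples

    def estimate(self):
        # Average of the intermediate bandwidths between consecutive samples, in bits/s.
        if len(self.samples) < 2:
            return None
        rates = []
        for (t0, b0), (t1, b1) in zip(self.samples, self.samples[1:]):
            if t1 > t0:
                rates.append(8.0 * (b1 - b0) / (t1 - t0))
        return sum(rates) / len(rates) if rates else None

Fed with a sample every q seconds (the query interval of table 3.1), the same estimator can be reused for port statistics; subtracting that estimate from the known link capacity then gives the free bandwidth of a link.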

3.4 Other controller functionality

Besides the retrieval of the information necessary for the algorithm, other functionality is needed for the controller to function. First, communication between controllers is handled using JSON messages, communicating the amount of free bandwidth of a link. This is done via a simple request/response model using the controller-to-controller handler of each controller. A controller requests from another controller how much free bandwidth it has at a certain port (representing a link). The other controller then replies to the request with this information by consulting its own statistics. Other functionality is the handling of packet-ins and problematic flows, which are explained next.

3.4.1 Packet-ins for new flows

Incoming packets that do not match an existing flow entry yield a packet-in event, and are sent to the controller of the switch that received the packet. The controller then determines all the paths to the destination of this packet. It selects the path with the most free bandwidth, using at most n hops-away information, by querying other controllers. After the path has been chosen, a new flow entry is installed at the switch for this new flow.

Now that the flow is known to the controller, the algorithm is scheduled to run again after a configurable number of seconds; it is currently scheduled to run after three times the measurement/query interval. This gives the controller time to measure the bandwidth of the new flow and get an accurate estimate, so that it can make a good routing decision. Finally, the idle timeout and hard timeout values, which are set to 30 and 600 seconds respectively, control how long a flow entry is kept in the flow tables. The flow entry is removed if no packets have matched it for 30 seconds, or if 600 seconds have passed since the entry was installed or modified. When updating flow entries (assigning different paths to flows), the timers for the idle and hard timeout deadlines are reset, which is equivalent to removing and re-adding the flow entry. Other values are of course possible, but this falls outside the scope of the research project.

3.4.2 Algorithm and problematic flows

The algorithm is also scheduled to run every 30 seconds (another configurable interval). Furthermore, it is run again if the controller detects a problematic flow, which is signaled by the bandwidth estimation handler (see figure 3.1). A flow is deemed problematic if the new bandwidth estimation drops below a percentage of the previous bandwidth estimation. In order to detect this, a certain number of byte count samples and intermediate bandwidths are stored. The algorithm then runs for all flows that have a bandwidth estimation, and ignores flows that do not have an estimation yet; the latter will thus not change paths. Also, if desired, flows with an estimated bandwidth below a certain value can be excluded from triggering the algorithm. This is done so that low-bandwidth flows can be more or less ignored, as optimizing for those flows is not very interesting compared to optimizing for big flows.

3.5 Configurable parameters

Table 3.1 summarizes and describes all the parameters introduced in this chapter. Each parameter adjusts the workings of one of the components in figure 3.1, as explained in this chapter. Experiments will be done to test the performance of our method with regard to these components and their parameters (see chapter 5).

q (query interval / measurement interval): Controls how often the controller queries the switch for statistics on flow entries and ports (see figure 3.1).
a (algorithm interval): Specifies how often the controller should run the intelligent algorithm (see figure 3.1).
t (new flow timeout): Controls how long the controller should wait before running the algorithm after installing a flow entry for a new flow.
n (look ahead): Determines how much non-local information is used, if any. Its value corresponds directly to how many hops away along a path another controller is queried for statistics.
m (num samples): The number of byte count samples to store. A minimum value of 2 is required.
g (bw gain): The threshold to switch to another path. It is the bandwidthGain variable in algorithm 1.
p (path gain threshold): Determines the needed gain in bandwidth for a flow to switch to another path. Only if the bandwidth gain of switching to that path exceeds this value (percentage-wise) is the flow assigned to the new path.
f (filter): Flows with an estimated bandwidth below this value are filtered out from triggering the algorithm as a problematic flow (see figure 3.1).
s (bw sensitivity): Controls how sensitive the bandwidth estimation handler is to flows whose bandwidth estimation is lower than the previous estimation. If the drop in bandwidth usage is more than this value, percentage-wise, the algorithm can be signaled to run again (see figure 3.1), provided both estimations are not below the value of the filter parameter.
i (idle timeout): Determines how long an idle flow entry should be kept in the flow tables of the switch (see table 2.2).
h (hard timeout): Determines how long a flow entry should be kept in the flow tables of the switch (see table 2.2).

Table 3.1: All the parameters used in this research, along with a description of each parameter.
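As a rough illustration of how these parameters fit together, the snippet below collects default values and shows how f and s interact in the problematic-flow check of paragraph 3.4.2. The values follow the defaults stated in this chapter and the settings used in chapter 5 where they are given; the dictionary and function themselves are illustrative assumptions, not the actual configuration object of the implementation.

# Illustrative parameter defaults (a, t, p, f, i and h are stated in the text;
# q, m and s follow the best-performing values from chapter 5; n = 0 means local only).
PARAMS = {
    'q': 2.0,       # query/measurement interval (seconds)
    'a': 30.0,      # algorithm interval (seconds)
    't': 3 * 2.0,   # new flow timeout: three times the query interval
    'n': 0,         # look ahead (0 = only local information)
    'm': 2,         # number of stored byte count samples
    'p': 0.10,      # path gain threshold (10%)
    'f': 10e6,      # filter: ignore flows below 10 Mb/s (here in bits/s)
    's': 0.10,      # bandwidth sensitivity (10%)
    'i': 30,        # idle timeout (seconds)
    'h': 600,       # hard timeout (seconds)
}
# g (bw gain) is not configured; it is computed per candidate path in algorithm 1.

def is_problematic(prev_estimate, new_estimate, params=PARAMS):
    # Signal the algorithm when a flow's estimated bandwidth drops sharply.
    if prev_estimate is None or new_estimate is None:
        return False
    if prev_estimate < params['f'] and new_estimate < params['f']:
        return False                              # low-bandwidth flows are filtered out
    return new_estimate < prev_estimate * (1.0 - params['s'])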

CHAPTER 4
Implementation

Now that the approach has been explained in the previous chapter, some of the implementation details are discussed in this chapter. As explained, each OpenFlow switch in the network is controlled by its own OpenFlow controller. This controller is the open source POX controller [11]. An extension module for POX was written in Python, implementing all the components mentioned in the previous chapter. Although POX offers support for PyPy (which offers significant performance improvements over the standard Python interpreter [13]), this was not used in our implementation, since it was not compatible with various Python modules that were used. Since the implementation of the algorithm and the bandwidth estimation handlers is fairly straightforward (they are discussed in detail in chapter 3), the more important components are discussed here: the communication between controllers, and the handling of topology.

4.1 Controller to controller communication

As mentioned in the previous chapter, a controller can request statistics from another controller to enhance its routing decision. More precisely, it requests how much free bandwidth is available at a certain port, representing a link. This communication between controllers is done via simple JSON messages, using a request/response model. The controller-to-controller handler in figure 3.1 handles this communication. An example and the format of these messages is shown in listing 1.

{
    "type": "request",
    "port": 1,
    "origin": "c0"
}

{
    "type": "response",
    "bandwidth": 123,
    "origin": "c1"
}

Listing 1: JSON example messages showing the request and response format of the communication between two controllers c0 and c1. Here, c0 requests from c1 how much free bandwidth it has available at port 1. Controller c1 then responds with the requested information, telling c0 that it has 123 Mb/s available at that port.
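A minimal sketch of such a request/response exchange is shown below, using a plain TCP socket and the info_port from the topology file of section 4.2. The framing (one JSON object per connection) is an assumption made for illustration; it is not necessarily how the controller-to-controller handler is implemented.

# Sketch of the controller-to-controller free-bandwidth query (framing assumed).
import json
import socket

def request_free_bandwidth(peer_ip, peer_info_port, port_no, origin="c0"):
    # Ask the controller at (peer_ip, peer_info_port) for the free bandwidth at port_no.
    request = {"type": "request", "port": port_no, "origin": origin}
    sock = socket.create_connection((peer_ip, peer_info_port), timeout=1.0)
    try:
        sock.sendall(json.dumps(request).encode("utf-8"))
        sock.shutdown(socket.SHUT_WR)        # signal the end of the request
        chunks = []
        while True:
            chunk = sock.recv(4096)
            if not chunk:
                break
            chunks.append(chunk)
    finally:
        sock.close()
    response = json.loads(b"".join(chunks).decode("utf-8"))
    return response.get("bandwidth")         # estimated free bandwidth in Mb/s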

4.2 Topology handling

In order to simulate the network, Mininet [14] was used to create topologies with hosts, switches, and controllers. Links were added between hosts and switches, with a certain latency (currently 2 milliseconds) and various bandwidth capacities. Currently, no (random) loss takes place, unless the buffers of a switch are full. For simulating the switches, Open vSwitch [12] was used, along with its kernel module to enhance performance.

When creating these topologies, a JSON file was written containing the details of the network topology required by the controller (as discussed in section 3.3.1); it is handled by the Topology manager (see figure 3.1). An example scenario (figure 4.1) and its translation to this JSON config-file are shown in listings 2 and 3. When the controllers initialize, each one reads this config-file and uses it to create a MultiDiGraph: a directed graph where nodes can have multiple edges between them. This MultiDiGraph is consulted to find all the paths between two nodes, and to determine various other details, like the bandwidth capacity between nodes. In our case, a node is either a switch or a host. The graph functionality is provided by the NetworkX package.

Figure 4.1: Example of a scenario that can be created with Mininet. The translation to a JSON topology file is shown in listings 2 and 3.
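To show how a config-file in the format of listings 2 and 3 (below) can be turned into such a graph, the following sketch loads it into a NetworkX MultiDiGraph. The helper is illustrative and makes assumptions about details the listings do not fully specify (for example, host links are added in both directions with the same capacity); it is not the Topology manager itself.

# Sketch: load the topology JSON of listings 2 and 3 into a NetworkX MultiDiGraph.
import json
import networkx as nx

def load_topology(path):
    with open(path) as f:
        config = json.load(f)

    graph = nx.MultiDiGraph()
    topo = config["topology"]

    # Host <-> switch links (assumed symmetric, as the file stores a single capacity).
    for host, info in topo["hosts"].items():
        graph.add_edge(host, info["switch"], bandwidth=info["bandwidth"])
        graph.add_edge(info["switch"], host, port=info["switch_port"], bandwidth=info["bandwidth"])

    # Switch -> switch links; the three per-switch lists are read column-wise.
    for switch, info in topo["switches"].items():
        for peer, port, bw in zip(info["switches"], info["ports"], info["bandwidths"]):
            graph.add_edge(switch, peer, port=port, bandwidth=bw)

    return graph, config["cmap"]   # the graph plus the switch-to-controller mapping

# Example use: all shortest paths between two hosts.
# graph, cmap = load_topology("topology.json")
# paths = list(nx.all_shortest_paths(graph, "h0", "h1"))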

23 { "cmap": { "s0": "c0", "s1": "c1" }, "controllers": { "c0": { "info_port": 6634, "ip": " ", "of_port": 6633 }, "c1": { "info_port": 6636, "ip": " ", "of_port": 6635 } }, "hosts": { "h0": { "ip": " ", "mac": "aa:bb:cc:dd:ee:ff" }, "h1": { "ip": " ", "mac": "ff:ee:dd:cc:bb:aa" } }, "switches": { "s0": { "mac": "00:00:00:00:00:11" }, "s1": { "mac": "00:00:00:00:00:12" } }, Listing 2: JSON example file showing all the details of the network topology (part 1). The cmap key holds a mapping between switches and controllers, so each controller knows which controller handles which switch. The controllers key is a list of all controllers with their details. The info port is the port each controller uses to request statistics from other controllers, whereas the of port is used for the OpenFlow connection to a switch. The hosts and switches keys provide information about hosts and switches respectively, showing their MAC addresses (and IP addresses for hosts). 4.3 Implementation constraints Now that the approach and the important parts of the implementation have been explained, it can be observed that there are various (implementation) constraints. Currently, there is no support for flooding packets. Also, dynamic topologies, where hosts can freely move and connect via different switches, are not handled. The detection of broken links between switches is not implemented as well, and the dynamic detection of hosts, switches and controllers is not implemented either. However, these issues can be grouped under topology issues. It is possible to implement them, but this was not done as it is not part of the research. There are however, a few more important constraints, explained in the next paragraphs Manageability In our research, there is currently 1 controller for each switch. For very large networks, managing this may not be very practical. It is quite possible to have 1 controller manage multiple switches, 21

24 } "topology": { "hosts": { "h0": { "bandwidth": , "switch": "s0", "switch_port": 1 }, "h1": { "bandwidth": , "switch": "s1", "switch_port": 1 } }, "switches": { "s0": { "switches": [ "s1", "s1" ] "ports": [ 2, 3 ], "bandwidths": [ 900.0, ], }, "s1": { "switches": [ "s0", "s0" ] "ports": [ 4, 5 ], "bandwidths": [ 900.0, ], } } } Listing 3: JSON example file showing all the details of the network topology (part 2). The topology key is used to describe the network topology, by using a list of hosts and switches. Each host in that list ( hosts key) shows to which switch it is connected, what its (currently symmetric) bandwidth capacity to that switch is, and behind which port on the switch it is connected. For the switches key, a slightly different format is used. Each switch has a list of switches it is connected to, as well as behind which port and at which capacity. The three lists for each switch should be read column-wise. For instance, for switch s0, it is connected to switch s1 via port 2, with a bandwidth capacity of 900 Mb/s. without losing the entire distributed approach. One controller could control a group of switches, and make decisions for each switch based on all the information that it knows. Another option is to follow the distributed approach more closely, and use only the information for each switch that it would know as if the controller would control only that switch Flow issues More important are the issues related to flows. With the centralized approach, if the central controller receives a packet-in from a switch for a new flow, it chooses a new path for this flow. It then installs flow entries for this new flow along the chosen path. However, using the presented distributed approach, if a controller receives a packetin from his switch, it queries controllers along all the paths for their free bandwidth information. After the best path with the most free bandwidth has been chosen, the controller installs a flow entry for this new flow on his switch. The new flow then gets forwarded from this switch to the next switch along the chosen path. Now, this next switch sends a packetin to his controller, and the process repeats itself. For flows that need to be routed over a long path (because for instance, no shorter path is available), this introduces a lot of latency. This latency issue becomes bigger if every controller queries a lot of other controllers along all the paths. However, by using only local information when choosing a new path for a new flow, this issue can be reduced to the absolute minimum. The tradeoff is then that the newly chosen path may not be a very good path. 22

Another thing that can be improved is the fact that, right now, each flow has its own flow entry. Once again, for very large networks, this does not scale. Such networks can potentially have more network flows flowing through them than can be stored in the flow tables of their switches. Therefore, flow entries need to be aggregated into entries that match a group of flows, to reduce the number of entries. The level of aggregation then determines how fine-grained the control over the network is. In the current implementation, these aggregate flow entries are not used, but it is possible to implement them. Flow entries that match multiple flows would then be treated as 1 flow, and follow the usual steps.
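As an illustration of what such an aggregate entry could look like, the sketch below wildcards everything except the destination host, so that a single entry covers all flows towards h1. This is a hypothetical example using the POX OpenFlow 1.0 bindings; it is not something the current implementation installs.

# Hypothetical aggregate entry: match only on the destination MAC of host h1
# (taken from listing 2), so every flow towards h1 shares one flow entry.
import pox.openflow.libopenflow_01 as of
from pox.lib.addresses import EthAddr

def install_aggregate_entry(connection, out_port):
    msg = of.ofp_flow_mod()
    msg.match = of.ofp_match(dl_dst=EthAddr("ff:ee:dd:cc:bb:aa"))
    msg.idle_timeout = 30
    msg.hard_timeout = 600
    msg.actions.append(of.ofp_action_output(port=out_port))
    connection.send(msg)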


CHAPTER 5
Experimental results

The results are presented in this chapter. First, the setup that was used for achieving these results is explained, followed by the actual results. The first test is a functionality test, to see whether the solution works correctly and to test various parameter values and their effects on the bandwidth estimation accuracy and the load-balancing. The second test is a stability test, to see whether the solution is stable. Both tests are done for two scenarios.

5.1 Test-bed configuration

To measure the throughput between two hosts, Iperf was used to generate and measure both TCP and UDP traffic between two hosts. One host runs an Iperf client, and the other runs an Iperf server. The throughput measured by the Iperf server is also compared to the throughputs estimated by the controllers. This way, the direct impact of the solution on the throughput between the hosts can be measured.

Some parameters are set to a fixed value and are not experimented with. The path gain threshold is currently set to 10%. The tests that were done are not suited for experimenting with this parameter, because the tests and scenarios are fairly simple in nature, and this parameter is suited for more complex scenarios and tests. Also, the filter is currently set so that flows with an estimated throughput of 10 Mb/s or less are unable to trigger the algorithm, unless they increase in throughput.

5.1.1 First scenario

The first scenario that was tested is a very simple one. It consists of two hosts h0 and h1, two switches s0 and s1, and two controllers c0 and c1. There are 2 paths between the hosts: a slow path and a fast path (compared to each other). This simple scenario is used to see whether the presented solution works correctly, and to see the effect of the parameter values on the performance of the system. See figure 5.1.

Figure 5.1: Scenario 1: A simple scenario connecting 2 hosts via 2 switches, in order to validate the correct working of the system and the influence of the various parameters.

5.1.2 Second scenario

The second scenario is a bit more advanced, with 3 switches (and 3 controllers) in a triangle, and 3 hosts, each connected to a different switch. Once again, there are two paths between each pair of distinct hosts: one slow path and one fast path. This scenario is used for a slightly more complex test, namely to show how stable the solution is, and to see if, and how much, oscillation of paths takes place. See figure 5.2.

Figure 5.2: Scenario 2: A scenario connecting 3 hosts via 3 switches, in order to test the working and stability of the system, as well as the influence of the various parameters.

5.2 Performance measurements

Now that the test scenarios are known, the performance measurements can be presented. The tests are explained first, after which the results for each scenario are shown. For each test, a number of different parameter values were tested, to measure the performance of the implemented solution and the influence of these parameters. The bandwidth, as reported by Iperf, is compared to the bandwidth estimated by a controller. In our tests, UDP flows are used as background traffic (to simulate a network that is being used), and therefore only the results for the TCP flows are presented. The results for the UDP flows are also omitted because their bandwidth rarely changes significantly; the biggest change in bandwidth for these UDP flows was found to be at most 1 Mb/s in either direction. Overall, these UDP flows are very stable, as TCP adapts its transmission rate, but UDP does not.
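For reference, the traffic in the tests below could be generated from Mininet's Python API roughly as follows. The flow pattern (a UDP background flow of 80 Mb/s and a TCP flow started 5 seconds later) matches the functionality test described next, but the snippet itself is an illustrative assumption, not the test script used for this thesis.

# Sketch: driving Iperf between Mininet hosts for the functionality test
# (UDP background flow at 80 Mb/s for 90 s, TCP flow started 5 s later for 120 s).
from time import sleep

def run_functionality_test(net):
    h0, h1 = net.get('h0'), net.get('h1')
    h1.cmd('iperf -s -u -i 1 > udp_server.log 2>&1 &')    # UDP server on h1
    h1.cmd('iperf -s -i 1 > tcp_server.log 2>&1 &')       # TCP server on h1
    h0.cmd('iperf -c %s -u -b 80M -t 90 > udp_client.log 2>&1 &' % h1.IP())  # t = 0
    sleep(5)
    h0.cmd('iperf -c %s -t 120 > tcp_client.log 2>&1 &' % h1.IP())           # t = 5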

5.2.1 Functionality Test

The first test consists of installing static flow entries on each switch (so that packets can be routed between host h0 and host h1 without the controller and algorithm doing it). These static entries use the slow path of scenarios 1 and 2, with a capacity of 100 Mb/s. After these entries are installed, a new UDP flow is started at t = 0 from h0 to h1, with a fixed bit-rate of 80 Mb/s, thereby almost saturating the slow link. Five seconds later (at t = 5), a TCP flow is started from h0 to h1. The UDP flow runs for 90 seconds and then stops; the TCP flow runs for 120 seconds and then stops. A time-line of this test can be seen in figure 5.3.

Figure 5.3: A time-line representing the functionality test with the start and end of the UDP and TCP flow.

If the implemented solution works correctly (and good parameters have been chosen), we hope to observe that the controller detects the sub-optimal situation, runs the algorithm, and installs new flow entries on the switches. Ideally, it installs new flow entries such that the optimal solution is reached. In this case, the optimal solution would be to route the TCP flow over the fast path (900 Mb/s), and the UDP flow over the slow path (100 Mb/s). We will now show the results of this test for both scenarios with various parameters. For this test, only local information was used to make decisions, as neither scenario benefits from requesting free bandwidth information from another controller in this test.

First scenario

The first parameter that is tested is the query interval. As can be seen in figure 5.4, the performance of the algorithm depends quite heavily on a proper estimation of the bandwidth usage of flows. A query interval of 0.5 seconds yields a very spiky estimation graph, and leads the controller to believe that the flow has sufficient bandwidth. Using an interval of 1 second yields much better results. An interval of 2 seconds yields an even smoother estimation graph, while still showing comparable results. It was expected that the sub-optimal situation with regard to the TCP flow would be detected; however, this does not seem to be the case. The situation does get corrected after the algorithm is run again (algorithm interval), which is after about 30 seconds.

Figure 5.4 (panels: q = 0.5 seconds, q = 1.0 seconds, q = 2.0 seconds): Results for a query interval of 0.5, 1.0, and 2.0 seconds respectively. For each result, 2 byte count samples were stored, and a bandwidth sensitivity of 10% was used. Using a query interval of 2.0 seconds consistently leads to better results than using a query interval of 0.5 seconds, since the estimation is more accurate and shows the best performance.

The following results were achieved using 2, 5 and 10 measurement samples, with a query interval of 2.0 seconds and a bandwidth sensitivity of 10%.

Figure 5.5 (panels: m = 2, m = 5, m = 10): Results for 2, 5, and 10 stored byte count samples respectively. For each result, a query interval of 2.0 seconds was used, as well as a bandwidth sensitivity of 10%.

From figure 5.5, it can be seen that either 2 or 5 samples lead to good results. While the results with 10 stored samples are very smooth, the estimated bandwidth lags behind the real bandwidth too much (being too slow to update). Therefore, in the next test, 2 byte count samples are stored, and the influence of the bandwidth sensitivity values is shown. Values of 10%, 1% and 0.1% are tested.

Figure 5.6 (panels: s = 10%, s = 1%, s = 0.1%): Results for the bandwidth sensitivity parameter with values of 10%, 1%, and 0.1% respectively. For each result, a query interval of 2.0 seconds was used, and 2 byte count samples were stored.

As can be seen in figure 5.6, even a bandwidth sensitivity value of 0.1% is not sufficient to detect the sub-optimal situation in the first 30 seconds. Apparently, some other form of detection needs to take place to solve this issue.

Overall, for good parameters, the optimal solution is reached: the system of network flows converges to that optimum as soon as the algorithm is run, or shortly afterwards (within 10 seconds).

Second scenario

Following the results for the first scenario are the results of the same test for the second scenario. As can be seen in figure 5.7, very similar results with regards to bandwidth estimation and TCP throughput are achieved with the same parameters. Once again, a query interval of 2.0 seconds leads to the smoothest estimation, while still showing results comparable to those for an interval of 1.0 second. The results for a query interval of 0.5 seconds are too spiky and therefore very poor.

Figure 5.7: Results for a query interval of 0.5, 1.0, and 2.0 seconds respectively (panels: q = 0.5 s, q = 1.0 s, q = 2.0 s). For each result, 2 byte count samples were stored, and a bandwidth sensitivity of 10% was used.

As in the first scenario, a query interval of 2.0 seconds consistently leads to better results than an interval of 0.5 seconds. The following results are obtained with 2, 5 and 10 measurement samples, while using a query interval of 2.0 seconds and a bandwidth sensitivity of 10%.

Figure 5.8: Results for 2, 5, and 10 byte count samples respectively (panels: m = 2, m = 5, m = 10). For each result, a query interval of 2.0 seconds was used, as well as a bandwidth sensitivity of 10%.

From figure 5.8, it can be seen that either 2 or 5 samples lead to good results. While the results with 10 stored samples are very smooth, the estimated bandwidth lags too far behind the real bandwidth (it is too slow to update). Therefore, in the next test, 2 byte count samples are stored and the influence of the bandwidth sensitivity is shown, with values of 10%, 1% and 0.1%.
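The correction "after about 30 seconds" corresponds to the periodic re-run of the routing algorithm. The sketch below shows the general shape of such a control loop, polling flow statistics every query interval and re-evaluating flow placement every algorithm interval. The function names (collect_flow_stats, run_routing_algorithm, install_flow_entries) are hypothetical placeholders, not the actual controller API.

    import time

    QUERY_INTERVAL = 2.0       # seconds between byte-count queries
    ALGORITHM_INTERVAL = 30.0  # seconds between full re-runs of the routing algorithm

    def control_loop(collect_flow_stats, run_routing_algorithm, install_flow_entries):
        """Poll flow statistics frequently; re-evaluate flow placement periodically."""
        last_algorithm_run = time.time()
        while True:
            estimates = collect_flow_stats()  # refresh per-flow bandwidth estimates
            now = time.time()
            if now - last_algorithm_run >= ALGORITHM_INTERVAL:
                new_routes = run_routing_algorithm(estimates)  # may move flows to other paths
                install_flow_entries(new_routes)
                last_algorithm_run = now
            time.sleep(QUERY_INTERVAL)

Under this structure, a sub-optimal placement that the sensitivity check misses is only corrected at the next algorithm run, which matches the roughly 30-second delay observed above.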

Figure 5.9: Results of the bandwidth sensitivity parameter with values of 10%, 1%, and 0.1% respectively (panels: s = 10%, s = 1%, s = 0.1%). For each result, a query interval of 2.0 seconds was used, and 2 byte count samples were stored.

As can be seen in figure 5.9, just as in the first scenario, even a bandwidth sensitivity value of 0.1% is not sufficient to detect the sub-optimal solution in the first 30 seconds; some other form of detection is apparently needed to solve this issue. Overall, for good parameters, the optimal solution is reached: the system of network flows converges to that optimum as soon as the algorithm is run, or shortly afterwards (within 10 seconds).

5.2.2 Stability Test

The second test is a stability test, meant to check whether flows oscillate between paths. This time, no static entries are pre-installed, and new flows should be assigned to the path with the most free bandwidth. A TCP flow is started at h0 with destination h1 and runs for 10 minutes. After 5 seconds, a new UDP flow is started, again with a fixed bit-rate of 80 Mb/s, which runs for 30 seconds and then stops. After another 30 seconds have passed, this cycle of 30 seconds of UDP traffic followed by 30 seconds of waiting is repeated for as long as the TCP flow runs. If the implemented solution works properly, the TCP flow should not be influenced (or only very little) by the UDP flow. A time-line of this test is shown in figure 5.10.

Figure 5.10: A time-line representing the stability test with the start and end of the UDP and TCP flow.

Scenario 1

As can be seen in figure 5.11, in the 5 seconds after the TCP flow is started, it quickly reaches the maximum throughput of 900 Mb/s, but it then has to share the bandwidth with the UDP flow (hence the roughly 800 Mb/s speeds for about 30 seconds). This happens because, at the moment controller c0 receives the new UDP flow, the amount of free bandwidth on the 900 Mb/s path is larger than on the slower path, so the UDP flow is assigned to the fast link as well. After the UDP flow stops, the TCP flow reaches the maximum throughput again and is not influenced by the other flow from then on, which is exactly as desired (apart from the estimation peaks that occur).
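The assignment decision described here, giving a new flow the path with the most free bandwidth, can be sketched as follows. The path representation and the free-bandwidth computation are simplified assumptions for illustration; the actual controller derives free bandwidth from its link capacities and flow estimates.

    def free_bandwidth(path, link_capacity, link_usage):
        """Free bandwidth of a path is limited by its most loaded link."""
        return min(link_capacity[link] - link_usage.get(link, 0.0) for link in path)

    def choose_path(paths, link_capacity, link_usage):
        """Assign a new flow to the path with the most free bandwidth."""
        return max(paths, key=lambda p: free_bandwidth(p, link_capacity, link_usage))

    # Illustrative situation resembling the moment the UDP flow arrives: the TCP
    # estimate on the fast path is still low, so the fast path appears to have the
    # most free bandwidth and the UDP flow joins it, matching figure 5.11.
    capacity = {"fast": 900e6, "slow": 100e6}
    estimated_usage = {"fast": 200e6, "slow": 0.0}
    print(choose_path([["fast"], ["slow"]], capacity, estimated_usage))  # -> ['fast']

Whether such a greedy choice causes flows to oscillate once both paths carry traffic is exactly what this stability test examines.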
