IRMS: An Intelligent Rule Management Scheme for Software Defined Networking

Lei Wang, Qing Li, Yong Jiang, Yu Wang, Richard Sinnott, Jianping Wu
Tsinghua University, China; University of Melbourne, Australia; University of North Carolina at Charlotte, USA

Abstract—Software Defined Networking (SDN) enables network innovation and brings flexibility by separating the control and data planes and logically centralizing control. However, this network paradigm complicates flow rule management. Current approaches generally install rules reactively after table misses or pre-install them by flow prediction. Such approaches consume nontrivial network resources during interactions between the controller and switches (especially for maintaining consistency). In this paper, we explore an intelligent rule management scheme (IRMS), which extends the one-big-switch model and employs a hybrid rule management approach. To achieve this, we first transform all rules into path-based and node-based rules. Path-based rules are pre-installed, whilst the paths for flows are selected at the edge switches of the network. To maintain consistency of forwarding paths, we update path-based rules as a whole and employ a lazy update policy. Node-based rules are optimally partitioned into disjoint chunks by an intelligent partition algorithm and organized hierarchically in the flow table. In this way, we significantly reduce the interaction cost between the control and data planes. The scheme enforces an efficient sliding window policy to enhance the hit rate of the installed chunks. We evaluate our scheme through comprehensive experiments. The results show that IRMS reduces the total number of flow entries by more than 71% on average and the update time by over 56%. IRMS also reduces the flow setup requests by more than one order of magnitude.

I. INTRODUCTION

As an emerging networking paradigm, Software Defined Networking (SDN) [1, 2] is widely influencing the evolution of network architectures.
Separating the control plane from the data plane and centralizing the intelligence of the network into the controller(s) provides considerable convenience for network management and accelerates network innovation [3-5]. However, this centralization introduces obstacles to flow rule management. For flexibility, the controller typically installs rules reactively when a new flow incurs a table miss. However, this flexibility sacrifices forwarding performance, because frequent interactions between the control and data planes cause nontrivial resource consumption and increased communication latency. State-of-the-art rule management schemes focus on caching more rules in the data plane to reduce the performance penalties of table misses. For instance, CAB [6] splits the rule space into numerous non-overlapping buckets, and treats the rules in a bucket as a whole for installation and updates. A big challenge for these approaches is the consistency of rules along a forwarding path, since any inconsistency of the cached rules may require rule reinstallation or even cause incorrect packet behavior. A more radical approach is to install the rules before flows occur. DIFANE [7] and CacheFlow [8] are representative of these proactive schemes. They first divide the rule set into several subsets according to rule dependencies and switch capacity, and then distribute them on certain selected switches. However, such proactive schemes lose the ability to generate rules dynamically according to evolving network states. The high cost of updating is also an obstacle in these schemes, since any modification of a match field or change of rule placement is likely to break the existing dependencies, causing rule redistribution.
Furthermore, installing all possible rules in advance imposes heavy pressure on the flow tables of switches, since SDN switches usually store rules in ternary content addressable memory (TCAM), which is a scarce and expensive resource. Additionally, abundant match fields and fine-grained rules in SDN aggravate memory pressure. In this paper, we propose an Intelligent Rule Management Scheme (IRMS) that aims at providing a novel trade-off between flexibility and forwarding performance. We maintain intelligence at the network edge, where interactions with the controller occur. All the core switches concentrate on forwarding tasks to achieve higher performance. To achieve this, we classify flow rules into two types: path-based rules and node-based rules. Path-based rules are a group of rules that cooperate to enforce a routing policy along a forwarding path. We calculate the possible paths of the network applications in advance and pre-install all related path-based rules. To guarantee consistency, IRMS treats the group of path-based rules as a whole and ensures they have the same life cycle, i.e., they are updated together proactively by an update manager module and none of them is withdrawn reactively, e.g., due to timeout. To keep the flexibility of SDN, we adopt an improved reactive approach for node-based rules. We partition them into disjoint chunks and employ hierarchical matching to eliminate rule dependencies. We also employ an intelligent policy to install the chunks according to the historical traffic and the TCAM occupancy rate of the edge switch. We evaluate our scheme through a Mininet-based [9] emulation with different topologies and rule sets. We compare our scheme with both proactive and reactive schemes. Our
results show that: 1) IRMS is more efficient in flow table management, reducing the number of flow entries by more than 71%; 2) IRMS reduces the flow setup requests by more than one order of magnitude and achieves a cache hit rate above 80% on average; 3) IRMS reduces the average update time by at least 56%; 4) IRMS introduces less than 10% resource overhead, measured by CPU and memory consumption. The contributions of this paper are as follows: As far as we know, we are the first to propose an intelligent flow rule management scheme for SDN that employs both proactive and reactive approaches for different types of rules. We construct a flow rule management model for SDN, which keeps interactions between the controller and the switches at the network edge. We prove that the chunk partition problem of node-based rules in IRMS is NP-complete and design an intelligent partition algorithm to solve it. We implement a prototype of IRMS and achieve significant performance improvements with low overheads.

II. BACKGROUND AND MOTIVATION

A. Existing Approaches

Flow rule management has been a key problem from the beginning of SDN. Our work is inspired by several previous works, which we cover briefly as follows. Reactive Rule Management: Ethane [10], which is widely regarded as the origin of SDN, employs a typical reactive rule management mechanism. Its flow setup process is usually considered the standard for SDN, namely: 1) Switches forward the packet-in message to the controller after determining that the packet does not match any active entries. 2) On receipt of the packet, the controller decides whether to allow or deny the flow according to the policy. 3) If the flow is allowed, the controller computes the flow's route and adds a new entry to the flow tables of all switches along the path. However, frequent interactions between the controller and the switches impede scalability and communication performance.
CAB [6] aims at caching more rules to reduce performance impairment. It partitions the rule space into several disjoint buckets; if a flow matches one bucket, all the rules in that bucket are installed. This scheme focuses on managing the rules on a single switch. However, it is necessary to consider all the switches in the network. For example, the switches along a given forwarding path must be kept consistent; otherwise, even if the flow matches a certain bucket on some switches, it is still forwarded to the controller as long as there is one mismatch along the path. Proactive Rule Management: Proactive approaches aim to keep all traffic in the data plane instead of consulting the controller. As one example, DIFANE [7] partitions all rules over several selected switches. Similarly, CacheFlow [8] installs popular rules in the TCAM and other rules on software switches to handle mismatched flows. A common point of these schemes is that they must pay great attention to the dependencies between rules: if a rule is installed on one switch, all the higher-priority rules whose match fields intersect with this rule must also be installed. However, well-planned rule partitions make rule updates more difficult, and this is especially challenging when rules change dynamically according to evolving network states.

B. Design Paradigm

To manage flow rules intelligently, our scheme IRMS aims to achieve the following goals: 1) Correctness: Correctness is the basic and most important goal for any rule management scheme. The flow must match and enforce the correct rule according to the policy, no matter how the scheme handles overlapping rules, i.e., our scheme must ensure all network behaviors are correct. 2) Flexibility: Flexibility is considered the guarantee of network innovation. Thus, installing all or part of the rules reactively is vital; without this, SDN would devolve into a solution similar to VLAN or MPLS.
3) High performance: Minimal flow setup time is required. Thus any scheme should make every effort to reduce the interaction between the controller and switches. 4) Resource saving: Since TCAM is a scarce resource, a scheme must not add pressure to the flow table. The scheme is also expected to be a lightweight program that does not add excessive computational overhead. 5) Update friendliness: As networks continually evolve, rule updates are inevitable. Any scheme should aim at providing an intelligent update approach, i.e., speeding up the rule update process while making minimal impact on existing rules.

III. FRAMEWORK DESIGN FOR IRMS

In this section, we describe the framework and data plane design of IRMS. First, we formalize the definitions used in order to illustrate our scheme clearly. We use a quadruple (Match, Ins, Loc, Pri) to define a flow rule R. The elements denote the match field, instruction, located switch and priority, respectively. As noted, IRMS classifies all rules into two basic types: path-based rules and node-based rules.

Definition 1 (Path-based Rules P). We set the conditions as follows:
P = {R_1, R_2, ..., R_m}
Match_1 = Match_2 = ... = Match_m
Ins_1 ≈ Ins_2 ≈ ... ≈ Ins_m
{Loc_1, Loc_2, ..., Loc_m} constructs a loop-free path between an ingress switch and an egress switch.

The operator ≈ implies two instructions are equivalent, e.g., forward to a port or set the same queue. Different from path-based rules, node-based rules N are usually single-switch behaviors. For instance, an access control list (ACL) rule (1.1.0.2/24, Drop, S_1, 1) is a node-based rule.

A. IRMS Architecture

IRMS employs different solutions to handle the two types of rules to achieve the aforementioned goals (Section II-B).
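Definition 1 can be checked mechanically. The following sketch, with an assumed tuple encoding of the quadruple and a deliberately simple instruction-equivalence test (all instructions are forwarding actions), illustrates how a rule group might be classified as path-based; the switch names and link set are illustrative:

```python
from collections import namedtuple

# Hypothetical encoding of the paper's rule quadruple (Match, Ins, Loc, Pri).
Rule = namedtuple("Rule", ["match", "ins", "loc", "pri"])

def is_path_based(rules, links):
    """Definition 1: identical match fields, equivalent (forwarding)
    instructions, and locations forming a loop-free path over `links`."""
    if len({r.match for r in rules}) != 1:
        return False                               # Match_1 = ... = Match_m
    if not all(r.ins[0] == "fwd" for r in rules):  # Ins_1 ~ ... ~ Ins_m
        return False
    locs = [r.loc for r in rules]
    if len(set(locs)) != len(locs):                # loop-free
        return False
    return all((a, b) in links for a, b in zip(locs, locs[1:]))
```

A node-based rule such as the ACL example fails this test, since a single drop action is not a forwarding instruction along a path.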
The framework of an SDN network applying IRMS is illustrated in Figure 1. We pre-install all path-based rules on the required switches, while installing node-based rules reactively. To achieve the high-performance goal, our scheme includes a caching mechanism for node-based rules. We partition these rules into several non-disjoint chunks with an upper bound on size, and install all rules of a chunk when a table-miss event occurs. IRMS has four key modules: the management module, install module, monitor module and update module. The install module speaks southbound protocols (e.g., OpenFlow and NETCONF) with the data plane devices and installs the related rules. The monitor module is responsible for network state collection. The update module handles scheduling of the update policy and notifies the details to the install module. The brain of IRMS is the management module: it performs all computation tasks and interacts with the other modules and the rule database.

B. Data Plane Design

To support IRMS, we design three levels of logical flow tables, as illustrated in Figure 2. The rules in the first level of the logical table are the chunk-match rules. The second level contains the node-based rules whose chunks are installed; in practice, this level may include an internal pipeline to support more complicated matching logic. The last level contains all of the path-based rules whose start point is the switch.

Figure 1. IRMS Architecture

To understand this architecture, we consider a simple example.
Path-based rules for two forwarding paths (path_1: IS_1, CS_1, CS_3, ES_1 and path_2: IS_1, CS_2, ES_1) are already installed on the switches. A load balancing policy based on the source IP network address is (1.1.2.128/25 -> path_1, 1.1.2.0/25 -> path_2). When a packet with source IP address 1.1.2.233 arrives and misses in IS_1, a packet-in message is forwarded to the controller. The management module then fetches the rule chunk from the database and notifies the install module to install the related rule chunk in IS_1. To guarantee correct flow behavior, the node-based rule chunk must contain the rule (1.1.2.128/25, set flag=1, IS_1, 5). Depending on the cache policy, it may also contain the rule (1.1.2.0/25, set flag=2, IS_1, 5) and other neighbouring rules. The chunk partition algorithm and cache policy are discussed in detail in Section IV. To ensure resource saving, our scheme does not increase the number of rules as long as there are enough common paths. It is straightforward to see that IRMS does not increase the total number of rules as long as at least 1/m of the paths are common, where the parameter m is the average length of the forwarding paths. Recent research on SOL [11] also shows that in many practical scenarios, the number of valid paths is likely to be very small. Thus, we can infer that IRMS can decrease the total number of rules significantly.

Figure 2. The framework of an SDN data path applying IRMS

Figure 2 shows that if the packet matches a chunk rule in the first table, it goes to the next table to find the precise node-based rule(s). In the second level of the logical table, a flag is set on the packet in an unused field (e.g., VLAN, MPLS) to indicate the forwarding path. At the last stage, the packet matches the path-based rule according to this flag. In IRMS, managing the flags at one hop is sufficient to manipulate all forwarding behaviors of the whole network.
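The table walk in this example can be sketched as follows. The prefixes, labels, and port names mirror the load-balancing example above, but the lookup logic is a heavy simplification of a real switch pipeline:

```python
import ipaddress

# Table 0: chunk-match rules (the installed chunk covers 1.1.2.0/24).
# Table 1: node-based rules that set a path label (flag).
# Table 2: path-based rules that map the label to an output port.
table0 = ["1.1.2.0/24"]
table1 = [("1.1.2.128/25", 1), ("1.1.2.0/25", 2)]
table2 = {1: "port3", 2: "port2"}

def pipeline(src_ip):
    ip = ipaddress.ip_address(src_ip)
    if not any(ip in ipaddress.ip_network(p) for p in table0):
        return "packet-in"                  # table miss: ask the controller
    for prefix, label in table1:            # first matching node-based rule
        if ip in ipaddress.ip_network(prefix):
            return table2[label]
    return "packet-in"
```

With these assumed tables, the packet from 1.1.2.233 matches the chunk, receives label 1 for path_1, and is output directly, while a source outside the chunk still triggers a packet-in.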
It is noted that the data plane design is supported by current standard SDN switches and needs no data plane modification.

IV. KEY ALGORITHM

A. Chunk Partition Problem

According to the aforementioned design, node-based rules must be divided into a number of chunks. We formulate this problem as follows. For a given set of N node-based rules with K match fields, we would like to partition the flow space into several chunks. Each chunk has a maximum capacity of M rules. Each chunk is a regular hypercube, because hypercubes are easy to represent as wildcard partition rules. The optimization objective is to generate as few chunks and multi-chunk rules as possible. We define a cost function as the optimization objective. Assume that the partition solution generates n chunks; the total number of rules is then N + n. With regard to updates, the multi-chunk rules (Φ) occupy more space on the switch than single-chunk rules, since they are removed only when all their associated chunks have expired. To represent the cost,
we normalize the two indices and employ a positive value (λ) to adjust the weight:

min Cost                                                      (1)
s.t. Cost = n / (N + n) + λ · Φ / (N + n + Φ)                 (2)
Φ = Σ_{i=1}^{N} ( Σ_{j=1}^{n} p_{i,j} − 1 )                   (3)
Σ_{i=1}^{N} p_{i,j} ≤ M,   ∀ j ∈ {1, 2, ..., n}               (4)
K(M + 1) ≤ S                                                  (5)
Σ_{j=1}^{n} p_{i,j} ≥ 1,   ∀ i ∈ {1, 2, ..., N}               (6)
p_{i,j} ∈ {0, 1}                                              (7)

In these constraints, p_{i,j} is a rule-inclusion indicator for chunk j: we set it to 1 if chunk j contains rule i, and 0 otherwise. (4) is the chunk size constraint, and (6) denotes that each node-based rule belongs to at least one chunk. (5) states that a switch with capacity S can hold at least K chunks.

Theorem 1. The chunk partition problem is NP-complete.

Proof. Step 1 (Problem Transformation): Since this is an optimization problem, there exists an equivalent decision problem: given a partition cost Cost_c, decide whether there exists a solution that satisfies all of the constraints above with cost less than or equal to Cost_c. Step 2 (NP proof): For every n, the solution is a Boolean array P = [p_{i,j}]_{N×n}. The target function is a polynomial function of the solution. Thus, we can verify whether the Cost is valid (or not) in polynomial time. Step 3 (NP-hardness proof): To prove this problem is NP-hard, we show that Bin Packing [12] ≤_p chunk partition, i.e., we show how to reduce any instance of Bin Packing to an instance of chunk partition in polynomial time. Let the bin be the chunk and the item be the rule. The constraint of Bin Packing (Σ_{j=1}^{n} p_{i,j} = 1) is a special case of (6), and the other constraints are the same. Therefore, a valid solution to the Bin Packing instance also satisfies the chunk partition problem, which means Bin Packing reduces to chunk partition in polynomial time. Considering these three steps, we conclude that the chunk partition problem is NP-complete.

B.
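As a concrete reading of the objective, the sketch below evaluates (2) and (3) for a candidate partition. Here `membership[i]` holds the set of chunks that rule i belongs to (the nonzero p_{i,j}), and `lam` stands for λ with an illustrative default:

```python
def partition_cost(membership, n_chunks, lam=1.0):
    """Cost of a candidate partition, following (2) and (3)."""
    N = len(membership)
    # Phi: extra chunk memberships contributed by multi-chunk rules, eq. (3)
    phi = sum(len(chunks) - 1 for chunks in membership)
    n = n_chunks
    return n / (N + n) + lam * phi / (N + n + phi)
```

For example, four rules in two chunks where one rule spans both chunks give Φ = 1 and a cost of 2/6 + 1/7.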
Algorithm Design

Considering the computation cost, we design a heuristic algorithm for the chunk partition problem; the pseudocode is shown in Algorithm 1. We employ a decision tree to partition the rules. The root node of the tree represents the entire rule space. In each round of the partition, we pick a node in the decision tree that has more than M rules. The splitting process terminates when each leaf node has a valid number of rules (at most M). For each split, we choose the dimension that yields the maximal number of non-multi-chunk rules after the partition. In the end, each leaf node in the decision tree corresponds to a valid chunk. We use a simple example in Figure 3 to illustrate our algorithm: we select the cutting fields F_2, F_2 and F_3 in three rounds to obtain the appropriate chunks.

Algorithm 1 Intelligent Partition Algorithm
1: Initialization: k = 0. T_0 is the tree node that represents the entire rule space.
2: k ← k + 1. Pick a tree node T_k to split. T_k is a leaf node in the tree that contains more than M rules in its hypercube. If no such node exists, go to step 6.
3: For each candidate match-field dimension, try to split the rule space in that dimension into 2 parts and record the number of non-multi-chunk rules in the resulting partitions.
4: Choose the dimension with the maximal number of non-multi-chunk rules.
5: Put all child nodes of T_k in the tree and go to step 2.
6: Traverse the tree in pre-order and label the leaf nodes with chunk numbers. Use a hash table to record all the chunks each rule belongs to.

      Field 1   Field 2   Field 3   Field 4
R1    0-1       13-15     2-3       0
R2    0-1       13-15     1         2
R3    0-1       8-10      0-3       2
R4    0-1       8-10      2         3
R5    0-15      0-7       0-3       1
R6    0-15      14-15     2         1
R7    0-15      14-15     2         2
R8    0-15      0-15      0-3       0-3

Figure 3. A simple example for Algorithm 1, with chunk size M = 3

V. ONLINE OPTIMIZATION

A. Cache More Chunks

After the partition, the basic scheduling unit of our scheme is the chunk.
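A condensed sketch of the decision-tree heuristic follows. It uses binary midpoint splits over integer range hypercubes, and simplifies the field-selection step to "fewest rules crossing the cut", which is one way of maximizing the number of non-multi-chunk rules; Algorithm 1's exact selection criterion may differ:

```python
def overlaps(rule, box):
    """A rule (a list of (lo, hi) ranges per field) intersects a hypercube."""
    return all(rl <= bh and bl <= rh
               for (rl, rh), (bl, bh) in zip(rule, box))

def partition(rules, box, M):
    """Recursively split `box` until each leaf holds at most M rules."""
    inside = [r for r in rules if overlaps(r, box)]
    if len(inside) <= M:
        return [box]                       # a leaf node = one valid chunk
    best = None
    for dim, (lo, hi) in enumerate(box):
        if lo >= hi:
            continue                       # cannot split a unit range
        mid = (lo + hi) // 2
        left, right = list(box), list(box)
        left[dim], right[dim] = (lo, mid), (mid + 1, hi)
        # rules crossing the cut would become multi-chunk rules
        cut = sum(overlaps(r, left) and overlaps(r, right) for r in inside)
        if best is None or cut < best[0]:
            best = (cut, left, right)
    _, left, right = best
    return partition(inside, left, M) + partition(inside, right, M)
```

For instance, four disjoint one-dimensional rules with M = 2 split into two chunks, and with M = 4 the root stays a single chunk.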
In practice, considering chunk correlation and switch capacity, we can further optimize the caching of node-based rules, i.e., at each table-miss event, we can install more than one chunk. Cache optimization policy: Because we cannot obtain the traffic array TC_n in real time, the historical traffic matrix TH is employed to make a prediction. The chunk selection follows the principle:

s_chunk = argmax_i { p(chunk_i match | chunk_{i−1} match, chunk_{i−2} match, ...) }        (8)

If the selected chunk is already installed, we choose the chunk with the second highest probability. In theory, many state-of-the-art methods can be used to calculate the probability based
on the historical traffic. In our scheme, we assume the incoming chunk chain is a Markov chain, i.e.,

s_chunk = argmax_i { p(chunk_i match | chunk_{i−1} match) }        (9)

To determine the appropriate number of chunks, we design a sliding window mechanism. Initially, we set an initial install number (α) and a threshold value (σ). σ is the bound on the TCAM occupancy rate of the switch. When the occupancy rate of the TCAM (θ) is under σ, the window size is increased by 1 when a new table-miss event happens; otherwise, the window size decreases to half of its current value. We set the lower bound of the install number to 1. The policy is shown in formula (10):

w_size_n = α,                                  n = 1
w_size_n = w_size_{n−1} + 1,                   n > 1 and θ ≤ σ
w_size_n = max{1, 0.5 · w_size_{n−1}},         otherwise        (10)

Furthermore, if a newly installed chunk was replaced in previous rounds, we increase its timeout value according to the replacement frequency (f) and the interval time (interval), as formula (11) shows. Here T denotes the baseline of the interval time and τ denotes the baseline of the compensation time for frequently replaced chunks:

timeout = timeout_init + (f + T / interval) · τ        (11)

B. Rule Update Problem

Update policy for path-based rules: Path-based rules represent the valid forwarding paths in the network and are more stable than node-based rules. Thus, we use a lazy update policy for them, i.e., the update module in our system updates them at set intervals. During an interval, if the monitor module detects a path failure, we only change the label mapping policy in the node-based rules. Update policy for node-based rules: Node-based rules are updated immediately whenever conditions change. For example, if a node-based rule changes, we check all chunks to which it belongs. For uninstalled chunks, the management module simply refreshes the rule and updates the database. Installed chunks need to be updated with a consistency policy. The details are omitted here for brevity.

VI. EVALUATION

A.
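Formulas (10) and (11) can be sketched directly. The values of α, σ, and the baselines below are illustrative defaults, not figures from the paper:

```python
def next_window(prev, theta, n, alpha=4, sigma=0.8):
    """Sliding window size per formula (10)."""
    if n == 1:
        return alpha                       # initial install number
    if theta <= sigma:                     # TCAM occupancy under the bound
        return prev + 1
    return max(1, int(0.5 * prev))         # halve, lower-bounded by 1

def chunk_timeout(f, interval, timeout_init=10.0, T=5.0, tau=2.0):
    """Timeout compensation for frequently replaced chunks, formula (11)."""
    return timeout_init + (f + T / interval) * tau
```

The window thus grows additively while the TCAM has headroom and backs off multiplicatively once the occupancy threshold is crossed.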
Simulation Setup

To evaluate our scheme, we implement a prototype of IRMS. We use a lightweight database to store all of the rules. Before importing the rules into the database, we preprocess them: node-based rules are partitioned and all rules are transformed into a specified JSON format. All path-based rules are installed in advance via the REST API. We use Ryu as the controller and Open vSwitch as the data plane switch, running on a machine with a quad-core 2.6 GHz Intel CPU and 16 GB of memory. Rules: In our simulation, we use ClassBench [13] to create 2-10k synthetic ACL rules with 5 fields. The ALLOW rules are randomly distributed over valid paths as path-based rules and the other rules are used as node-based rules. Topology: We use Mininet [9] to generate several topologies: a simple 5-node topology (i.e., the topology of Figure 1), an 8-Fat Tree and AS 29 from Rocketfuel [14]. Traffic: We generate a host for each ingress/egress switch to send or receive traffic. For each host, the source and destination IP addresses are set in accordance with the synthetic rules. We assume that flow sizes follow a Pareto distribution.

B. Simulation Results

Rule Number: We select 5 different groups of traffic and rule sets to measure the number of rules in all switches. The results in Figure 4 show that our scheme can reduce flow rules by more than 71% compared to other rule management approaches (including CAB [6] and CacheFlow [8]). We also measure the maximal TCAM occupancy rate of each scheme. The results in Figure 5 show that IRMS has an occupancy rate similar to CAB and much lower than CacheFlow.

Figure 4. Total Number of Rules
Figure 6. Flow Setup Time
Figure 5.
Maximal TCAM occupancy rate
Figure 7. Flow Setup Request

Flow Setup/Transmission: To evaluate the forwarding performance of IRMS, we measure the worst-case flow setup time and the number of flow setup requests compared with CAB and an exact-match scheme, since the proactive scheme (e.g., CacheFlow) need not interact with the controller at the flow setup stage. Two main factors affect the flow setup: the interactions with the rule database and the flow-mod/packet-out process. In our simulation, we vary the flow send rate using Scapy tools. The results show that IRMS has a tolerable flow setup time across different topologies (Figure 6) and reduces the flow setup requests by more than one order of magnitude (Figure 7). We also measure the bandwidth consumption of the controller-switch channel to evaluate the performance impairment during the interaction between the control and data planes, with favorable results (Figure 8). Cache Hit Rate: We compare our scheme with CacheFlow [8] and CAB [6] to evaluate the hit rate. We count cache miss events (i.e., we count all the Packet-In packets, subtracting
LLDP, ARP and IPv6 packets).

Figure 8. Resource Consumption
Figure 9. Cache Hit Rate
Figure 10. Update Time
Figure 11. Effect of chunk size

The results in Figure 9 show that our average cache hit rate is above 80%, which is similar to CacheFlow and higher than CAB. Update Evaluation: We evaluate the average update time of our scheme by randomly changing a group of rules in the database. We also compare the results to CacheFlow and CAB, achieving more than a 56% improvement (Figure 10).

C. Parameter Sensitivity Analysis

Since chunk size is an important parameter in the chunk partition algorithm, we evaluate how it affects our scheme. Figure 11 presents the effect of tuning the chunk size on flow setup time. The larger the chunk size, the higher the rule cache hit rate; however, when a table miss occurs, more time is spent querying the database for updates. Thus, our scheme performs best with a moderate chunk size (i.e., 12-18 rules).

D. Overhead Evaluation

We measure the memory and CPU usage with different topologies on the same machine and evaluate the overheads by comparison with an instance running only an L2-learning app on the Ryu controller and Mininet. The results in Table I show that the added overhead of IRMS is less than 10%.

Table I. RESOURCE OVERHEAD

Topology      L2/CPU  L2/Mem  IRMS/CPU  IRMS/Mem
Simple Topo   4%      7%      8%        12%
8 Fat-Tree    26%     45%     33%       54%
AS 29         33%     58%     42%       67%

VII. CONCLUSION

In this paper, we design an intelligent rule management scheme (IRMS) for SDN that separates node-based rules from path-based rules. We label valid paths and pre-install the path-based rules. For node-based rules, we partition them into disjoint chunks and install them reactively.
We keep the interaction between the controller and the switches at the network edge, and we use different update policies for the two types of rules. The results of our comprehensive experiments show that our work makes a significant improvement in flow rule management for SDN.

VIII. ACKNOWLEDGEMENT

This work is supported by the National Natural Science Foundation of China under grant No. 6142255, the R&D Program of Shenzhen under grant No. JCYJ215631714683, and No. Shenfagai(215)986.

REFERENCES

[1] N. McKeown, T. Anderson, H. Balakrishnan, G. Parulkar, L. Peterson, J. Rexford, S. Shenker, and J. Turner, "OpenFlow: Enabling innovation in campus networks," ACM SIGCOMM Comput. Commun. Rev., vol. 38, no. 2, pp. 69-74, 2008.
[2] Q. Yan, F. R. Yu, Q. Gong, and J. Li, "Software-defined networking (SDN) and distributed denial of service (DDoS) attacks in cloud computing environments: A survey, some research issues, and challenges," IEEE Communications Surveys & Tutorials, vol. 18, no. 1, pp. 602-622, 2016.
[3] H. Kim and N. Feamster, "Improving network management with software defined networking," IEEE Communications Magazine, vol. 51, no. 2, pp. 114-119, 2013.
[4] L. Cui, F. R. Yu, and Q. Yan, "When big data meets software-defined networking: SDN for big data and big data for SDN," IEEE Network, vol. 30, no. 1, pp. 58-65, 2016.
[5] Q. Yan and F. R. Yu, "Distributed denial of service attacks in software-defined networking with cloud computing," IEEE Communications Magazine, vol. 53, no. 4, pp. 52-59, 2015.
[6] B. Yan, Y. Xu, H. Xing, K. Xi, and H. J. Chao, "CAB: A reactive wildcard rule caching system for software-defined networks," in Proceedings of ACM HotSDN, Chicago, USA, 2014.
[7] M. Yu, J. Rexford, M. J. Freedman, and J. Wang, "Scalable flow-based networking with DIFANE," in Proceedings of ACM SIGCOMM, New Delhi, India, 2010.
[8] N. Katta, O. Alipourfard, J. Rexford, and D.
Walker, "CacheFlow: Dependency-aware rule-caching for software-defined networks," in Proceedings of ACM SOSR, Santa Clara, CA, 2016.
[9] N. Handigol, B. Heller, V. Jeyakumar, B. Lantz, and N. McKeown, "Reproducible network experiments using container-based emulation," in Proceedings of ACM CoNEXT, Nice, France, 2012, pp. 253-264.
[10] M. Casado, M. J. Freedman, J. Pettit, J. Luo, N. McKeown, and S. Shenker, "Ethane: Taking control of the enterprise," in Proceedings of ACM SIGCOMM, Kyoto, Japan, 2007.
[11] V. Heorhiadi, M. K. Reiter, and V. Sekar, "Simplifying software-defined network optimization using SOL," in Proceedings of USENIX NSDI, Santa Clara, CA, 2016.
[12] B. Korte and J. Vygen, Combinatorial Optimization: Theory and Algorithms, Algorithms and Combinatorics, vol. 21, Springer, 2006.
[13] D. E. Taylor and J. S. Turner, "ClassBench: A packet classification benchmark," IEEE/ACM Transactions on Networking, vol. 15, no. 3, pp. 499-511, 2007.
[14] R. Teixeira, K. Marzullo, S. Savage, and G. M. Voelker, "Characterizing and measuring path diversity of internet topologies," in Proceedings of ACM SIGMETRICS, San Diego, USA, 2003.