A Software-Defined Framework for Improved Performance and Security of Network Functions


A Software-Defined Framework for Improved Performance and Security of Network Functions

Thesis submitted for the degree of Doctor of Philosophy

by Yotam Harchol

Submitted to the Senate of The Hebrew University of Jerusalem

December 2016

Copyright © 2016 by Yotam Harchol. All Rights Reserved.

This work was carried out under the supervision of Prof. Anat Bremler-Barr and Prof. David Hay.


Acknowledgements

First and foremost, I would like to thank my advisors, Anat Bremler-Barr and David Hay, for guiding me throughout my Ph.D. studies. You taught me everything I know about doing research, paper writing, teaching, and presenting ideas, and I am grateful for that.

During my studies I have collaborated with several great minds, whom I would like to thank, in addition to my advisors: Yehuda Afek, Yacov Hel-Or, Yaron Koral, Shimrit Tsur-David, Pavel Lazar, and Dan Shmidt. I thank you all for your great ideas, and for being great research companions.

Last, but most importantly, I would like to thank my family: my wife Yana, for her endless support and love; my little son Uri, for hugging me every time I get back home; and my parents, Ofra and Iftach, for their encouragement and support throughout my academic journey.

This work was supported by the European Research Council under the European Union's Seventh Framework Programme (FP7/ )/ERC Grant agreement no. , the Israeli Centers of Research Excellence (I-CORE) program (Center No. 4/11), the Neptune Consortium, administered by the Office of the Chief Scientist of the Israeli Ministry of Industry, Trade, and Labor, and the Check Point Institute for Information Security.


Abstract

Software-defined networking (SDN) has solved crucial problems in network forwarding devices (e.g., switches and routers), such as cost, management, scalability, flexibility, and a limited innovation space, by decoupling their control plane from their data plane. However, current SDN solutions, such as OpenFlow, do not target other network functions (NFs), or middleboxes, at all, and are not suitable for them. In this dissertation we present a novel SDN framework for NFs, which includes an extensible and highly modular protocol that allows a logically-centralized controller to define packet processing goals in a distributed network. Based on the observation that different NFs carry out similar processing steps, the framework, called OpenBox, reuses processing steps to reduce the overall processing. We show that by doing so, the OpenBox framework improves performance and reduces resource consumption. OpenBox supports both hardware and software packet processors, as well as smart NF placement, NF scaling, and multi-tenancy through its controller.

The OpenBox framework provides a comprehensive and generic solution for virtually any NF functionality. However, it cannot easily handle existing NFs without having them rewritten on top of the API provided by OpenBox. Thus, we present a second framework, which extracts a specific component that is common to multiple NFs and provides it as a service to other NFs, such that processing is performed once for the entire service chain. We specifically focus on the deep packet inspection (DPI) component, which scans packets to detect predefined signatures. This component is highly popular in contemporary NFs and is known to be one of the worst bottlenecks in many of them. We show how, by using a single DPI engine that is provided as a service to multiple NFs, we reduce the total processing overhead of these NFs and achieve better performance with the same resources.

Having a shared DPI component provides, in addition to the initial performance improvement due to the reduced number of DPI scans, the ability to build a fast, highly optimized, and robust DPI engine at lower cost.

Building such an engine requires much effort and specialization. A single engine can be monitored and updated frequently, it is less prone to bugs and security holes, and it is cheaper to improve and perfect.

We provide an analysis of the vulnerability of multiple DPI engines to attacks, denoted algorithmic complexity denial-of-service attacks, which exploit the performance gap between the average case and the worst case of DPI engines. By easily crafting network traffic that brings DPI engines to their worst case, we reduce the performance of these engines and effectively take them out of order. We then present a system that mitigates such attacks by detecting and isolating malicious traffic such that it has minimal effect on legitimate traffic. We show that the same system is useful against a variety of different attacks on different DPI engines.

We also show that when processing non-compressed HTTP traffic, a large portion of the content repeats over and over. We present a DPI algorithm that leverages these repetitions to improve the overall performance of the DPI engine by skipping repetitions that were previously scanned.

The algorithms and frameworks presented in this dissertation improve the performance, security, and management of network functions in large-scale networks.

Description of Contribution to Joint Papers

Chapter 3: This chapter includes research that was published in the following papers:

- Anat Bremler-Barr, Yotam Harchol, David Hay. OpenBox: Enabling Innovation in Middlebox Applications. ACM HotMiddleboxes 2015.
- Anat Bremler-Barr, Yotam Harchol, David Hay. OpenBox: A Software-Defined Framework for Developing, Deploying, and Managing Network Functions. ACM SIGCOMM 2016.

I was the sole author of these papers other than my advisors.

Chapter 4: This chapter includes research that was published in the following paper:

- Anat Bremler-Barr, Yotam Harchol, David Hay, Yaron Koral. Deep Packet Inspection as a Service. ACM CoNEXT 2014.

I was the main author of this paper. I worked with Dr. Yaron Koral, who was a post-doc with my advisors. I led the research, implemented the framework and the algorithms, and conducted all the experiments.

Chapter 5: This chapter includes research that was published in the following papers:

- Anat Bremler-Barr, Yotam Harchol, David Hay. Space-Time Tradeoffs in Software-Based Deep Packet Inspection. IEEE HPSR 2011.
- Yehuda Afek, Anat Bremler-Barr, Yotam Harchol, David Hay, Yaron Koral. MCA²: Multi-Core Architecture for Mitigating Complexity Attacks. ACM/IEEE ANCS 2012.
- Yehuda Afek, Anat Bremler-Barr, Yotam Harchol, David Hay, and Yaron Koral. Making DPI Engines Resilient to Algorithmic Complexity Attacks. IEEE/ACM Transactions on Networking, 24(5), 2016.

These works are all based on the results of the first paper in the list above, in which I am the sole author other than my advisors.

I then collaborated with Dr. Yaron Koral, who was then a Ph.D. student at Tel Aviv University, and we published the second paper. Yaron led the research of the second paper, while I was in charge of implementing the whole system and the experiments framework, and of performing all the experiments. We then continued and extended the research and published the third paper. I led the extended research that produced the additional results published in the third paper.

Chapter 6: This chapter includes research that was published in the following paper:

- Anat Bremler-Barr, Shimrit Tzur David, Yotam Harchol, David Hay. Leveraging Traffic Repetitions for High Speed Deep Packet Inspection. IEEE INFOCOM 2015.

I was the main author of this paper. I worked with Dr. Shimrit Tzur David, who was a post-doc with my advisors. I led the research, implemented the algorithms, and conducted all the experiments.

Contents

1 Introduction
  1.1 The OpenBox Framework
  1.2 Deep Packet Inspection as a Service
  1.3 Making DPI Engines Resilient To Algorithmic Complexity Attacks
  1.4 Leveraging Traffic Repetitions for High-Speed DPI
  1.5 Research Objectives
  1.6 Related Work
    1.6.1 Centralized Control for Network Functions
    1.6.2 Deep Packet Inspection

2 Methodology

3 OpenBox: Software-Defined Network Functions
  3.1 Abstracting Packet Processing
    3.1.1 Processing Graph
    3.1.2 Merging Multiple Graphs
  3.2 OpenBox Framework Architecture
    3.2.1 Data Plane
    3.2.2 The OpenBox Protocol
    3.2.3 Control Plane
    3.2.4 OpenBox Applications
  3.3 Implementation
    3.3.1 Controller Implementation
    3.3.2 Service Instance Implementation
  3.4 Experimental Evaluation
    3.4.1 Experimental Environment
    3.4.2 Test Applications
    3.4.3 Test Setup
    3.4.4 Results

4 Deep Packet Inspection as a Service
  4.1 System Overview
    4.1.1 The DPI Controller
    4.1.2 Passing Pattern Matching Results
    4.1.3 Deployment of DPI Service Instances
  4.2 DPI Service Instance Implementation
    4.2.1 Initialization
    4.2.2 Packet Inspection
    4.2.3 Dealing with Regular Expressions
  4.3 Experimental Results
    4.3.1 Implementation
    4.3.2 Experimental Environment
    4.3.3 Virtual DPI Performance
    4.3.4 Comparison to Different NF Configurations
    4.3.5 Analysis of Match Report Size

5 Making DPI Engines Resilient to Complexity Attacks
  5.1 Algorithmic Complexity DoS Attacks on DPI Engines
  5.2 Snort Cache-Miss Complexity Attack
  5.3 Compressed Aho-Corasick Automaton
    5.3.1 Branching States Representation
    5.3.2 Path Compression
    5.3.3 Leaves Compression
    5.3.4 Pointer Compression
    5.3.5 Compression Effectiveness
  5.4 The MCA² System Description
    5.4.1 MCA² Design Overview
    5.4.2 Cross-Thread Communication Mechanism
    5.4.3 Thread Allocation Scheme
    5.4.4 MCA² With Drop
    5.4.5 Flow Affinity
  5.5 MCA² for Cache-Miss Attacks
  5.6 MCA² for Active-States Attacks
  5.7 MCA² for Force-Construction Attacks
  5.8 Experimental Results
    5.8.1 Experimental Environment
    5.8.2 Cache-Miss Attack Simulation Results
    5.8.3 Active-State Attack Simulation Results

6 Leveraging Traffic Repetitions for High-Speed DPI
  6.1 Enhanced Aho-Corasick Algorithm
    6.1.1 Background: The Aho-Corasick Algorithm
    6.1.2 Enabling Skips within the Execution of the Aho-Corasick Algorithm
    6.1.3 Motivating Example
  6.2 System Design
    6.2.1 The Slow Path
    6.2.2 The Data Path
  6.3 Analysis
    6.3.1 Hardware Implementation Analysis
  6.4 Experimental Results
    6.4.1 Traffic Sources
    6.4.2 HTTP Content Characteristics
    6.4.3 Potential Performance Analysis
    6.4.4 Speedup with Software Implementation
    6.4.5 Determining the Dictionary Width
    6.4.6 Dictionary Creation and Update

7 Discussion and Conclusions
  7.1 Contributions
  7.2 Future Work

Bibliography

List of Tables

1.1 DPI in different types of middleboxes
3.1 Partial list of abstract processing blocks
3.2 Performance results of the pipelined NFs configuration (Figure 3.8)
3.3 Average round-trip time for common messages between OBC and OBIs
4.1 Performance of a single virtual DPI instance
5.1 The minimal thresholds for non-common states ratio
6.1 Sample dictionary
6.2 Example of scanning process for input string CDBCABYTAFGBCD
6.3 Sample measurements for model components
6.4 Model predicted speedups and actual speedup achieved

List of Figures

1.1 The general architecture of the OpenBox framework
1.2 Examples of the chain of NFs with and without DPI as a service
1.3 Pipelined NF scenario
1.4 An example of multiple service chains scenario
1.5 Examples of Aho-Corasick automaton representations
3.1 Sample processing graphs for firewall and intrusion prevention system NFs
3.2 A naïve merge of the two processing graphs shown in Figure 3.1
3.3 The result of our graph merge algorithm
3.4 Sample OpenBox network with distributed data plane
3.5 Distributed processing in data plane
3.6 OpenBox connection setup process
3.7 The protocol definition for the RegexClassifier block
3.8 Test setups under pipelined NF configuration
3.9 Test setups under the distinct service chain configuration
3.10 Achievable throughput for the distinct service chain configuration
3.11 Service chain for the graph merge algorithm test
3.12 Scalability of the graph merge algorithm
4.1 DPI as a Service system illustration
4.2 MCA² system design for virtual DPI environment
4.3 Two sample Aho-Corasick DFAs
4.4 The DFA and match table
4.5 The effect of virtualization on performance
4.6 Pipelined middleboxes throughput
4.7 Multiple service chain throughput
4.8 Analysis of match report size
5.1 Throughput of the Full Matrix AC and Compressed-AC
5.2 The goodput of MCA² for different attack intensities
5.3 The effect of a cache-miss attack on Snort
5.4 Illustration of the compression process of the AC automaton
5.5 Illustration of MCA²
5.6 Distribution of cache-misses under normal traffic and under attack
5.7 Percentage of normal traffic packets by their non-common states ratio
5.8 The total system throughput for a different number of common states
5.9 Percentage of mild attack traffic packets by their non-common states ratio
5.10 Distribution of number of active states
5.11 Comparison of the memory behavior of the two AC implementations
5.12 Throughput of pattern matching algorithms on a larger pattern set
5.13 Throughput of MCA² under different rates of non-common states
5.14 False-negative rate of the detection mechanism
5.15 Average throughput per thread over time
5.16 Goodput of Hybrid-FA and of Hybrid-FA with MCA² full-drop setup
6.1 CDF of number of occurrences of 16-byte sequences by content type
6.2 Skip ratio per content type when using grams with 32-byte width
6.3 Actual speedup achieved by our software implementation
6.4 Speedup achieved on cache-miss attack traffic
6.5 Throughput change under a cache-miss attack of 33% intensity
6.6 Speedup with dictionary update only every 72 hours

Chapter 1

Introduction

Large-scale networks contain a massive number of appliances that process network traffic for various purposes. In real-life deployments [108], about half of these appliances are forwarding devices, such as switches and routers, which are responsible for packet forwarding and routing. The other half are denoted middleboxes, or network functions (NFs),¹ terms that refer to a wide range of devices that perform a diverse set of processing tasks on network traffic. Popular examples of network functions are security appliances, such as firewalls and intrusion detection systems, as well as load balancers, caches, WAN optimizers, gateways, etc.

Software-defined networking (SDN) has been a tremendous game-changer in the management of large-scale networks, as it decouples the control plane of network forwarding appliances (e.g., switches and routers) from their data plane. Before the SDN era, each switch had its own control plane software, provided by the switch vendor. This software executed various distributed algorithms in order to determine network topology and forwarding policies (e.g., the spanning tree protocol [95]). This approach had many disadvantages: First, the distributed control plane had higher complexity and required compatibility between different vendors, who did not always agree. Second, it made customization and innovation almost impossible, as network managers had to stick to the vendor's list of supported features. Third, it required strong coupling with a specific vendor in order to have all units work together smoothly. This meant not only a high cost of ownership for equipment, but also costly training and certification for network administrators, in order to correctly use the equipment.

¹ We use the terms network function and middlebox interchangeably in this dissertation, often referring to physical standalone devices as middleboxes and to virtual or logical applications as network functions.

SDN has succeeded in solving all these important problems in the forwarding plane, such as cost, management, and innovation, as well as additional problems such as multi-tenancy. It also let new vendors into the market, as it lowered the entry barriers. However, in current SDN solutions, such as OpenFlow [53] and its derivatives, only the forwarding appliances are software-defined, while the other data plane appliances, middleboxes and network functions, continue to suffer from all of the above problems. Moreover, these appliances often suffer from additional, more complex problems as well, such as more complex packet processing tasks.

Traditionally, each middlebox was marketed as a single piece of hardware, with its proprietary software already installed on it, for a high price tag. This prevented on-demand scaling and provisioning. The network function virtualization (NFV) initiative [46] aims at reducing the cost of ownership and management of NFs by making NFs virtual appliances, running on top of a hypervisor in a virtual machine (VM) or in a container. While NFV improves on-demand scaling and provisioning, it does not solve other problems, such as the limited and separate management of each NF.

Network traffic nowadays usually traverses a sequence of NFs (a.k.a. a service chain). For example, a packet may go through a firewall, then through an intrusion prevention system (IPS), and then through a load balancer, before reaching its destination. A closer look into these NFs shows that many of them process packets using very similar processing steps. For example, most NFs parse packet headers and then classify the packets on the basis of these headers, while some NFs modify specific header fields, or also classify packets based on Layer 7 payload content. Nonetheless, each NF has its own logic for these common steps. Moreover, each NF has its own management interface: each might be managed by a different administrator, who does not know, or should not know, about the existence and the logic of the other NFs.

In this dissertation we present OpenBox, a framework and a protocol that make network functions software-defined, using a logically-centralized controller. OpenBox decomposes NF logic as defined in the control plane, and realizes this logic in the data plane, while improving performance and reducing entry barriers and management limitations. We further extend the idea of decomposition to provide specific common NF tasks as a service to other NFs. In our work we focus on the deep packet inspection (DPI) engine, which scans the payload of packets for patterns. We show how, by providing this engine as a centralized service, performance is improved and resources are saved.

[Figure 1.1: The general architecture of the OpenBox framework.]

In addition, when extracting such a complex engine to an external service, one can invest in and improve the service for the benefit of the entire network. For example, one could harden it against attacks, or improve its performance by using better algorithms. We show such improvements in this dissertation.

1.1 The OpenBox Framework

Our OpenBox framework, presented in Chapter 3, addresses the challenges of efficient NF management by completely decoupling the control plane of an NF from its data plane, using a newly defined communication protocol [99], whose highlights are presented in Chapter 3. The observation that many NFs have similar data planes but different control logic is leveraged in OpenBox to define general-purpose (yet flexible and programmable) data plane entities called OpenBox Instances (OBIs), and a logically-centralized control plane, which we call the OpenBox Controller (OBC). NFs are now written as OpenBox applications on top of the OBC, using a northbound programming API. The OBC is in charge of deploying application logic in the data plane and realizing the intended behavior of the applications in the data path. The OpenBox protocol defines the communication channel between the OBC and the OBIs.

The OpenBox framework is designed to enable the reuse of processing steps at a finer granularity than any previous framework (e.g., [12, 56, 57, 93, 104]). Using a novel algorithm we present in this dissertation, and the network-wide view of the packet processing tasks in the network, which is available to our OpenBox controller, the controller merges the processing steps of multiple applications such that the eventual processing remains the same, but packets do not go through the same kind of processing over and over. As we show in Section 3.4, this greatly improves performance.
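To make the reuse idea concrete, the following toy sketch merges two pipelines that begin with an identical classification step, so that packets traverse the shared step once. This is only an illustration of the principle under a simplified pipeline model; the actual merge algorithm operates on processing graphs, also merges non-identical classifiers, and is presented in Chapter 3.

```python
# Toy illustration of processing-step reuse. The pipeline model here is
# hypothetical; the real OpenBox merge algorithm (Chapter 3) works on
# processing graphs rather than linear pipelines.

firewall = [("HeaderClassifier", ("src_ip", "dst_port")), ("Drop",)]
ips      = [("HeaderClassifier", ("src_ip", "dst_port")), ("Alert",), ("Output",)]

def merge(pipeline_a, pipeline_b):
    """Share the longest common prefix of identical steps, then branch."""
    shared, i = [], 0
    while (i < min(len(pipeline_a), len(pipeline_b))
           and pipeline_a[i] == pipeline_b[i]):
        shared.append(pipeline_a[i])
        i += 1
    return shared, pipeline_a[i:], pipeline_b[i:]

shared, fw_branch, ips_branch = merge(firewall, ips)
print(shared)      # [('HeaderClassifier', ('src_ip', 'dst_port'))] -- run once
print(fw_branch)   # [('Drop',)]
print(ips_branch)  # [('Alert',), ('Output',)]
```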

OpenBox is designed to also support hardware-based OBIs, which use specific hardware to accelerate data plane processing. For example, certain web optimizers may require specific video encoding hardware. However, the OBI with this hardware does not have to implement the entire set of processing logic defined by the OpenBox protocol. Instead, it can be chained with other, software- or hardware-based OBIs, which provide the additional logic. This reduces the cost of the specialized OBI and the effort required to develop the NF application that uses it. Developers may also create a purely-software version of their optimizer OBI, which will be used by the OpenBox framework to scale up at peak load times.

Another advantage of OpenBox is its intrinsic support for multi-tenancy: Multiple network tenants can run their NFs in the same network, on the same data plane resources, while being effectively isolated. For example, two different administrators may deploy two different IPSs with different rule sets. In OpenBox, these two IPSs may be consolidated to one OBI in the data plane, while keeping isolation at the application's control plane. This significantly reduces the cost of ownership and operating expenses, as the data plane can be much better utilized.

OpenBox promotes innovation in the NF domain. Developers can develop and deploy new NFs as OpenBox applications, using basic building blocks (e.g., header classification) provided by the framework; a minimal sketch of such an application appears below. Furthermore, the capabilities of the OpenBox data plane can be extended beyond these basic building blocks: an application can provide an extension module code in the control plane. This module can then be injected into the corresponding OBIs in the data plane, without having to recompile or redeploy them.

One useful example of such innovation potential is the process of packet classification based on Layer 7 payload content, usually referred to as deep packet inspection (DPI). DPI is one of the most challenging processing tasks in contemporary NFs [46].² A partial list of existing NFs that use a DPI engine is shown in Table 1.1. Creating a high-performance DPI engine for NFs is hard, in particular when taking into account security concerns. Thus, high-performance DPI has been an active area of research in recent years, with plenty of algorithms for improved DPI performance [17, 20, 31, 32, 103].

² In an experiment we conducted on Snort IDS [112], DPI slows packet processing down by a factor of at least 2.9.
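As an illustration of the application model mentioned above, here is a minimal sketch of an NF written against a hypothetical northbound API. The class and method names (OBCStub, set_processing_graph) and the block encoding are assumptions made for this example only; the actual API and block types are described in Chapter 3.

```python
# Hypothetical sketch of an OpenBox application; all names are illustrative.

class OBCStub:
    """Stand-in for the OpenBox Controller's northbound interface."""
    def set_processing_graph(self, app, blocks, edges):
        print(f"deploying {app}: {len(blocks)} blocks, {len(edges)} edges")

def deploy_telnet_blocker(controller):
    # Declarative logic only: the OBC decides which OBIs realize it in the
    # data plane, possibly merged with the graphs of other applications.
    blocks = ["HeaderClassifier(dst_port=23)", "Alert", "Drop", "Output"]
    edges = [
        (0, "match", 1),     # classified as telnet -> raise an alert
        (1, "", 2),          # ... then drop the packet
        (0, "no-match", 3),  # anything else -> forward as-is
    ]
    controller.set_processing_graph("telnet-blocker", blocks, edges)

deploy_telnet_blocker(OBCStub())
```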

Table 1.1: DPI in different types of middleboxes.

NF                           DPI patterns         Examples
Intrusion Detection System   Malicious activity   SNORT [112], BRO [34]
AntiVirus/SPAM               Malicious activity   ClamAV [40]
L7 Firewall                  Malicious activity   Linux L7-filter, ModSecurity [1]
L7 Load Balancing            Apps/URLs            F5 [89], A10 [87]
Leakage Prevention System    Leakage activity     Check Point DLP [39]
Network Analytics            Protocol IDs         Qosmos [42]
Traffic Shaper               Applications         Blue Coat PacketShaper [24]

Each NF today implements its own DPI engine. Packets that go through a chain of DPI NFs therefore go through a chain of different DPI engines. While each of these engines may use a different pattern matching algorithm, they all perform essentially the same basic operation of matching the payload against a set of patterns. The drawbacks of this situation are significant: First, packets go through the complex process of DPI over and over, which means higher latency. Second, the slowest DPI engine in the chain is a throughput bottleneck for the entire chain. Likewise, if one DPI engine is exposed to an attack (e.g., a denial-of-service attack), the entire chain is exposed to this attack in terms of overall performance.

We show that the OpenBox framework can mitigate these drawbacks by programmatically merging the processing of multiple NFs, for example, by merging their DPI engines. Specifically, we show that this merge process significantly reduces latency and improves performance. As the merged DPI engine can use any of the available engine implementations, it can use a highly-optimized and secure implementation to achieve higher performance and provide better resiliency to attacks. We present techniques for such improvements in Chapters 5 and 6.

We have implemented a prototype of the OpenBox framework and we provide a simple installation script that installs the entire system on a Mininet VM [75], so users can easily create a large-scale network environment and test their network functions. All our code is publicly available at

The work on OpenBox was published in the proceedings of ACM SIGCOMM HotMiddleboxes 2015 [28] and in the proceedings of ACM SIGCOMM 2016 [29].

1.2 Deep Packet Inspection as a Service

[Figure 1.2: Examples of the chain of NFs (a.k.a. service chains [100]) with and without DPI as a service. (a) Without DPI service: multiple NFs (L2-L4 firewall, IDS, AV, traffic shaper) perform DPI on packets. (b) With DPI service: packets go through DPI once.]

While the OpenBox framework provides a complete solution for creating, deploying, and managing NFs, it requires rewriting existing NFs as OpenBox applications in order to provide the full set of services of the framework. In Chapter 4, we devise an additional framework, which provides specific common processing tasks as network services in SDN networks. More specifically, this work focuses on the process of DPI, which has already been discussed above. Our framework provides enhanced DPI capabilities as a service for NFs, whether they are OpenBox NFs or native NFs.

In this framework, NFs inform the network controller (which is essentially an extended SDN controller) about the requirements they have for the outsourced service. In our DPI case, these are the patterns they are interested in looking for in network traffic.³ The framework then makes sure that each packet arriving at an NF is annotated with its corresponding results from the service. In our case, these are matches for the patterns requested by that NF. The DPI process is performed once for each packet by a dedicated DPI service instance. Such instances are distributed in various locations in the network, and the framework steers traffic through them using SDN traffic steering [100]. If a packet matches some patterns that are of interest to one or more NFs in its corresponding service chain, this information is added as metadata to that packet, so that the corresponding NF can find it without performing DPI again. A minimal sketch of this contract appears below.

³ We assume that this information is either not proprietary or can be disclosed to our service over a secure channel.
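The following sketch illustrates the registration/annotation contract under simplified assumptions: the function names, the naive substring matching, and the metadata encoding are all invented for this example; the actual protocol, and the use of a single aggregated pattern-matching engine, are described in Chapter 4.

```python
# Sketch of the DPI-as-a-service contract; names and encoding are illustrative.

registered = {}            # pattern id -> (pattern bytes, interested NF)

def register(nf_name, patterns):
    for p in patterns:
        registered[len(registered)] = (p, nf_name)

def dpi_service_scan(payload):
    """One scan for the whole service chain (naive matching for brevity;
    a real instance would use a single aggregated Aho-Corasick automaton)."""
    return [pid for pid, (p, _) in registered.items() if p in payload]

def nf_process(nf_name, match_metadata):
    """Each NF consumes only the matches it registered for, without
    rescanning the payload."""
    return [pid for pid in match_metadata if registered[pid][1] == nf_name]

register("IDS", [b"evil", b"attack"])
register("AV", [b"virus-sig"])
meta = dpi_service_scan(b"...attack...virus-sig...")  # performed once per packet
print(nf_process("IDS", meta))   # [1] -- the id of b"attack"
print(nf_process("AV", meta))    # [2] -- the id of b"virus-sig"
```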

[Figure 1.3: Pipelined NF scenario: (a) without a DPI service, each IDS performs its own DPI; (b) with a DPI service, DPI service instances scan packets once for IDS1 and IDS2. With the DPI service, resources are used for running multiple instances of the service while NFs do not have to re-scan the packets.]

[Figure 1.4: An example of a multiple service chains scenario (e.g., separate HTTP and P2P chains): (a) without a DPI service; (b) with a DPI service. With the DPI service, flows are multiplexed to multiple DPI service instances. This allows dynamic load balancing of the DPI engines without adding NFs.]

As detailed in [100], and the references therein, traffic nowadays goes through a chain of NFs before reaching its destination. This implies that traffic is scanned over and over again by NFs with a DPI component (see Figure 1.2(a)). Alternatively, an opposite trend is to consolidate NFs in a single location (or even a single hardware device) [11, 104]. However, the different components of such a consolidated NF still perform DPI separately, from scratch.

Upon receiving the results from the DPI service, each NF applies the rules corresponding to the matched patterns according to its internal logic. Figures 1.3 and 1.4 show two configurations for using DPI as a service. We elaborate on these scenarios and provide their corresponding experimental results in Section . It is important to note that while fields in a packet header might be modified along the path of the packet, the payload is usually not changed. Thus, the DPI service may be used even in service chains that contain NATs and other NFs that modify header fields.

Our approach is shown to provide superior performance and also to reduce the memory footprint of the DPI engine data structures. The framework also allows dynamic resource sharing, as the hardware used for DPI is decoupled from the specific NF, as shown in Section . Since DPI is performed once, the cost of decompression or decryption, which usually takes place prior to the DPI phase, may be reduced significantly, as these heavy processes are executed only once for each packet.

We have implemented a prototype of the DPI as a service framework as well, and also modified the well-known Snort IDS [112] to work with it instead of performing DPI by itself. Our code is publicly available at moly. We provide performance evaluation and analysis in Section 4.3.

The work on deep packet inspection as a service was published in the proceedings of ACM CoNEXT 2014 [30].

1.3 Making DPI Engines Resilient To Algorithmic Complexity Attacks

As mentioned above, having a single DPI engine implementation makes it easier to improve it. We present in Chapter 5 a study of the vulnerability of DPI engines to algorithmic complexity denial-of-service (DoS) attacks, where the attacker exploits the gap between the amount of resources the system requires when processing normal packets and when processing carefully-crafted packets that consume drastically more resources (computing, memory, cache, or others). These crafted packets are, on the one hand, easy to construct, while on the other hand, they require very intensive processing by the target system. This implies that with a little effort on the attacker's side, the target system spends a lot of effort and is bound to lose.

Security devices, such as network intrusion detection or prevention systems (NIDS or NIPS), are the front line of defense against cyber attacks over the Internet. A central component of NIDS/NIPS is a DPI engine. Being such a central component, DPI engines may serve as a preferred target for denial-of-service attacks. In recent years, such attacks have been part of a trend of two-phase combined attacks on security devices: the attackers first neutralize the security device (e.g., by overwhelming it with traffic), and then, when the security device has been knocked down, attack the assets it was protecting. For example, an attack on Sony in 2011 combined a DDoS attack with credit card theft [116].

We show how an attacker could easily craft malicious network traffic that takes down all the popular DPI engines, reducing their performance by orders of magnitude, without having to take over a large portion of the bandwidth. We then present a novel system architecture that mitigates such attacks on DPI engines using multi-core processors. Our system, named MCA², detects such attacks, and once an attack is detected, the system enters an alert mode, where suspected malicious traffic is diverted to specific CPU cores. A sketch of this detect-and-isolate idea appears below. We show in Section 5.8 that our system improves the overall goodput by up to 73% when not dropping suspected traffic, and almost completely recovers the performance of legitimate traffic while the system is under attack.

We have implemented a prototype of the MCA² system. Our code is publicly available at

The work on DPI engine vulnerabilities and on the MCA² system was published in the proceedings of IEEE HPSR 2011 [27], in the proceedings of ACM/IEEE ANCS 2012 [5], and in IEEE/ACM Transactions on Networking [6].
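The sketch below illustrates the detect-and-isolate principle in its simplest form. The notion of "common" (cache-resident) states and the threshold value are assumptions made for this example; the actual detection metrics, thresholds, and thread allocation scheme of MCA² are derived in Chapter 5.

```python
# Sketch of the detect-and-isolate idea behind MCA², under simplified
# assumptions: a packet whose scan visits too many states outside the
# cache-resident "common" set is suspected as attack traffic and diverted
# to a dedicated set of cores, so it cannot stall legitimate traffic.

COMMON_STATES = set(range(1000))   # assumption: ids of cache-resident states
SUSPECT_RATIO = 0.3                # assumption: alert threshold from profiling

def non_common_ratio(visited_states):
    non_common = sum(1 for s in visited_states if s not in COMMON_STATES)
    return non_common / max(1, len(visited_states))

def dispatch(packet, visited_states, normal_queue, suspect_queue):
    if non_common_ratio(visited_states) > SUSPECT_RATIO:
        suspect_queue.append(packet)   # few dedicated cores; may be dropped
    else:
        normal_queue.append(packet)    # full-speed processing path
```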

1.4 Leveraging Traffic Repetitions for High-Speed DPI

Content providers, such as Internet Service Providers (ISPs), Google, and Netflix, maintain datacenters to host their content, or their customers' content. Usually, such providers also maintain NFs that use DPI, e.g., IDS/IPS, L7 load balancers, L7 firewalls, etc. In content providers' networks, most of the data is highly similar: often it is simply the same files, or files with minimal modifications, that are being sent over the network. In NFV environments as well, in some cases, the virtual appliances scan traffic from a closed set of servers, or even a single server that serves several virtual machines. Thus, the similarity between pieces of data to be scanned is expected to increase. Moreover, using SDN, traffic can be made to flow so that similar traffic (from similar sources) flows to the same monitoring appliances.

In Chapter 6 we present a mechanism that uses such repetitions efficiently in order to accelerate the signature matching component of the DPI engine. Our mechanism is based solely on modifications to the signature matching algorithm, and thus does not involve any change to the inspected traffic or require any cooperation from any other component in the network. Conceptually, it is divided into two parts: a slow path that samples the traffic and creates a dictionary of the popular fixed-length strings (which we call grams), and a data path that scans the traffic byte by byte and checks the dictionary for matches; if a gram is found in the dictionary, the data path skips the gram and adjusts its state according to information saved along with this gram. A minimal sketch of this data path appears below.

We further note that our mechanism is generic and can be implemented either in software or in hardware. In software, the data path is implemented as a thread, while the slow path is implemented as another thread, possibly with lower priority. In a typical multi-core, multi-threaded environment, our solution uses a single slow-path thread that receives packet samples and calculates dictionaries, and many data-path threads (possibly on many cores), each inspecting different packets (or different connections). Since the slow path runs periodically, the marginal loss of computation power is very low, and is also adjustable. Moreover, if repeated strings are known in advance, one can use them as a dictionary without rebuilding it. In hardware, on the other hand, we can parallelize the operation at a finer granularity (for example, checking the Bloom filter in parallel with scanning a byte), for a significant performance boost.

Section analyzes our software implementation and our proposed hardware implementation. In the software implementation, the experimental results match the predictions of the model. We analyze the real weight of each parameter in the model and apply these weights to the proposed hardware model to evaluate the benefits of a hardware implementation.

Our solution achieves a significant performance boost, especially when the data comes from the same content source (e.g., the same website). Our experiments show that for such cases, our solution achieves a throughput gain of times the original throughput, when implemented in software. This work was published in the proceedings of IEEE INFOCOM 2015 [33].
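The following sketch shows the shape of the skipping data path, assuming a prebuilt dictionary that maps (current state, popular gram) pairs to the automaton state reached after scanning that gram. The gram width, the dictionary keying, and the omission of match reporting inside skipped grams are simplifications for this example; Chapter 6 gives the real algorithm, including how matches within repeated grams are preserved and how the slow path builds the dictionary.

```python
# Sketch of the repetition-skipping data path; the dictionary and its keying
# are illustrative simplifications of the mechanism described in Chapter 6.

WIDTH = 16                      # assumed gram width (bytes)

def scan(payload, dfa_next, dictionary, state=0):
    """dfa_next[state][byte] is the ordinary DFA transition table;
    dictionary maps (state, gram) -> state reached after scanning gram."""
    i = 0
    while i < len(payload):
        gram = payload[i:i + WIDTH]
        jump = dictionary.get((state, gram))
        if jump is not None:
            state = jump        # skip WIDTH bytes that were scanned before
            i += WIDTH          # (match reporting inside the gram omitted)
        else:
            state = dfa_next[state][payload[i]]  # byte-by-byte step
            i += 1
    return state
```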

1.5 Research Objectives

The frameworks and algorithms presented in this dissertation aim at improving the performance and security of network functions, focusing on the following aspects:

Scalability. As network bandwidth constantly increases, and packet processing tasks become more complex, it is important that the algorithms used for such tasks, and the frameworks providing them, scale with the demands, with overheads as low as possible. In this dissertation, this is reflected by reducing the memory requirements of such algorithms, improving their baseline performance, and providing frameworks that reuse the results of prior processing and allow automatic scaling up and down as load changes.

Flexibility. A flexible deployment of middleboxes and network functions can reduce costs and help scaling at peak load times. The frameworks presented in this dissertation aim to allow flexible deployment of network functions and network services, along with flexible provisioning of services.

Resiliency. Network functions are required to provide very high rates.

Security. NFs, which are usually located on the wire, process the packets that go through them on-the-fly, delaying each packet until it finishes processing. Thus, they are also required to provide a constant rate that is not affected by the content of the packets they process. This is important for preventing denial-of-service (DoS) attacks on such NFs. In this dissertation, we present a framework that identifies and protects against such attacks.

Innovation. As legacy middleboxes are closed appliances from a single vendor, innovation in the middlebox domain is considered to present high barriers to innovators: they must either cooperate with the appliance vendor or create their own physical appliance. We therefore also focus in this dissertation on enabling innovation on top of the presented frameworks.

1.6 Related Work

1.6.1 Centralized Control for Network Functions

In recent years, middleboxes and network functions have been major topics of interest. In this section we discuss and compare the state-of-the-art works that are directly related to this dissertation.

In traditional networks, middleboxes are placed at strategic places along the traffic path, determined by the network topology; traffic goes through the middleboxes as dictated by the regular routing mechanism. SDN makes it possible to perform traffic steering, where routing through a chain of middleboxes is determined using middlebox-specific routing considerations that might differ significantly from traditional routing schemes [100]. We use this method to realize our DPI as a service framework, presented in Chapter 4, and also in our implementation of the OpenBox framework, presented in Chapter 3.

Recently, telecommunication vendors launched the network function virtualization (NFV) initiative [46], which aims to virtualize network appliances in operator networks. The main objective of NFV is to reduce the operational costs of these appliances (which are traditionally implemented in middleboxes) by obtaining the same functionality in software that runs on commodity servers. NFV provides easier management and maintenance by eliminating the need to deal with multiple hardware types and vendors; moreover, as NFV is implemented in software, it promotes innovation in this domain.

CoMb [104] focuses on consolidating multiple virtual middleboxes into a single physical data plane location, thus improving the performance of the network in the common case, where not all the middleboxes have peak load at the same time. E2 [93] is a scheduling framework for the composition of multiple virtual NFs. It targets a very specific hardware infrastructure, and manages both the servers on which NFs are running and the virtual network switches that interconnect them. Unlike OpenBox, CoMb and E2 only decompose NFs to provide I/O optimizations such as zero-copy and TCP reconstruction, but not to reuse core processing blocks such as classifiers and modifiers.

xOMB [11] presents a specific software platform for running middleboxes on general purpose servers. However, it does not consolidate multiple applications into the same processing pipeline. ClickOS [81] is a runtime platform for virtual NFs based on the Click modular router [68] as the underlying packet processor. ClickOS provides I/O optimizations for NFs and reduced latency for packets that traverse multiple NFs in the same physical location. ClickOS does not have network-wide centralized control, and it does not merge multiple NFs, but only chains them and optimizes their I/O.

Commercial solutions such as OpenStack [91], OpenMANO [90], OPNFV [92], and UNIFY [64] are focused on the orchestration problem. They all assume each NF is a monolithic VM, and try to improve scaling, placement, provisioning, and migration. Stratos [56] also provides a solution for NFV orchestration, including placement, scaling, provisioning, and traffic steering.

Kekely et al. [65] present a hardware architecture for unified flow measurement and collection of application layer protocol information. It uses centralized control software to aggregate demands from multiple middleboxes and realize them using their suggested hardware. OpenBox can use such hardware implementations as part of its data plane, while providing richer control logic and richer data plane processing for a much wider range of applications.

OpenNF [57] proposes a centralized control plane for sharing information between software NF applications, in cases of NF replication and migration. However, that work focuses only on state sharing and on the forwarding problems that arise with replication and migration, so in a sense it is orthogonal to our work. Sherry et al. [107] proposed outsourcing NFs to cloud services. This is completely orthogonal to our work, as OpenBox can be used in the cloud in order to provide the outsourced NF functionality, or locally, instead of outsourcing at all.

OpenState [22] and SNAP [13] are programming languages for stateful SDN switches. OpenState makes it possible to apply finite automata rules to switches, rather than match-action rules only. SNAP takes a network-wide approach, where programs are written for one big switch and the exact local policies are determined by the compiler. Both works are focused on header-based processing, but such ideas could be useful for creating programming languages on top of the OpenBox framework, as discussed in Chapter 7.

To the best of our knowledge, Slick [12] is the only work to identify the potential in reusing core processing steps across multiple NFs. It presents a framework with centralized control that lets NF applications be programmed on top of it, and uses Slick machines in the data plane to realize the logic of these applications. The Slick framework is mostly focused on the placement problem, and the API it provides is much more limited than the OpenBox northbound API. Slick does not share its elements across multiple applications, and the paper does not propose a general communication protocol between data plane units and their controller. Unlike our OBIs, Slick only supports software data plane units, and these units cannot be extended. This work complements ours, as the solutions to the placement problems presented in [12] can be implemented in the OpenBox control plane.

OpenBox allows easier adoption of hardware accelerators for packet processing. Very few works have addressed hardware acceleration in an NFV environment [84], and those that have focused on the hypervisor level [35, 55]. Such ideas can be used in the OpenBox data plane by the OBIs, thus providing additional hardware acceleration support.

The Click modular software router [68] is an extendable software package for programming network routers and packet processors. It has numerous modules for advanced routing and packet processing; additional modules can be added using the provided API. OpenBox generalizes the modular approach of Click to provide a network-wide framework for developing modular NFs. We use Click as the packet processing engine in our software implementation of an OBI, described in Section 3.3.

Another related work in this context is the P4 programmable packet processor language [25]. The P4 language aims to define the match-action table of a general purpose packet processor, such that it is not coupled with a specific protocol or specification (e.g., OpenFlow of a specific version). A P4 switch can be used as part of the OpenBox data plane, by translating the corresponding protocol directives to the P4 language.

1.6.2 Deep Packet Inspection

A DPI engine is a major component in many middleboxes and in most security tools, where it usually performs pattern matching to detect signatures of malicious traffic. Two major types of pattern matching are exact matching and regular expression matching. The former usually uses a deterministic finite automaton (DFA), while the latter uses either a DFA or a nondeterministic finite automaton (NFA) for the ongoing inspection of the input data.

String matching is an essential building block of most contemporary DPI engines. In many implementations (such as Snort [112]), even if most patterns are regular expressions, string matching is performed first (namely, as a pre-filter) and constitutes most of the work performed by the engine. Specifically, Snort extracts the strings that appeared in the regular expressions (called anchors). Then, string matching is performed over these anchors, and if all anchors originating from a specific regular expression are matched, a regular expression matching of the corresponding expression is performed (e.g., using PCRE [2]). This is a common procedure, since regular expression engines work inefficiently on a large number of expressions.

The aforementioned DFA solutions suffer from memory explosion, especially when combining a few expressions into a single data structure, while the NFA solutions suffer from lower performance due to the computation of multiple active states for each state transition. For these reasons, we mostly focus in this dissertation on exact matching algorithms.

The classical algorithms for the exact matching of multiple strings used for DPI are those of Aho-Corasick [8] and Wu-Manber [130]. For regular expression matching, two common solutions are deterministic finite automata (DFA) and nondeterministic finite automata (NFA) [17, 71]. Efficient regular expression matching is still an active area of research [18, 49, 71, 73, 133].

The Aho-Corasick (AC) algorithm uses a DFA to scan the traffic against a set of predefined exact strings. A DFA is a five-tuple ⟨S, Σ, δ, s₀, F⟩, where S is a finite set of states, Σ is a finite set of input symbols, δ: S × Σ → S is a transition function returning the next state given the current state and any symbol from the input, s₀ ∈ S is the initial state, and F ⊆ S is a set of accepting states. The Aho-Corasick algorithm provides a method to build such an automaton (a.k.a. an AC DFA) from a set of patterns. Given the DFA, a packet is inspected by traversing the automaton symbol by symbol from s₀; a pattern is detected if a state in F is reached in this traversal. Fig. 1.5(a) depicts the AC DFA for the pattern set {E, BE, BD, BCD, CDBCAB, BCAA}. In today's security tools, AC DFAs are huge: e.g., Snort's AC DFA has 77,182 states for 6,422 patterns, raising the question of how to store it efficiently in memory. The alternatives naturally trade memory space for execution time. In addition, most security tools (including Snort) divide their patterns into several sets, according to the type of traffic.

Snort uses a full-matrix encoding for its AC DFAs, as presented in [8]. In this representation, transitions are stored in a two-dimensional array with |S| rows and |Σ| columns. The entry at position (i, j) holds the value of δ(sᵢ, j), implying that the number of bits in each entry is at least log₂ |S|. In the typical case, when the input is inspected one byte at a time, |Σ| = 256, resulting in an overall memory footprint of 256 · |S| · log₂ |S| bits. For Snort's AC DFAs, this translates to a combined footprint of MB. On the other hand, the main advantage of this encoding is that a transition consists of a single memory load operation that directly reveals the next state.
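The single-load property of the full-matrix encoding is easy to see in code. The sketch below assumes the transition table and accepting set are given; it is an illustration of the encoding's access pattern, not Snort's implementation.

```python
# Full-matrix AC step: one table lookup per input byte. next_state is a
# |S| x 256 array; this density is what yields the 256·|S|·log2|S|-bit
# footprint, but each transition is a single memory load.

def scan_full_matrix(payload, next_state, accepting, state=0):
    matches = []
    for i, b in enumerate(payload):
        state = next_state[state][b]   # single load reveals the next state
        if state in accepting:
            matches.append((i, state))
    return matches
```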

[Figure 1.5: Examples of Aho-Corasick automaton representations for {E, BE, BD, BCD, CDBCAB, BCAA}. (a) Full-matrix representation. (b) Failure-path based representation.]

An alternative approach to representing the Aho-Corasick automaton is using a trie that contains transitions from each state at depth d to its successors at depth d + 1, called forward transitions. That is, a state may not have a transition for each possible input symbol, but only for those symbols that actually transition to depth d + 1 in the trie. In addition, every state has a single failure transition, which is taken when no suitable forward transition exists: let label(s) be the word that leads from s₀ to s. The failure transition of state s is to the state s′ such that label(s′) is the longest proper suffix of label(s) among all DFA states. Note that label(s₀) = ε (that is, the empty word) is a suffix of all other labels, and therefore the failure transitions are properly defined. The longest failure path (namely, a path that consists of failure transitions only) that starts at state s is of length at most depth(s). This, in turn, implies that the total number of transitions taken when scanning an input (both forward and failure transitions) is at most twice the number of inspected symbols. Fig. 1.5(b) depicts a failure-transition based representation of the Aho-Corasick automaton of Fig. 1.5(a), where the solid edges are forward transitions and the dotted edges are failure transitions (for clarity, failure transitions to s₀ are omitted).

There is extensive research on accelerating the DPI process, both in hardware [14, 44, 83] and in software [49, 71]. Most software-based solutions [49, 71] accelerate the DPI process by optimizing its underlying data structure (namely, its DFA).
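For concreteness, the following minimal Python implementation builds the failure-transition representation described above and scans an input, using the same pattern set as Fig. 1.5. It is a textbook sketch for illustration; real engines use far more compact encodings, such as those discussed in Chapter 5.

```python
from collections import deque

def build_ac(patterns):
    """Minimal failure-transition Aho-Corasick automaton (cf. Fig. 1.5(b))."""
    goto, fail, out = [{}], [0], [set()]       # state 0 is the root
    for pat in patterns:                       # build the forward-transition trie
        s = 0
        for ch in pat:
            if ch not in goto[s]:
                goto.append({}); fail.append(0); out.append(set())
                goto[s][ch] = len(goto) - 1
            s = goto[s][ch]
        out[s].add(pat)
    queue = deque(goto[0].values())            # depth-1 states fail to the root
    while queue:                               # BFS computes failure transitions
        s = queue.popleft()
        for ch, t in goto[s].items():
            queue.append(t)
            f = fail[s]
            while f and ch not in goto[f]:
                f = fail[f]
            fail[t] = goto[f].get(ch, 0)
            out[t] |= out[fail[t]]             # inherit matches via suffixes
    return goto, fail, out

def scan(text, goto, fail, out):
    s, matches = 0, []
    for i, ch in enumerate(text):
        while s and ch not in goto[s]:
            s = fail[s]                        # take failure transitions
        s = goto[s].get(ch, 0)                 # then one forward transition
        for pat in out[s]:
            matches.append((i - len(pat) + 1, pat))
    return matches

g, f, o = build_ac(["E", "BE", "BD", "BCD", "CDBCAB", "BCAA"])
print(scan("CDBCABYTAFGBCD", g, f, o))         # [(0, 'CDBCAB'), (11, 'BCD')]
```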

Algorithmic Complexity Attack on DPI Engines

Crosby and Wallach were among the first to demonstrate an algorithmic complexity attack, on the commonly-used open hash data structure [43]: an attacker designs an input that requires O(n) elementary operations per insertion, instead of the O(1) operations required on average. More recent works show that many other systems and algorithms are vulnerable to algorithmic complexity attacks, including QuickSort [82], regular expression matchers [96], intrusion detection systems [50, 110], the Linux route-table cache [126], the SSL authentication algorithm [38], and the retransmission algorithm in wireless networks [21].

Complexity attacks on different components of NIDS/NIPS have been suggested in the past. For example, Bro maintains a hash table with the IP header fields of packets as keys; thus, by tailoring the traffic with specific headers, one can cause the hash insert operation to last significantly longer, resulting in Bro's failure. While in some cases modifying the algorithm suffices to mitigate the problem (e.g., Crosby and Wallach's attack can be solved by using hash functions that are not known to the attacker), this does not hold in general. We believe that only a system approach like MCA² can systematically alleviate the attack scenarios discussed in this dissertation.

Current multi-core implementations of NIDS/NIPS systems such as Snort [112] and Bro [34] split the load into many sequential sub-tasks in a pipeline manner. Other works, such as [125], suggest fine-grained pipelining for parallelizing network applications on multi-core architectures. This partitioning is effective if the processing cost of each sub-task is similar, which is usually not the case for NIDS/NIPS. A different line of research focuses on equally balancing the traffic flows between the different cores and performing the inspection in parallel [41, 59, 78, 86, 114]; thus, each core has the same functionality. The load balancing is based on both packet header parameters and some Layer 7 parameters. We note that such architectures are orthogonal to MCA² and can be applied to load-balance the work between the general threads that process the normal traffic. If MCA² is not used in conjunction with these architectures, they are all vulnerable to complexity attacks.

Becchi et al. [20] focus on the DPI engine and present a performance evaluation scheme for multiprocessor systems. The proposed design also splits the traffic between several cores, with the same DPI engine on each, which supports regular expression matching. Their study identifies and evaluates algorithmic and architectural trade-offs and limitations. It also highlights how the presence of caches affects the overall performance. However, the scheme is geared at optimizing the normal case and is vulnerable to complexity attacks, as we describe in Chapter 5. Such attacks can be mitigated by incorporating MCA² into this scheme as well.

Another multi-core load-balancing approach is to partition the patterns among the cores (cf. [124, 131, 134]) and duplicate each packet to all cores. Then, different DPI algorithms, each specialized in a different kind of pattern set, run on the different cores. In some cases, the partitioning itself is done so as to balance the load between the algorithms. It is important to note that, unlike in MCA², in this kind of architecture each packet is examined by several cores (each performing only part of the inspection). In addition, the traffic splitting is determined a-priori and does not take the incoming traffic into account, and is therefore vulnerable to a complexity attack on each core separately.

In recent years there have also been some works that use graphical processing units (GPUs) for DPI. The main approaches of such works are either to execute parallel DPI engines on the hundreds of GPU cores [63] or to use the parallelism for nondeterministic traversal of an NFA [37]. These approaches provide very high throughput while being less vulnerable to the attacks presented in this dissertation. However, they require the existence of expensive GPU hardware. As our algorithms are designed for general purpose CPUs, we do not compare them to such approaches.

With regard to DFA compression, there exists an extensive line of research (for either exact pattern matching or regular expressions) for hardware implementations [18, 31, 71, 72, 77, 94, 119, 122, 123, 127, 132], but most of these solutions are not applicable in our context, since they were tailored for specialized hardware implementations, although some works (e.g., [18, 72]) deal with the impact of the encoding of DFA transitions, as we do in Section 5.3. In our context, the most relevant paper is that of Tuck et al. [122], which focuses on improving the Aho-Corasick algorithm in hardware, but also shows that such compression solutions do not drastically affect the memory performance in software. Therefore, the authors conclude that such solutions should be taken into consideration in software as well. Our compression techniques reduce the space by 60% with respect to the solution proposed in [122], on real-life datasets such as Snort IDS [112] and ClamAV [40].

In addition, while Tuck et al. show only a minor worst-case performance degradation, they did not take into account the system architecture (and specifically, the influence of the cache). Our worst-case scenario, on the other hand, shows a major adverse impact on performance. Note that current state-of-the-art works [18, 19, 49, 72, 111, 115] suggest a set of memory-efficient schemes. The novelty of this work derives from the combination of two algorithms rather than the specifics of a single one. Thus, our space-efficient algorithm may be replaced with any of the above schemes within our MCA² architecture.

Kumar et al. [70] present several methods to reduce the size of regular-expression-based DFAs. One of the mechanisms used in that paper is based on the assumption that normal flows rarely match more than the first few symbols of any signature. Thus, the most frequently visited portions of the automaton are used to build a fast path DFA, and the rest of the automaton is represented by a separate NFA, which is the slow path. The authors suggest a solution which is somewhat similar to MCA², as it handles heavy traffic with a different algorithm and applies a lightweight classification algorithm to distinguish between heavy and normal traffic. In addition, [70] suggests a protection against DoS attacks, by attaching lower priority to flows with a higher probability of being malicious. Nevertheless, that work analyzes the case of a single core, and therefore could not benefit from the multi-core properties as MCA² does. Furthermore, the suggested protection in [70] fails under a continuous DoS attack, since the heavy packets that receive lower priority eventually overload the system buffer. MCA² is resilient also to DoS attacks of longer duration.

CompactDFA [31] introduces a compressed DFA design for string matching, for either a TCAM implementation or a software implementation. While for TCAMs CompactDFA achieves notably improved performance, its software implementation performance is inferior to both the classical Aho-Corasick algorithm and our compressed automaton encoding (presented in Section 5.3). In fact, CompactDFA is equivalent to our compressed automaton using only leaves compression.

is deduplication. Network data deduplication is used to reduce the number of bytes that must be transferred between endpoints, thus reducing the required bandwidth [7, 10, 36, 47, 85, 117, 128, 129, 135, 136]. In these works, the authors find a redundancy of 35%-45% in general traffic and up to 90% redundancy in web traffic, depending on the type of the traffic.

Leveraging repetitions in DPI engines is entirely different from deduplication, which requires extensions and modifications on both the server and client sides; a DPI engine scans traffic on the route between them and can neither force deduplication nor assume it is used. Furthermore, leveraging repetitions in DPI requires finding the repetitions on the fly, and repetitions can be short. Note that these requirements do not exist for deduplication solutions.

The work presented in [106] provides a limited solution to accelerate the DPI process using the Aho-Corasick algorithm. In this work, a repetition is defined as a repeated string that also starts at the same state in the DFA. Thus, this approach only works when scanning several copies of the exact same string, or when the same strings are stored over and over along with different starting states. However, not only can this approach miss a repeated string, it only checks sequential strings of fixed length. The solution is thus limited and can only take advantage of repetitions of large chunks of data.

Chapter 2

Methodology

The research presented in this dissertation requires multiple methods from different disciplines. In this chapter, we list the different methods.

Implementation. We implemented all our proposed algorithms and frameworks, and we use these implementations in our experiments in order to compare them to previous state-of-the-art solutions and to publicly available open-source solutions. All the code of our implementations is open-sourced and publicly available.

Equipment. When testing software implementations we use high-end servers, as listed in the experimental results section of each chapter in this dissertation. In some cases we interconnect the servers with an OpenFlow switch, as described in the corresponding experimental results section.

Experiments With Software Middleboxes. In Chapter 5 we show experiments with four software middleboxes: Snort [112], Bro [34], ClamAV [40], and Click [68]. In order to do that, we stream packets to these programs and measure the running time of their processing engine in order to compute their throughput. We measure latency using timestamps of ingress and egress packets. We use packets captured from a campus wireless network and from crawling top websites [9]. For attack traffic, we use synthesized traces that exploit the vulnerabilities being tested. For NIDS/AV systems such as Snort, Bro, and ClamAV, we use the real pattern sets provided for each tool, so that we perform the experiments with the most up-to-date security rules.

DPI Engines Simulation. In Chapters 4, 5, and 6 we simulate DPI engines separately from their enclosing middlebox application. We compare our implementations to the original implementations of the DPI engines, whether taken from actual middleboxes (Snort, Bro) or from other open-source repositories, as stated in the corresponding chapters. We added timing and byte counting in order to measure throughput and latency.

Chapter 3

OpenBox: A Software-Defined Framework for Developing, Deploying, and Managing Network Functions

3.1 Abstracting Packet Processing

We surveyed a wide range of common network functions to understand the stages of packet processing performed by each. Most of these applications use a very similar set of processing steps. For example, most NFs do some sort of header-based classification. Then, some of them (e.g., translators, load balancers) do some packet modification. Others, such as intrusion prevention systems (IPSs) and data leakage prevention systems (DLP), further classify packets based on the content of the payload (a process usually referred to as deep packet inspection (DPI)). Some NFs use active queue management before transmitting packets. Others (such as firewalls and IPSs) drop some of the packets, or raise alerts to the system administrator. In this section we discuss and present the abstraction of packet processing applications required to provide a framework for the development and deployment of a wide range of network functions.

Figure 3.1: Sample processing graphs for firewall (a) and intrusion prevention system (b) NFs.

3.1.1 Processing Graph

Packet processing is abstracted as a processing graph, which is a directed acyclic graph of processing blocks. Each processing block represents a single, encapsulated logic unit to be performed on packets, such as header field classification or header field modification. Each block has a single input port (except for a few special blocks) and zero or more output ports. When handling a packet, a block may push it forward to one or more of its output ports. Each output port is connected to an input port of another block using a connector. The notion of processing blocks is similar to Click's notion of elements [68], and a processing graph is similar to Click's router configuration. However, the OpenBox protocol hides lower-level aspects such as the Click push/pull mechanism, as these may be implementation-specific. A processing block can represent any operation on packets, or on data in general. A processing block can buffer packets and coalesce them before forwarding them to the next block, or split a packet. In our implementation, described in Section 3.3, we use Click as our data plane execution engine. We map each OpenBox processing block to a compound set of Click elements, or to a new element we implemented, if no Click element was suitable.

Figure 3.1 shows sample processing graphs for a firewall network function (Fig. 3.1(a)) and an IPS network function (Fig. 3.1(b)). The firewall, for example, reads packets, classifies them based on their header field values, and then either drops the packets, sends an alert to the system administrator and outputs them, or outputs them without any additional action. Each packet will traverse a single path of this graph.
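To make this abstraction concrete, the sketch below encodes the firewall graph of Figure 3.1(a) as a set of blocks and connectors. The block types are taken from the OpenBox protocol (see Table 3.1), but the field names and rule format here are illustrative only; the authoritative schema is in the protocol specification [99].

# A minimal sketch of the firewall processing graph of Figure 3.1(a).
# Block types follow the OpenBox protocol; field names are illustrative.
firewall_graph = {
    "blocks": [
        {"name": "in",    "type": "FromDevice",       "config": {"devname": "eth0"}},
        {"name": "class", "type": "HeaderClassifier", "config": {"rules": [
            {"priority": 1, "match": {"tcp_dst": 23}, "out_port": 0},  # drop telnet
            {"priority": 2, "match": {"tcp_dst": 80}, "out_port": 1},  # alert, then output
            {"priority": 3, "match": {},              "out_port": 2},  # default: output
        ]}},
        {"name": "drop",  "type": "Discard",  "config": {}},
        {"name": "alert", "type": "Alert",    "config": {"message": "HTTP traffic seen"}},
        {"name": "out",   "type": "ToDevice", "config": {"devname": "eth1"}},
    ],
    "connectors": [
        {"src": "in",    "src_port": 0, "dst": "class", "dst_port": 0},
        {"src": "class", "src_port": 0, "dst": "drop",  "dst_port": 0},
        {"src": "class", "src_port": 1, "dst": "alert", "dst_port": 0},
        {"src": "class", "src_port": 2, "dst": "out",   "dst_port": 0},
        {"src": "alert", "src_port": 0, "dst": "out",   "dst_port": 0},
    ],
}

Each connector wires an output port of one block to the input port of another; the classifier's three output ports realize the three paths of Figure 3.1(a).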

Figure 3.2: A naïve merge of the two processing graphs shown in Figure 3.1.

Some processing blocks represent a very simple operation on packets, such as dropping all of them. Others may have complex logic, such as matching the packet's payload against a set of regular expressions and outputting the packet to the port that corresponds to the first matching regex, or decompressing gzip-compressed HTTP packets. Our OpenBox protocol defines over 40 types of abstract processing blocks [99].

An abstract processing block may have several implementations in the data plane, depending on the underlying hardware and software in the OBI. For example, one block implementation might perform header classification using a trie in software, while another might use a TCAM for this task [121]. As further explained in Section 3.2 and in the protocol specification [99], the OBI informs the controller about the implementations available for each supported abstract processing block. The controller can then specify the exact implementation it would like the OBI to use, or let the OBI use its default settings and choose the implementation itself. The OpenBox protocol also allows injecting new custom blocks from the controller into the OBI, as described in detail in Section 3.2.2.1. Table 3.1 lists some of the fundamental abstract processing blocks defined by the OpenBox protocol. Each block has its own configuration parameters and additional information, as described in Section 3.2.2.

3.1.2 Merging Multiple Graphs

Our framework allows executing multiple network functions at a single data plane location. For example, packets may have to go through a firewall and then through an IPS. We could simply use multiple processing graphs at such locations, making packets traverse the graphs one by one, as shown in Figure 3.2. In this section we show how to merge multiple graphs while preserving the correct processing order and results.

Consider two network functions as shown in Figure 3.1, running at the same physical location in the data plane. We would like to merge the two graphs into one, such that

Abstract Block Name        | Role                           | Class
FromDevice                 | Read packets from interface    | T
ToDevice                   | Write packets to interface     | T
Discard                    | Drop packets                   | T
HeaderClassifier           | Classify on header fields      | C
RegexClassifier            | Classify using regex match     | C
HeaderPayloadClassifier    | Classify on header and payload | C
NetworkHeaderFieldRewriter | Rewrite fields in header       | M
Alert                      | Send an alert to controller    | St
Log                        | Log a packet                   | St
ProtocolAnalyzer           | Classify based on protocol     | C
GzipDecompressor           | Decompress HTTP packet/stream  | M
HtmlNormalizer             | Normalize HTML packet          | M
BpsShaper                  | Limit data rate                | Sh
FlowTracker                | Mark flows                     | M
VlanEncapsulate            | Push a VLAN tag                | M
VlanDecapsulate            | Pop a VLAN tag                 | M

Table 3.1: Partial list of abstract processing blocks. The class property is explained in Section 3.1.2.1.

the logic of the firewall is first executed on packets, followed by the execution of the IPS logic. Additionally, we would like to reduce the total delay incurred on packets by both NFs, by reducing the number of blocks each packet traverses. The desired result of this process is shown in Figure 3.3: we would like packets to go through one header classification instead of two, and then execute the logic that corresponds to the result of this classification.

3.1.2.1 Graph Merge Algorithm

Our graph merge algorithm must ensure that correctness is maintained: a packet must go through the same path of processing steps, such that it will be classified, modified, and queued the same way as if it went through the two distinct graphs. We also want

to make sure that static operations such as alert or log will be executed on the same packet, at the same state, as they would without merging. Our goal in this process is to reduce the per-packet latency, so we would like to minimize the length of paths between input and output terminals in the graph.

Figure 3.3: The result of our graph merge algorithm for the two processing graphs shown in Figure 3.1.

Algorithm 1 Path compression algorithm
1:  function compressPaths(G = (V, E), root ∈ V)
    Require: G is normalized
2:    Q ← empty queue
3:    Add (root, 1) to Q
4:    start ← null
5:    while Q is not empty do
6:      (current, port) ← Q.poll()
7:      if current is a classifier, modifier or shaper then
8:        if start is null then                ▷ Mark start of path
9:          start ← current
10:         for each outgoing connector c from current do
11:           if c.dst not in Q then
12:             Add (c.dst, c.srcPort) to Q
13:         continue
14:       else                                 ▷ start is not null — end of path
15:         end ← current
16:         if start and end are mergeable classifiers then
17:           merged ← merge(start, end)
18:           for each output port p of merged do
19:             Clone the path from start's correct
20:               successor for port p to end (exclusive)
21:             Mark clone of last block before end
22:             Clone the sub-tree from end's correct
23:               successor for port p
24:             Rewire connectors from merged port p
25:               to the clones and between clones
26:           current ← merged
27:         else                               ▷ Not mergeable classifiers
28:           if start and end are classifiers then
29:             Treat start and end as a single classifier
30:           Find next classifier, modifier or shaper
31:             and mark the last block before it
32:           for each outgoing connector c from current do
33:             Find mergeable blocks from c to a marked
34:               block. Merge and rewire connectors
35:           if graph G was changed then
36:             Restart compressPaths(G, current)
37:      else                                  ▷ Skip statics, terminals
38:        for each outgoing connector c from current do
39:          if c.dst not in Q then
40:            Add (c.dst, c.srcPort) to Q
41:        continue
42:    return G

In order to model the merge algorithm, we classify blocks into five classes:

- Terminals (T): blocks that start or terminate the processing of a packet.
- Classifiers (C): blocks that, given a packet, classify it according to certain criteria or rules and output it to a specific output port.
- Modifiers (M): blocks that modify packets.
- Shapers (Sh): blocks that perform traffic shaping tasks such as active queue management or rate limiting.
- Statics (St): blocks that do not modify the packet or its forwarding path, and in general do not belong to the classes above.

We use these classes in our graph merge algorithm in order to preserve the correctness of the merged graph: we can change the order of static blocks, or move classifiers before static blocks, but we cannot move classifiers across modifiers or shapers, as this might lead to incorrect classification (the sketch below illustrates these rules). We can merge classifiers, as long as we pay special attention to the rule set combination and output paths. We can also merge statics and modifiers in some cases. The right column in Table 3.1 specifies the class of each block.

Our algorithm works in four stages. First, it normalizes each processing graph to a processing tree, so that paths do not converge.¹ Then, it concatenates the processing trees in the order in which the corresponding NFs are processed. Note that a single terminal in the original processing graph may correspond to several leaves in the processing tree. A copy of the subsequent processing tree will be concatenated to each of these leaves. Nevertheless, the length of any path in the tree (from root to leaf) is exactly the same as it was in the original processing graph, without normalization.² While the number of blocks in the merged tree can increase multiplicatively,³ in practice this rarely happens, and most importantly, the number of blocks in the graph has no effect on OBI performance.

¹ The process of graph normalization may theoretically lead to an exponential number of blocks. This only happens with a certain graph structure, and it never happened in our experiments. However, if it does, our system rolls back to the naïve merge.
² The process of graph concatenation requires careful handling of special cases with regard to input and output terminals. We address these cases in our implementation. However, due to space considerations, we omit the technical details from the dissertation.
³ For graphs G₁ = (V₁, E₁) and G₂ = (V₂, E₂), the number of blocks in the merged graph is up to (|V₁|/2)·(1 + |V₂|/2).
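The reordering rules above boil down to a simple predicate over block classes. The following sketch is a simplification for illustration, not our actual implementation:

# Simplified reordering rules over block classes (cf. Table 3.1):
# 'T' = terminal, 'C' = classifier, 'M' = modifier, 'Sh' = shaper, 'St' = static.
def can_move_before(block_class: str, predecessor_class: str) -> bool:
    """May a block of class `block_class` be moved in front of its
    predecessor without changing the processing semantics?"""
    if block_class == "C":
        # Classifiers may jump over statics, but never over modifiers or
        # shapers, whose effects could change the classification result.
        return predecessor_class == "St"
    if block_class == "St":
        # Statics may be freely reordered among themselves.
        return predecessor_class == "St"
    return False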

The significant parameter is the length of paths, as longer paths mean greater delay. Moreover, two graphs need not be merged if the overheads are too high; the controller is responsible for avoiding such a merger.

As the processing tree is in fact a collection of paths, the third stage in our algorithm is re-ordering and merging blocks along a path. This is shown in Algorithm 1. As mentioned before, the algorithm works by examining the class of the blocks and deciding whether blocks can be merged (Line 7). Perhaps the most interesting case is merging two classifier blocks. Specifically, classifier blocks of the same type can support merging by having their own merge logic. The merge should resolve any conflicts according to the ordering and priorities of the two input applications (if applicable) and the priority of the merged rules (Lines 18-29). For example, in our implementation, the HeaderClassifier block is mergeable: it implements a specific Java interface and a mergeWith(...) method, which creates a cross-product of the rules from both classifiers, orders them according to their priority, removes duplicate rules caused by the cross-product and empty rules caused by priority considerations, and outputs a new classifier that uses the merged rule set (a sketch of such a cross-product merge appears at the end of this subsection). After merging classifier blocks, our algorithm rewires the connectors and clones the egress paths from the classifiers such that packets will correctly go through the rest of the processing blocks. The merge algorithm is then applied recursively on each path, to compress these paths when possible. See the paths from the header classifier block in Figure 3.3 for the outcome of this process in our example.

It is also possible to merge static and modifier blocks, if they are of the same class and type and their parameters do not conflict. For example, two instances of a rewrite-header block can be merged in constant time if they modify different fields, or the same field with the same value (see Algorithm 1).

The last stage of our algorithm takes place after the merge process is completed. It eliminates copies of the same block and rewires the connectors to the remaining single copy, so that eventually the result is a graph as shown in Figure 3.3, and not necessarily a tree. Note that the diameter of the merged processing graph, as shown in Figure 3.3, is shorter (six blocks) than the diameter of the graph we would have obtained from a naïve merge (seven blocks, see Figure 3.2).

The correctness of the process stems from the following: First, any path a packet would take on the naïvely merged graph exists, and will be taken by the same packet, on the normalized and concatenated graph. Second, when merging classifiers we duplicate paths such that the previous property holds. Third, we only eliminate a copy of a block if the remaining copy points to exactly the same path (or its exact copy).
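The following sketch illustrates the cross-product merge of two header classifiers. It is a simplified Python rendition of the Java mergeWith(...) logic described above (duplicate-rule elimination is omitted for brevity); a rule is modeled as a mapping of header fields to required values plus an output port, with list order encoding priority.

# Simplified cross-product merge of two header classifiers (cf. mergeWith).
# A rule is a (fields, out_port) pair; `fields` maps header fields to
# required values, and list order encodes rule priority.
def intersect(fields_a, fields_b):
    """Combined condition of two rules, or None if no packet can match both."""
    merged = dict(fields_a)
    for field, value in fields_b.items():
        if field in merged and merged[field] != value:
            return None  # contradictory constraints: an empty rule, dropped
        merged[field] = value
    return merged

def merge_classifiers(rules_a, rules_b):
    """Cross-product of two prioritized rule lists. Each merged rule carries
    the pair of output-port decisions of the two original classifiers, which
    the rewiring step uses to clone and reconnect the egress paths."""
    merged = []
    for pa, (fields_a, port_a) in enumerate(rules_a):
        for pb, (fields_b, port_b) in enumerate(rules_b):
            fields = intersect(fields_a, fields_b)
            if fields is not None:
                merged.append((pa, pb, fields, (port_a, port_b)))
    merged.sort(key=lambda rule: (rule[0], rule[1]))
    return [(fields, ports) for _, _, fields, ports in merged]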

Figure 3.4: Sample OpenBox network with distributed data plane processing (as in Figure 3.5).

3.2 OpenBox Framework Architecture

In this section we describe the OpenBox framework in detail by dividing it into layers, as shown in Figure 1.1: from OpenBox service instances (OBIs) in the data plane at the bottom, through the OpenBox protocol and the OpenBox controller (OBC), to the applications at the top.

3.2.1 Data Plane

The OpenBox data plane consists of OpenBox service instances (OBIs), which are low-level packet processors. An OBI receives a processing graph from the controller (described in Section 3.2.3). The OBI applies the graph it was assigned to the packets that traverse it. It can also answer queries from the controller and report its load and system information. OBIs can be implemented in software or hardware. Software implementations can run in a VM and be provisioned and scaled on demand.

An OBI provides implementations for the abstract processing blocks it supports, and declares its implemented block types and their corresponding abstract blocks in the Hello message sent to the OBC. The controller may use a specific implementation in the processing graph it sends to the OBI, or use the abstract block name, leaving the choice of the exact implementation to the OBI.

Figure 3.5: Distributed processing in the data plane with the processing graph from Figure 3.3. (a) First OBI: performs header classification on a hardware TCAM and, if necessary, forwards the results as metadata along with the packet to the next OBI. (b) Second OBI: receives the header classification results and applies the corresponding processing path.

An OBI may be in charge of only part of a processing graph. In this case, one or more additional OBIs should be used to provide the remaining processing logic. A packet would go through a service chain of all corresponding OBIs, where each OBI attaches metadata (using some encapsulation technique [48, 58, 101]; see also Section 3.2.4) to the packet before sending it to the next OBI. Upon receiving a packet from a previous OBI, the current OBI decodes the attached metadata and acts according to it.

For example, consider the merged processing graph shown in Figure 3.3 and suppose its header classification block can be implemented in hardware, e.g., using a TCAM. Thus, we can realize this processing graph using two OBIs. The first OBI, residing on a server or a dedicated middlebox equipped with the appropriate hardware, performs only packet classification. Only if the packet requires further processing does the first OBI store the classification result as metadata, attach this metadata to the packet, and send it to another, software-based OBI, to perform the rest of the processing. The split processing graphs are illustrated in Figure 3.5. Even an SDN switch that supports packet encapsulation could be used as the first OBI.

Figure 3.4 illustrates this scenario in a network-wide setting: packets from host A (Step 1 in the figure) to host B should go through the firewall and the IPS. This is realized using two OBIs as described above. The first performs header classification on

hardware (Step 2), then sends the results as metadata attached to the packets (Step 3) to the next, software-based OBI. In this example, this OBI is scaled to two instances, multiplexed by the network for load balancing. It extracts the metadata from the packets (Step 4), performs the rest of the processing graph, and sends the packets out without metadata (Step 5). Eventually the packets are forwarded to host B (Step 6).

In our implementation, we use NSH [101] to attach metadata to packets. Other methods such as VXLAN [79], Geneve [58], and FlowTags [48] can also be used, but may require increasing the MTU in the network, which is a common practice in large-scale networks [98]. Different OpenBox applications may require metadata of different sizes. In most cases, we estimate the metadata to be a few bytes, as it should only tell the subsequent OBI which path in the processing graph it should follow. Nevertheless, it is important to note that attaching metadata to packets is required only when two blocks that originated from the same OpenBox application are split between two OBIs.

Finally, an OBI can use external services for out-of-band operations such as logging and storage. The OpenBox protocol defines two such services, for packet logging and for packet storage (which can be used for caching or quarantine purposes). These services are provided by an external server, located either locally on the same machine as the OBI or remotely. The addresses and other parameters of these servers are set for the OBI by the OBC.

3.2.2 The OpenBox Protocol

The OpenBox communication protocol [99] is used by OBIs and the controller (OBC) to communicate with each other. The protocol defines a set of messages for this communication and a broad set of processing blocks that can be used to build network function applications.

Figure 3.6 presents the connection setup process, as defined by the protocol. When an OBI starts, it sends the OBC a Hello message that contains information about the instance, such as its identification and supported capabilities. The OBC then sets the required configuration on the OBI. This may include setting parameters such as the KeepAlive message interval, or addresses for the log and storage servers. It can also include injecting a custom module into the OBI, if this capability is supported by the OBI (see Section 3.2.2.1). Eventually, the OBC sends a processing graph to the OBI, and by that it sets the packet processing logic for the OBI. A barrier request is then sent to ensure that the OBI finishes configuration before processing any further messages from the OBC.
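For illustration, the setup sequence of Figure 3.6 might carry JSON payloads along the following lines (shown as Python literals; all field names are indicative only, and the authoritative message formats are in the protocol specification [99]):

# Illustrative JSON payloads for the setup sequence of Figure 3.6
# (field names are indicative only; see the protocol specification [99]).
hello = {
    "type": "Hello",
    "instance_id": "obi-7",
    "capabilities": {
        "blocks": ["FromDevice", "ToDevice", "Discard",
                   "HeaderClassifier", "RegexClassifier", "Alert"],
        "custom_module_injection": True,
    },
}
set_parameters = {
    "type": "SetParametersRequest",
    "keepalive_interval_ms": 5000,
    "log_server":     {"address": "10.0.0.5", "port": 5514},
    "storage_server": {"address": "10.0.0.6", "port": 6379},
}
set_graph = {
    "type": "SetProcessingGraphRequest",
    "graph": {"blocks": [], "connectors": []},  # e.g., the graph of Section 3.1.1
}
barrier = {"type": "BarrierRequest"}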

Figure 3.6: OpenBox connection setup process.

Abstract processing blocks are defined in the protocol specification. Each abstract block has its own configuration parameters. In addition, similarly to Click elements [68], blocks may have read handles and write handles. A read handle in our framework allows the controller, and applications that run on top of it, to request information from a specific processing block in the data plane. For example, it can ask a Discard block how many packets it has dropped. A write handle lets the control plane change a value of a block in the data plane. For example, it can be used to reset a counter, or to change a configuration parameter of a block.

Figure 3.7 shows the protocol definition for the RegexClassifier block, which classifies packets based on regular-expression matching, and is used by the IPS application shown in Figure 3.1(b). Different OBIs may have different implementations for such a RegexClassifier block. For example, our implementation uses Google's RE2 library [103], while alternative implementations could use PCRE [2] or other algorithms.
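For example, reading the drop counter of a Discard block could be expressed as a request/response pair along these lines (again with illustrative field names):

# Illustrative read-handle request/response (field names indicative only).
read_request = {
    "type": "ReadRequest",
    "block": "drop",            # a Discard block in the current graph
    "handle": "drop_count",     # read handle exposed by that block
}
read_response = {
    "type": "ReadResponse",
    "block": "drop",
    "handle": "drop_count",
    "value": 1024,              # packets dropped so far
}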

Figure 3.7: The protocol definition for the RegexClassifier block.

3.2.2.1 Custom Module Injection

An important feature of the OpenBox protocol allows injecting custom software modules from the control plane to OBIs, if supported by the specific OBI implementation. Our implementation, described in Section 3.3, supports this capability. This allows application developers to extend existing OBIs in the data plane without having to change their code, or to compile and re-deploy them.

To add a custom module, an application developer creates a binary file of this module and then defines any new blocks implemented by this module, in the same way as existing blocks are defined in the protocol. The format of the binary file depends on the target OBI (in our implementation the file is a compiled Click module). When such a module is used, the controller sends an AddCustomModuleRequest message to the OBI, providing the module as a binary file along with metadata such as the module name. In addition, this message contains the information required to translate the configuration from the OpenBox notation to the notation expected by the lower-level code in the module.

A custom module should match the target OBI. A module developer may create multiple versions of a custom module and let the controller choose the one that is best suited to the actual target OBI.

3.2.2.2 Discussion: OpenBox Protocol Extension for Infrastructure Offloading

The OpenBox protocol can be used not only between the OpenBox controller and OBIs, but also between the OpenBox controller and the underlying infrastructure on which OBIs and other virtual network functions run. Thereby, the OpenBox controller can programmatically offload processing from OBIs to the underlying infrastructure, given that this infrastructure supports such offloading. This includes, for example, hardware

switches, hypervisors and their internal virtual switches, and network interface cards (NICs). Nowadays, hardware switches are becoming more and more customizable and support a growing number of complex operations on network traffic [25, 53]. Recent works have also suggested adding stateful operations and data storage to switches to allow even more complex processing in switch hardware (e.g., [109]). Smart NICs are also emerging, mainly for datacenter networks, showing increased processing capabilities and programmability [16].

The OpenBox protocol can be used almost as-is to offload processing from OBIs and VNFs to the underlying infrastructure, by specifying partial processing graphs to the infrastructure, similarly to the ones described in Section 3.2.1. For example, a hypervisor may provide packet classification tasks through its virtual switch, which in any case performs packet classification for packets to and from the VNFs, or OBIs, that run on top of it.

In some cases, the underlying infrastructure has memory that can be shared with the OBIs that run on top of it. In such cases, when further processing is required for packets, this shared memory can be used to transfer the results of previous processing (e.g., a modified packet and classification results), instead of the encapsulation method suggested in Section 3.2.1. In the hypervisor case, for example, the hypervisor can place packets that require further processing by guest OBIs, or NFs, in specific queues in its memory that are accessible to the guest, instead of using NSH. Packets that do not require further processing can immediately continue to their next hop in the service chain.

3.2.3 Control Plane

The OpenBox controller (OBC) is a logically centralized software server. It is responsible for managing all aspects of the OBIs: setting their processing logic, and controlling provisioning and scaling of instances. In an SDN network, the OBC can be attached to a traffic-steering application [100] to control the chaining of instances and packet forwarding between them. The OBC and OBIs communicate through a dual REST channel over HTTPS, and the protocol messages are encoded in JSON [45].

The OBC provides an abstraction layer that allows developers to create network-function applications by specifying their logic as processing graphs. We use the notion of

segments to describe logical partitions in the data plane. Different segments can describe different departments, administrative domains, or tenants, and they can be configured with different policies and run different network function applications. Segments are hierarchical, so a segment can contain sub-segments. Each OBI belongs to a specific segment (which can, in turn, belong to a broader segment). Applications declare their logic by setting processing graphs for segments, or for specific OBIs. This approach allows for flexible policies in the network with regard to security, monitoring, and other NF tasks, and, by definition, supports the trend of micro-segmentation [118]. Micro-segmentation reduces the size of network segments to allow highly customized network policies.

Upon connection of an OBI, the OBC determines the processing graphs that apply to this OBI in accordance with its location in the segment hierarchy. Then, for each OBI, the controller merges the corresponding graphs into a single graph and sends this merged processing graph to the instance, as discussed in Section 3.1.2. Our OBC implementation uses the algorithm presented in Section 3.1.2.1 to merge the processing graphs.

The controller can request system information, such as CPU load and memory usage, from OBIs. It can use this information to scale and provision additional service instances, or to merge the tasks of multiple underutilized instances and take some of them down. Applications can also be aware of this information and, for example, reduce the complexity of their processing when the system is under heavy load (to avoid packet loss or to preserve SLAs).

The OBC, which knows the network topology and the OBI locations, is in charge of setting the forwarding policy chains. It does so on the basis of the actual deployment of processing graphs to OBIs. As OpenBox applications are defined per segment, the OBC is in charge of deciding which OBI(s) in a segment will be responsible for a certain task, and of directing the corresponding traffic to this OBI.

3.2.4 OpenBox Applications

An application defines a single network function (NF) by statement declarations. Each statement consists of a location specifier, which specifies a network segment or a specific OBI, and a processing graph associated with this location. Applications are event-driven, where upstream events arrive at the application

through the OBC. Such events may cause applications to change their state and may trigger downstream reconfiguration messages to the data plane. For example, an IPS can detect an attack when alerts are sent to it from the data plane, and then change its policies in order to respond to the attack; these policy changes correspond to reconfiguration messages in the data plane (e.g., block specific network segments, block other suspicious traffic, or block outgoing traffic to prevent data leakage). Another example is a request for load information from a specific OBI. This request is sent from the application through the OBC to the OBI as a downstream message, which will later trigger an event (sent upstream) with the data.

Although events may be frequent (this depends on the applications), graph changes are generally infrequent, as application logic does not change often. Applications that are expected to change their logic too frequently may be marked so that the merge algorithm will not be applied to them. The controller can also detect and mark such applications automatically.

3.2.4.1 Multi-Tenancy

The OpenBox architecture allows multiple network tenants to deploy their NFs through the same OBC. For example, an enterprise chief system administrator may deploy the OpenBox framework in the enterprise network and allow department system administrators to use it in order to deploy their desired network functions. The OBC is responsible for the correct deployment in the data plane, including preserving application priority and ordering. Sharing the data plane among multiple tenants helps reduce the cost of ownership and operating expenditure, as OBIs in the data plane may have much higher utilization, as discussed in Section 3.4.

3.2.5 Application State Management

Network functions are, in many cases, stateful. That is, they store state information and use it when handling multiple packets of the same session. For example, Snort stores information about each flow, which includes, among other things, its protocol and other flags it may be marked with [113]. Since the state information is used in the data plane of NFs as part of their packet processing, it is important to store this information in the data plane, so it can be quickly fetched and updated. It cannot, for example, be stored in the control plane.

Figure 3.8: Test setups under the pipelined NF configuration. (a) Two-firewall service chain; (b) firewall and IPS service chain; (c) test setup with OpenBox.

Hence, the OpenBox protocol defines two data structures that are provided by the OBIs, in the data plane, for storing and retrieving state information. The metadata storage is a short-lived key-value storage that can be used by an application developer to pass information along with a specific packet as it traverses the processing graph. The information in the metadata storage persists over the OBI service chain of a single packet. It can be encapsulated and sent from one OBI to another, along with the processed packet, as described in Section 3.2.1.

The other key-value storage available to applications in the data plane is the session storage. This storage is attached to a flow and is valid as long as the flow is alive. It allows applications to pass processing data between packets of the same flow. This is useful when programming stateful NF applications such as Snort, which stores flow-level metadata information such as flow tags, gzip window data, and search state. Frameworks such as OpenNF [57] can be used as-is to allow replication and migration of OBIs along with their stored data, to ensure correct behavior of applications in such cases.
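The sketch below illustrates how a stateful block might use the two storages. The get/put API shown here is a simplification for illustration; the actual interface is defined by the OpenBox protocol.

# Illustrative use of the per-packet metadata storage and the per-flow
# session storage by a stateful block (simplified API).
def process(packet, metadata, session):
    flow_key = (packet.src_ip, packet.src_port,
                packet.dst_ip, packet.dst_port, packet.proto)
    state = session.get(flow_key) or {"pkts": 0}
    state["pkts"] += 1
    session.put(flow_key, state)   # lives as long as the flow is alive
    metadata.put("fw_class", 2)    # lives only for this packet's OBI chain
    return packet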

3.3 Implementation

We have implemented the OpenBox framework in two parts: a software-based OBI and an OpenBox controller. We provide a simple installation script that installs both on a Mininet VM [75]. All our code is publicly available, in the OpenBoxProject repositories.

3.3.1 Controller Implementation

Our controller is implemented in Java, in about 7500 lines of code. It runs a REST server for communication with OBIs and for the management API. The controller exposes two main packages. The first package provides the basic structures defined in the protocol, such as processing blocks, data types, etc. The other package lets the developer define applications on top of the controller, register them, and handle events. It also allows sending requests such as read requests and write requests, which in turn invoke read and write handles, accordingly, in the data plane (as described in Section 3.2.2). When an application sends a request, it provides the controller with callback functions that are called when a response arrives back at the controller. The controller handles the multiplexing of requests and the demultiplexing of responses.

Along with the controller implementation, we have implemented several sample applications, such as a firewall/ACL, an IPS, a load balancer, and more. In addition, we implemented a traffic steering application as a plugin for the OpenDaylight OpenFlow controller [52]. We use it to steer the traffic between multiple OBIs.

3.3.2 Service Instance Implementation

Our OBI implementation is divided into a generic wrapper and an execution engine. The generic wrapper is written in Python, in about 5500 lines of code. It handles communication with the controller (via a local REST server) and with the storage and log servers, and translates protocol directives to the specific underlying execution engine. The execution engine in our implementation is the Click modular router [68], along with an additional layer of communication with the wrapper and storage server, and several additional Click elements that are used to provide the processing blocks defined in the protocol (a single OpenBox block is usually implemented using multiple Click elements). All our code for the execution engine is written as a Click user-level module, without any modification to the core code of Click. The code of this module is written in C++ and consists of about 2400 lines. Note that by changing the translation module in the wrapper, the underlying execution engine can be replaced. This is necessary, for example, when using an execution engine implemented in hardware.

Finally, our OBI implementation supports custom module injection, as described in Section 3.2.2.1. An application developer who wishes to extend the OBI with new processing blocks should write a new Click module (in C++) that implements the underlying Click elements of these new blocks, and implement a translation object (in Python) that helps our wrapper translate the new OpenBox block definitions to use the code provided in the new Click module.
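For illustration, a translation object for a hypothetical MyCounter block could look roughly as follows; the class, method, and element names here are invented for this example and do not reflect the exact wrapper interface.

# Illustrative translation object for a hypothetical "MyCounter" block.
# Class, method, and element names are invented for this example.
class MyCounterTranslation:
    block_type = "MyCounter"          # OpenBox block added by the module

    def to_click(self, name, config):
        """Emit the Click element declaration realizing the block."""
        threshold = config.get("threshold", 0)
        return ["%s :: MyCounterElement(THRESHOLD %d)" % (name, threshold)]

    def translate_read_handle(self, handle):
        """Map an OpenBox read handle to the Click element's handler name."""
        return {"packet_count": "count"}.get(handle)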

Figure 3.9: Test setups under the distinct service chain configuration. (a) Two-firewall service chains; (b) a firewall service chain and an IPS service chain; (c) test setup with OpenBox.

3.4 Experimental Evaluation

3.4.1 Experimental Environment

Our experiments were performed on a testbed with the following machines: a traffic generator with an Intel Xeon E v3 CPU and 32GB RAM, and a hypervisor with a dual Intel Xeon E v3 CPU and 384GB RAM. Both machines run Linux Ubuntu 14.04, with VMs running on the hypervisor using KVM. The machines are connected through a 10 Gbps network. All NFs and OBIs, as well as the OBC, run on top of the hypervisor. We play a packet trace captured from a campus wireless network on the traffic generator, at 10 Gbps. All packets go through the hypervisor on their corresponding service chain, as defined for each test.

3.4.2 Test Applications

For our tests we have implemented a set of sample OpenBox applications. For each of the following applications, we also created a reference stand-alone version, in Click.

Sample Firewall. We use a ruleset of 4560 firewall rules from a large firewall vendor. Our OpenBox firewall application reads the rules from a file and generates a processing graph that raises an alert for packets that match any non-default rule. In order to correctly measure throughput, we have modified the rules so that packets are never dropped. Instead, all packets are transmitted untouched to the output interface.

Sample IPS. We use Snort web rules to create a sample IPS that scans both headers and payloads of packets. If a packet matches a rule, an alert is sent to the controller. As in the firewall, we have modified the rules to avoid dropping packets.

Sample Web Cache. Our web cache stores web pages of specific websites. If an HTTP request matches cached content, the web cache drops the request and returns the cached content to the sender. Otherwise, the packet continues untouched to the output interface. When measuring the performance of service chains that include this NF, we only send packets that do not match cached content.

Sample Load Balancer. This NF uses Layer 3 classification rules to split traffic to multiple output interfaces.

3.4.3 Test Setup

We tested our implementation of the OpenBox framework using a single controller and several VMs. Each VM either runs Click with the reference stand-alone NF implementation, or an OBI that is controlled by our OBC.

We consider two different NF configurations in our tests. In the first configuration, packets from the traffic generator to the sink go through a pipeline of two NFs. The throughput in such a configuration is dominated by the throughput of the slowest NF in the pipeline. The latency is the total time spent in processing by the two NFs in the pipeline. Figure 3.8 illustrates two test setups under this configuration, without OpenBox: in the first test, packets go through two firewalls with distinct rule sets (Fig. 3.8(a)); in the second test, packets first go through a firewall and then through an IPS (Fig. 3.8(b)). With OpenBox (Fig. 3.8(c)), all corresponding NFs are executed on the same OBI, and the OBI is scaled to use the same two VMs used without OpenBox. In this case, traffic is multiplexed to the two OBIs by the network forwarding plane. We show that the OpenBox framework reduces the total latency (due to the merging of the two processing graphs) and increases the overall throughput (because of the OBI scaling).

Another NF configuration we consider is when packets of different flows go through different, distinct service chains, and thus visit different NFs. Under this configuration we test the following scenarios, as illustrated in Figure 3.9: in Figure 3.9(a) packets go through either Firewall 1 or Firewall 2, while in Figure 3.9(b) packets go through either a firewall or an IPS. We use the same rule sets as in the previous tests. Merging the two NFs in this case provides dynamic load balancing by leveraging

off-peak times of one NF to provide higher throughput to the other. We use the OBI setup as shown in Figure 3.9(c), this time only applying the processing graph of one NF to each packet, according to its type or flow information.

Note that in both configurations, each NF could come from a different tenant, or a different administrator. The different NFs are not aware of each other, but as discussed in Section 3.2, they may be executed in the same OBI.

Network Functions     | VMs Used | Throughput [Mbps] | Latency [µs]
Firewall              | —        | —                 | —
IPS                   | —        | —                 | —
Regular FW+FW chain   | —        | —                 | —
OpenBox: FW+FW OBI    | —        | — (+90%)          | 48 (-50%)
Regular FW+IPS chain  | —        | —                 | —
OpenBox: FW+IPS OBI   | —        | — (+86%)          | 80 (-35%)

Table 3.2: Performance results of the pipelined NFs configuration (Figure 3.8).

Figure 3.10: Achievable throughput for the distinct service chain configuration (Figures 3.9(a) and 3.9(b)) compared to the achievable throughput of the two OBIs that merge both NFs (Figure 3.9(c)). (a) Two firewalls; (b) firewall and IPS. Each panel plots the maximal throughput of one NF [Mbps] against the other, showing the dynamic load balancing throughput region against the static one.

3.4.4 Results

Data Plane Performance

Pipelined NFs. Table 3.2 shows the results of the pipelined NF configuration. Without OpenBox, the throughput is bounded by the throughput of the slowest NF in the pipeline. Thus, in the two pipelined firewalls, the overall throughput is the throughput of a single firewall (both firewalls show the same performance, as we split the rules evenly).

In the pipelined firewall and IPS service chain, the IPS dominates the overall throughput, as it is much slower than the firewall, since it performs deep packet inspection. The overall latency is the sum of the latencies of both NFs in the chain, as packets must go through both VMs.

With OpenBox, the controller merges the two NFs into a single processing graph that is executed by OBIs on both VMs. Packets go through one of the VMs and are processed according to that processing graph. We use static forwarding rules to load-balance the two OBIs. The overall throughput is that of the two OBIs combined, while the overall latency is that of a single OBI, as packets are only processed by one of the VMs. OpenBox improves the throughput by 90% in the two-firewall setup and by 86% in the firewall and IPS setup. It reduces latency by 50% and 35% in these two setups, respectively.

Distinct Service Chains. Figure 3.10(a) shows the achievable throughput regions for the distinct service chain configuration with two firewalls, with and without OpenBox. Without OpenBox (see the red, dashed lines), each firewall can utilize only the VM it runs on, and thus its throughput is limited to the maximal throughput it may have on a single VM. With OpenBox (see the blue, solid line), each firewall can dynamically (and implicitly) scale when the other NF is under-utilized. We note that if both NFs are likely to be fully utilized at the same time, merging them may not be worthwhile, but they can still be implemented with OpenBox and deployed in different OBIs.

Figure 3.10(b) shows the achievable throughput regions when merging a firewall with an IPS. In this case the IPS dominates OBI throughput, and it might be less beneficial to merge the two service chains, unless the firewall is never fully utilized while the IPS is often over-utilized.

Discussion. Two factors help OpenBox improve data plane performance. First, by merging the processing graphs and eliminating multiple classifiers, OpenBox reduces latency and the total computational load. Second, OpenBox allows more flexible NF deployment and replication than monolithic NFs, so packets traverse fewer VMs than in the case where each NF is deployed separately. This flexible deployment also allows resource sharing.

Figure 3.11: Service chain for the graph merge algorithm test: gateway firewall → web cache → department firewall → load balancer.

Figure 3.12: Scalability of the graph merge algorithm (merge time [ms] as a function of merged graph size [number of connectors]).

Performance of the Graph Merge Algorithm. In order to evaluate the impact of our graph merge algorithm on the performance of the data plane, we considered a longer service chain, as illustrated in Figure 3.11. In this service chain, packets go through a first firewall and then through a web cache. If not dropped, they continue to another firewall, and eventually go through an L3 load balancer. We implemented this service chain in OpenBox by merging the four NFs into a single processing graph. When using a naïve merge, where graphs are simply concatenated to each other, we obtain 749 Mbps throughput (on a single VM, single core) for packets that do not match any rule that causes a drop or DPI. When using our graph merge algorithm, the throughput for the same packets is 890 Mbps (a 20% improvement).

Figure 3.12 evaluates the scalability of the graph merge algorithm. We tested the algorithm with growing sizes of input graphs on the Xeon E CPU. The merge algorithm runs on the order of milliseconds, and its running time grows nearly linearly with the size of the graphs.

Control Plane Communication. In addition to data plane performance, we also evaluated the performance of the communication with the controller. Table 3.3 shows the round-trip time for several common

protocol operations. SetProcessingGraph is the process of sending a SetProcessingGraphRequest from the OBC to an OBI with a new processing graph for the OBI, reconfiguring the execution engine, and returning a SetProcessingGraphResponse from the OBI to the OBC. A KeepAlive message is a short message sent from an OBI to the OBC at a fixed interval, as defined by the OBC. GlobalStats is the process of sending a GlobalStatsRequest from the OBC to an OBI and returning a GlobalStatsResponse from the OBI to the OBC, with the OBI system load information (e.g., CPU and memory usage). AddCustomModule is the process of sending an AddCustomModuleRequest from the OBC to a supporting OBI with a custom binary module that extends the OBI behavior. In this test we used a module of size 22.3 KB, which adds support for a single processing block.

Operation          | Round Trip Time
SetProcessingGraph | 1285 ms⁴
KeepAlive          | 20 ms
GlobalStats        | 25 ms
AddCustomModule    | 124 ms

Table 3.3: Average round-trip time for common messages between OBC and OBIs, running on the same physical machine.

As these times were measured when the OBC and the OBI run on the same physical machine, they ignore network delays and mainly measure software delay. Note that these delays are imposed on the control plane communication channel and have no direct impact on the data plane performance. However, while networking delays imposed by switches and routers are relatively low (measured in nano- or microseconds), software delays of network functions may add delays on the millisecond scale, and thus it is important to reduce such delays to a minimum.

⁴ This operation involves (re-)configuration of Click elements, which requires polling the Click engine until all elements are updated. In Click, there is a hardcoded delay of 1000 ms in this polling. This can be easily reduced, albeit with a change in the core Click code.

Chapter 4

Deep Packet Inspection as a Service

4.1 System Overview

This section details the underlying architecture that supports DPI as a service. The main idea is to insert the DPI service into the NF chain prior to any NF that requires DPI. The DPI service scans the packet and logs all detected patterns as meta-data attached to the packet. As the packet is forwarded, each NF on its route retrieves the DPI scan results instead of performing the costly DPI task itself. We assume an SDN environment with a traffic steering application (TSA) (e.g., SIMPLE [100]) that attaches policy chains to packets and routes the packets appropriately across the network. Naturally, our solution negotiates with the TSA, so that policy chains are changed to include DPI as a service (see Figure 1.2).

4.1.1 The DPI Controller

DPI service scalability is important, since DPI is considered a bottleneck for many types of NFs. Therefore, we envision that DPI service instances will be deployed across the network. The DPI controller is a logically centralized entity whose role is to manage the DPI process across the network and to communicate both with the SDN controller and with the TSA to realize the appropriate data plane actions. Logically, the DPI controller resides at the SDN application layer, on top of the SDN controller, as in Figure 4.1.

Two kinds of procedures take place between the DPI controller and the NFs, namely registration and pattern set management. The first task of the DPI controller is to register NFs that use its service. Communication between the DPI controller and NFs is performed using JSON messages sent over a direct (possibly secure) communication channel.

Figure 4.1: System illustration. The DPI controller abstracts the DPI process to other network elements and controls DPI service instances across the network. Packets flow through the network as dictated by policy chains (e.g., the policy chain L2L4_FW–DPI–IDS is realized by the middlebox chain L2L4_FW–DPI3–IDS1, traversing the corresponding physical sequence of switches).

Specifically, an NF registers itself with the DPI service using a registration message. The DPI controller address and the NF's unique ID and name are preconfigured (we have not deployed any bootstrap procedures at the current stage). An NF may inherit the pattern set of an already registered NF. An NF may state that the DPI service it requires should maintain its state across the packet boundaries of a flow, or that it operates in a read-only mode, in which it performs no actions on the packet itself; such an NF requires only the pattern matching results, and unnecessary routing of the packet itself can be avoided. An IDS is an example of a read-only NF, as opposed to an IPS, which performs actions on the packets.

Abstractly, NFs operate by rules that contain actions, and conditions that should be satisfied to activate the actions. Some of the conditions are based on patterns in the packet's content. The DPI service's responsibility is only to indicate appearances of patterns; resolving the logic behind a condition and performing the action itself is the NF's responsibility. Patterns are added to and removed from the DPI controller using dedicated messages from NFs to the controller.
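For illustration, the registration and pattern-management messages might look as follows (shown as Python literals; the field names are indicative only):

# Illustrative registration and pattern-management messages sent by an NF
# to the DPI controller (field names are indicative only).
register = {
    "type": "Register",
    "nf_id": "ids-1",
    "name": "sample-ids",
    "read_only": True,              # IDS: wants match results, not the packets
    "stateful": True,               # keep DPI state across packets of a flow
    "inherit_patterns_from": None,  # or the ID of an already registered NF
}
add_pattern = {
    "type": "AddPattern",
    "nf_id": "ids-1",
    "rule_id": 2003,                # the NF's own rule identifier
    "pattern": "cmd.exe",
    "is_regex": False,
}
remove_pattern = {"type": "RemovePattern", "nf_id": "ids-1", "rule_id": 2003}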

The DPI controller maintains a global pattern set with its own internal IDs. If two NFs register the same pattern (since each of them has a rule that depends on this pattern), it keeps track of the rule IDs reported by each NF and associates them with its internal ID. Accordingly, when a pattern removal request is received, the DPI controller removes the requesting NF's reference to the corresponding pattern; only if no other NF refers to that pattern is it removed (see the reference-counting sketch at the end of this subsection).

One concern is the traffic incurred by transmitting the pattern sets. However, as opposed to DPI DFAs, which are large, the pattern sets themselves are compact: recent versions of pattern sets such as Bro or L7-Filter are 12KB and 14KB, respectively. Larger pattern sets such as Snort or ClamAV are 2MB and 5MB, respectively. Still, if the patterns are compressed, their size is no more than two megabytes (55KB and 2MB, respectively). The construction of the data structure that represents the patterns is the responsibility of the DPI instance, and therefore does not involve communication over the network.

The DPI controller also receives from the TSA the relevant policy chains (namely, all the sequences of NF types a packet should traverse). It assigns each policy chain a unique identifier that is used later by the DPI service instances to indicate which pattern matching should be performed. Usually, the TSA pushes some VLAN or MPLS tag in front of the packet to easily steer it over the network [100]. DPI service instances can then read these tags in order to identify the set of patterns a packet should be matched against. In case this tag is not available, the DPI controller can push such a tag, for example using an OpenFlow directive.

Finally, the DPI controller is also responsible for initializing DPI service instances (see Section 4.2.1), for the deployment of DPI service instances across the network (see Section 4.1.3), and for advanced features that require a network-wide view (e.g., the attack mitigation described in Section 4.1.4).
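The controller-side bookkeeping for shared patterns amounts to reference counting. The sketch below illustrates the idea; it is a simplified model, not our actual controller code.

# Illustrative reference counting of shared patterns in the DPI controller.
class PatternRegistry:
    def __init__(self):
        self.next_id = 0
        self.by_pattern = {}      # pattern -> internal ID
        self.refs = {}            # internal ID -> {(nf_id, rule_id), ...}

    def add(self, nf_id, rule_id, pattern):
        pid = self.by_pattern.get(pattern)
        if pid is None:           # first NF to register this pattern
            pid = self.by_pattern[pattern] = self.next_id
            self.refs[pid] = set()
            self.next_id += 1
        self.refs[pid].add((nf_id, rule_id))
        return pid

    def remove(self, nf_id, rule_id, pattern):
        pid = self.by_pattern[pattern]
        self.refs[pid].discard((nf_id, rule_id))
        if not self.refs[pid]:    # no NF refers to this pattern anymore
            del self.refs[pid], self.by_pattern[pattern]

With this bookkeeping, a pattern registered by several NFs is compiled into the DPI engine only once, and a removal request from one NF never affects the matching performed on behalf of another.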

4.1.2 Passing Pattern Matching Results

Passing the pattern matching results to the NFs should take into account the following three considerations. First, it should be oblivious to the switches and not interfere with forwarding the packet through the chain of NFs and then to its destination. Second, the meta-data is of variable size, as the number of matches varies and is not known in advance. Third, the process should be oblivious to the NFs (and hosts) that are not aware of the DPI service. With these considerations in mind, we suggest three solutions that may suit different network conditions:

- Adding the match result information as an additional layer of information prior to the packet's payload. This allows maximal flexibility and the best performance. Publicly available frameworks such as Network Service Header (NSH) [101] and Cisco's vPath [105] may be used to encapsulate match data, also in an SDN setting [69]. Several commercial vendors support this method in service chain scenarios (e.g., Qosmos [42]). The downside of this approach is that NFs on the service chain that refer to the payload must be aware of this additional layer of information. However, if all NFs that use the DPI service are grouped and placed right after the DPI service instance in the service chain, the last NF can simply remove this layer and forward the original packet.

- An option that does not require reordering of service chains relies on flexible pushing and pulling of tags (e.g., MPLS labels, VLAN tags, PBB tags). This method is supported in current OpenFlow-based SDN networks [53]. A similar alternative is to use the FlowTags mechanism [48]. The downside of the tagging option is that it might be messy, as each matching result may require several such tags, which in turn must not collide with other tags used in the system.

- When the NFs on the service chain are all in read-only mode, i.e., they require only the DPI results rather than the packet itself, it may be appealing to send only the match results, using a dedicated packet, without the packet itself. As most packets do not contain matches at all, this option may dramatically reduce the traffic load over the NF service chain. For example, in Big Switch Networks' Big Tap [88] fabric, traffic is tapped from production networks to a separate monitoring network, where monitoring is done while the original packet is forwarded in the production network regardless of the monitoring results.

In all three options, one may use a single bit in the header to mark whether any pattern was matched. Specifically, a packet with no matches is always forwarded as-is, without any modification.
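Whichever of the three transports is used, the meta-data itself can be kept compact. The following sketch shows one plausible encoding, not a format mandated by the service: a one-byte match count followed by fixed-size (internal pattern ID, byte offset) records.

import struct

# One plausible, compact encoding of DPI match results as packet meta-data:
# a one-byte match count followed by (internal pattern ID, byte offset)
# pairs. A count of zero lets NFs skip parsing entirely.
def encode_matches(matches):
    """matches: list of (pattern_id, offset) pairs, at most 255."""
    buf = struct.pack("!B", len(matches))
    for pattern_id, offset in matches:
        buf += struct.pack("!IH", pattern_id, offset)
    return buf

def decode_matches(buf):
    (count,) = struct.unpack_from("!B", buf, 0)
    return [struct.unpack_from("!IH", buf, 1 + 6 * i) for i in range(count)]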

As our experimental environment is based on Mininet over OpenFlow 1.0, which supports neither NSH nor MPLS, our implementation passes the matching results using dedicated packets.

4.1.3 Deployment of DPI Service Instances

The DPI controller abstracts the DPI service for the TSA, the SDN controller, and the NFs. Hence, one of its most important tasks is to deploy the DPI instances across the network. There may be many considerations for such a deployment, and in this section we discuss only a few.

First, we emphasize that not all DPI instances need to be the same. Thus, a common deployment choice is to group together similar policy chains and to deploy instances that support only one group, rather than all the policy chains in the system. The DPI controller then instructs the TSA to send the traffic to the right instance. Alternatively, one might group the NF types by the traffic they inspect. For example, pattern sets that correspond to HTTP traffic may be allocated to some DPI service instances, while pattern sets that correspond to FTP traffic are allocated to others.

Additionally, the DPI controller should manage the resources of the DPI instances, so that no instance is overwhelmed by traffic and, as a result, performs poorly. Thus, the DPI controller should collect performance metrics from the working DPI instances and may decide to allocate more instances, to remove instances, or to migrate flows between instances, exactly in the same manner as suggested in [102]. Notice that, in general, performing such operations on the DPI service instances rather than on the NFs themselves is easier, as most of the flow's state is typically kept within the NF, while the DPI instance keeps only the current DFA state and an offset within the packet (flow migration might still require some packet buffering at the source instance until the process is completed).

Finally, we note that allocation, de-allocation, and migration affect the way packets are forwarded in the network. Thus, the DPI controller should collaborate with the TSA (and the SDN controller) to realize the changes, taking into account other network considerations (such as bandwidth and delay). The ability to dynamically control the DPI service instances and to scale out gives the DPI controller great flexibility, which can be used for powerful operations; Section 4.1.4 shows how this ability is used to enhance the robustness and performance of the DPI service.
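As a concrete illustration of the metric-driven scaling decisions described above, consider the following Python sketch of one iteration of such a control loop. The utilization thresholds and the allocate/deallocate/migrate primitives are hypothetical placeholders for whatever the orchestration layer and the TSA actually provide.

    SCALE_OUT_UTIL = 0.85   # assumed thresholds; tune per deployment
    SCALE_IN_UTIL = 0.30

    def rebalance(instances, allocate, deallocate, migrate_flows):
        """One iteration of a naive DPI-instance scaling loop (illustrative)."""
        for inst in list(instances):
            util = inst.utilization()
            if util > SCALE_OUT_UTIL:
                # Overloaded: spin up a sibling serving the same pattern group
                # and move about half of the flows to it. Migration requires
                # TSA cooperation (forwarding rules change) and may involve
                # buffering packets at the source until it completes.
                sibling = allocate(inst.pattern_group)
                instances.append(sibling)
                migrate_flows(inst, sibling, fraction=0.5)
            elif util < SCALE_IN_UTIL and len(instances) > 1:
                # Nearly idle: drain all flows to the least-loaded sibling
                # and release this instance.
                survivor = min((i for i in instances if i is not inst),
                               key=lambda i: i.utilization())
                migrate_flows(inst, survivor, fraction=1.0)
                instances.remove(inst)
                deallocate(inst)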

4.1.4 Enhancing Robustness and Security

In Chapter 5 of this dissertation we present the MCA² system, which mitigates attacks on DPI engines by deploying several copies of the DPI engine over multiple cores of the same machine. The key operation of MCA² is to detect and isolate the heavy packets that cause the degraded performance, and to divert them to a dedicated set of cores. Moreover, the dedicated cores may run a different DPI implementation that is more suitable for handling this kind of traffic.

MCA² can be implemented as-is in each DPI service instance, provided that it runs on a multi-core machine. In addition, our architecture may implement MCA² while scaling out to many DPI service instances. As in the original MCA² design, each DPI service instance should perform ongoing monitoring and export telemetries that might indicate attack attempts. In the MCA² design, these telemetries are sent to a central stress-monitor entity; here, the DPI controller, described in Section 4.1.1, takes over this role. This is illustrated in Figure 4.2: under normal traffic, all DPI service instances work regularly. Whenever the DPI controller detects an attack on one of the instances, it designates some of the instances as dedicated and migrates the heavy flows, which are suspected to be malicious, to those dedicated DPI instances (these instances might also use a different DPI algorithm that is tailored for heavy traffic). Flow migration is performed as described in Section 4.1.3, and requires close cooperation with the traffic steering application. Moreover, dedicated DPI instances can be dynamically allocated as an attack becomes more intense, or deallocated as its significance decreases. See Chapter 5 for details on the possible attacks and their corresponding detection mechanisms.

[Figure 4.2: MCA² system design for the virtual DPI environment. The figure shows the traffic steering application and the DPI controller above the regular DPI service instances (#1-#8) and the dedicated DPI service instances (#9, #10), connected by data flows, telemetries/controller directives, and flow migration.]
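A minimal sketch of the controller-side reaction, in Python. The telemetry fields and the is_heavy predicate are assumptions standing in for the detection mechanisms detailed in Chapter 5; allocation and flow migration are the primitives of Section 4.1.3.

    def on_telemetry(controller, instance, report):
        """React to a stress report from a DPI service instance (illustrative)."""
        if not report.under_stress:
            return
        # Ensure at least one dedicated instance exists; it may run a DPI
        # algorithm that is tailored for heavy (suspected malicious) traffic.
        if not controller.dedicated:
            controller.dedicated.append(controller.allocate(dedicated=True))
        # Divert heavy flows away from the regular instances, in cooperation
        # with the traffic steering application.
        for flow in report.flows:
            if controller.is_heavy(flow):
                target = min(controller.dedicated,
                             key=lambda d: d.utilization())
                controller.migrate(flow, src=instance, dst=target)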

4.2 DPI Service Instance Implementation

This section describes the implementation of a DPI service instance. At the core of the implementation, we present a virtual DPI algorithm that handles multiple pattern sets. We first focus on string matching and then extend it to handle regular expressions.

4.2.1 Initialization

We first show how to combine multiple pattern sets, originating from different NFs, such that each packet is scanned only once. Figure 4.3 illustrates two Aho-Corasick (AC) automata for two such pattern sets. Each NF type has a unique identifier, and it registers its own pattern set with the DPI controller (see details in Section 4.1). As the DPI controller is a logically-centralized entity that allocates the identifiers, we may assume the identifiers are sequential numbers in {1, ..., n}, where n is the number of NF types registered with the DPI service. Let P_i be the pattern set of NF type i.

Upon instantiation, the DPI controller passes to the DPI instance the pattern sets and the corresponding NF identifiers. Along with these sets, the DPI controller may pass additional information, such as a stopping condition for each NF (namely, how deep into the L7 payload the DPI instance should look; this is useful, for example, when NFs only care about specific application-layer headers of fixed or bounded length), or whether the NF is stateless (scans each packet separately) or stateful (considers the entire flow, and therefore should carry the state of the scan between successive packets). Moreover, the DPI controller passes the mapping between policy chain identifiers and the corresponding NF identifiers in
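To make the combined construction concrete, here is a compact Aho-Corasick sketch in Python. It builds a single automaton over the union of the registered pattern sets and tags every accepting state with the identifiers of the NFs that registered the matched pattern, so that a single scan can serve an entire policy chain. This is a simplified illustration, not the dissertation's actual engine.

    from collections import deque

    def build_combined_ac(pattern_sets):
        """pattern_sets maps an NF identifier to its byte-string patterns P_i."""
        goto, fail, out = [{}], [0], [set()]       # state 0 is the root
        for nf_id, patterns in pattern_sets.items():
            for pat in patterns:
                s = 0
                for ch in pat:                     # insert pattern into the trie
                    if ch not in goto[s]:
                        goto.append({}); fail.append(0); out.append(set())
                        goto[s][ch] = len(goto) - 1
                    s = goto[s][ch]
                out[s].add((nf_id, pat))           # tag accepting state with NF ID
        queue = deque(goto[0].values())            # depth-1 states fail to root
        while queue:                               # BFS computes failure links
            s = queue.popleft()
            for ch, t in goto[s].items():
                f = fail[s]
                while f and ch not in goto[f]:
                    f = fail[f]
                fail[t] = goto[f].get(ch, 0)
                out[t] |= out[fail[t]]             # inherit matches of suffixes
                queue.append(t)
        return goto, fail, out

    def scan(payload, goto, fail, out, chain_nfs):
        """Single pass over a payload; report only matches of NFs on the chain."""
        s, matches = 0, []
        for i, ch in enumerate(payload):
            while s and ch not in goto[s]:
                s = fail[s]
            s = goto[s].get(ch, 0)
            matches += [(nf, pat, i) for nf, pat in out[s] if nf in chain_nfs]
        return matches

For example, build_combined_ac({1: [b"attack"], 2: [b"tack"]}) yields one automaton in which scanning b"attack" reports, at the final byte, a match of "attack" for NF 1 and, via the failure links, a match of "tack" for NF 2, provided both NFs are on the packet's policy chain.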
