Connecting NetOpen Nodes for NetOpen Resource Aggregate

Proceedings of the Asia-Pacific Advanced Network 2012, v. 34, p. 11-16. http://dx.doi.org/10.7125/apan.34.2  ISSN 2227-3026

Connecting NetOpen Nodes for NetOpen Resource Aggregate

Namgon Kim and JongWon Kim
Networked Computing Systems Lab., School of Information and Communications, Gwangju Institute of Science and Technology (GIST), Gwangju, 500-712, Korea
E-Mails: {ngkim, jongwon}@nm.gist.ac.kr
* Tel.: +82-(0)62-715-2219; Fax: +82-(0)62-715-3155

Abstract: NetOpen RA (Resource Aggregate) is a programmable network substrate containing resources that support OpenFlow-based programmable networking. Many experiments can be performed by sharing the resources built by connecting NetOpen nodes. Because today's Internet does not provide the layer-2 (L2) data-plane connectivity required for OpenFlow-based programmable networking, tunneling techniques are essential for building a NetOpen RA from NetOpen nodes. In this paper, we present several tunneling techniques and evaluate the performance of the connections established with each of them. We also report our demonstration experiences with the NetOpen RA.

Keywords: NetOpen; Resource Aggregate; OpenFlow; Networking service; Tunneling.

1. Introduction

With the introduction of Software-Defined Networking (SDN) [1], network innovations are enabled by separating the control plane from the data-path processing. OpenFlow [2], an open protocol for SDN, provides a remotely programmable interface that allows network switches to be controlled from a logically centralized location. It is now easy to set up a programmable network switch with OpenFlow. However, because today's Internet is not directly compatible with OpenFlow-based programmable networking, tunneling techniques are required to build programmable network substrates out of geographically distributed OpenFlow-enabled nodes.

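To make the "logically centralized control" idea concrete, the following minimal sketch shows an OpenFlow 1.0 controller application written with the Ryu framework. Ryu is not part of the paper's setup (the authors use FlowVisor and their own controllers); the class name and the simple flooding behavior are purely illustrative.

```python
# Minimal OpenFlow 1.0 controller sketch using the Ryu framework.
# Ryu is NOT the controller used in the paper; this only illustrates how a
# remote controller reacts to switch events from a logically central point.
from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import MAIN_DISPATCHER, set_ev_cls
from ryu.ofproto import ofproto_v1_0


class SimpleHub(app_manager.RyuApp):
    """Floods every unmatched packet: the simplest possible data-plane policy."""
    OFP_VERSIONS = [ofproto_v1_0.OFP_VERSION]

    @set_ev_cls(ofp_event.EventOFPPacketIn, MAIN_DISPATCHER)
    def packet_in_handler(self, ev):
        msg = ev.msg                      # packet-in from an OpenFlow switch
        datapath = msg.datapath           # the switch that sent it
        ofproto = datapath.ofproto
        parser = datapath.ofproto_parser

        # Tell the switch to flood the packet out of all ports except the ingress.
        actions = [parser.OFPActionOutput(ofproto.OFPP_FLOOD)]
        out = parser.OFPPacketOut(datapath=datapath,
                                  buffer_id=msg.buffer_id,
                                  in_port=msg.in_port,
                                  actions=actions)
        datapath.send_msg(out)
```

Run with ryu-manager and point an OpenFlow 1.0 switch at the controller; real deployments replace the flooding policy with per-flow forwarding decisions.
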
In this paper, we present how we build a NetOpen RA (Resource Aggregate) by connecting NetOpen nodes that support OpenFlow-based programmable networking. We first present the NetOpen node, a container for computing/networking resources with OpenFlow-based programmability. To build a NetOpen RA from NetOpen nodes, we then present several tunneling techniques that keep layer-2 (L2) data-plane connectivity among the NetOpen nodes. We verify the NetOpen RA by evaluating the performance of the tunneled connections and by presenting our demonstration experience with it.

2. NetOpen Node

We envision the need for programmable nodes that contain various computing/networking resources (along with media-processing capabilities). NetOpen nodes are a networking-centric milestone toward such nodes. A NetOpen node serves as a network switch that is programmable (and partially virtualized via flowspace) through the OpenFlow protocol [2].

Figure 1. NetOpen node supporting data-plane programmability: a Service layer (service code and user application modules) on top of a Platform layer (OS and libraries offering OpenFlow v1.0 in user space, in kernel space, on NetFPGA, and Click with an OpenFlow v1.0 element) on top of an Infrastructure layer (CPU, Ethernet, Wi-Fi, and NetFPGA v2.1.2 hardware).

Basically, a NetOpen node is a container for computing/networking resources such as a general-purpose CPU and NICs (Network Interface Cards), including Ethernet, Wi-Fi, and NetFPGA cards that support hardware-accelerated programmable networking. With an OpenFlow-enabled NetOpen node, we can deeply program its networking resources. By linking Click [3], a software-based extensible modular router, with OpenFlow, we can support data-plane programmability and customized per-packet processing through combinations of Click elements.

3. NetOpen RA

NetOpen RA (Resource Aggregate) is a programmable substrate containing computing/networking resources, built by connecting a set of NetOpen nodes. There are two requirements for building a NetOpen RA. First, to enable OpenFlow-based network programmability, the participating NetOpen nodes should have L2 data-plane connectivity with each other. Second, to be shared among multiple experimenters, a NetOpen RA should support flowspace-level network virtualization.

To keep L2 data-plane connectivity among NetOpen nodes that are connected through layer-3 (L3) networks, we selectively use three tunneling techniques. We can use 1) software-based or 2) hardware-accelerated EoIP (Ethernet-over-IP) tunneling solutions to create an EoIP tunnel between nodes, or we can 3) set up VLANs along the path between NetOpen nodes. To carry multiple VLANs over a single VLAN, we use 802.1Q tunneling, which expands the VLAN space through a VLAN-in-VLAN hierarchy by tagging already-tagged packets. The software-based EoIP tunneling solution is the easiest to deploy, but its networking performance is limited. The hardware-accelerated EoIP tunneling solution gives better networking performance, but both endpoints must be equipped with it, which can be costly. Although there is an open-source implementation on NetFPGA, it has limited port scalability and its stability has not yet been confirmed. Configuring VLANs is therefore the best option from a stability and performance perspective; however, it takes time to set up, since it must be configured manually by the network operators managing the networks between the NetOpen RA and BEN. For connecting NetOpen nodes, we selectively use one of the three methods according to their network configurations.

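The paper does not name the specific software EoIP implementation. As an illustration only, the sketch below brings up an equivalent software L2-over-L3 tunnel on Linux with a GRE TAP (gretap) device and attaches it to an Open vSwitch bridge; the interface and bridge names, the IP addresses, and the use of gretap/Open vSwitch are assumptions rather than the authors' setup.

```python
# Illustrative sketch: software L2-over-L3 tunnel between two NetOpen-like nodes.
# gretap + Open vSwitch are assumptions; the paper's actual software/hardware
# EoIP solutions are not specified here.
import subprocess

LOCAL_IP = "203.0.113.10"    # placeholder L3 address of this node
REMOTE_IP = "203.0.113.20"   # placeholder L3 address of the peer node
TUNNEL_IF = "netopen-gre0"   # hypothetical tunnel interface name
BRIDGE = "br-netopen"        # hypothetical OpenFlow-enabled OVS bridge


def run(cmd):
    """Run a shell command and fail loudly if it returns non-zero."""
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)


# 1) Create an Ethernet-over-IP (GRE TAP) tunnel toward the remote node.
run(["ip", "link", "add", TUNNEL_IF, "type", "gretap",
     "local", LOCAL_IP, "remote", REMOTE_IP])
run(["ip", "link", "set", TUNNEL_IF, "up"])

# 2) Attach the tunnel endpoint to the OpenFlow-controlled bridge so that the
#    remote node appears as an L2 neighbor on the programmable data plane.
run(["ovs-vsctl", "add-port", BRIDGE, TUNNEL_IF])
```

The same L2 reachability could instead come from the hardware-accelerated NetOpen tunneling or from an operator-provisioned VLAN, as discussed above.
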
To support flowspace-level network virtualization, we use FlowVisor [4], which lets multiple experimenters share the same hardware forwarding plane. A flowspace is defined as a subspace of the entire geometric space of possible packet headers. Based on the flowspaces allocated to experimenters, FlowVisor relays OpenFlow messages between the experimenters' control nodes and the NetOpen nodes.

Figure 2. NetOpen RA: NetOpen nodes at POSTECH (Pohang), CNU (Daejeon), KHU (Suwon), and GIST (Gwangju), interconnected over the KOREN and KREONET research networks, with end hosts (including an HD-camera host and a traffic generator), a Networked Tiled Display, OpenFlow switches (production switches and NetFPGA-based switches), an OpenFlow controller, and FlowVisor.

We configure a NetOpen RA by connecting four participating sites (CNU, POSTECH, KHU, and GIST) equipped with NetOpen nodes, as depicted in Figure 2. Since the participating sites have L3 network connections via research networks such as KOREN and KREONET, we use the NetFPGA-based hardware-accelerated EoIP (Ethernet-over-IP) tunneling solution (called NetOpen tunneling hereafter) to enable OpenFlow-based network connectivity among the NetOpen nodes. With NetOpen tunneling, we can dynamically configure links between sites (depicted as dotted lines) according to networking needs. We also enable static L2-VLAN-based connections between GIST and CNU (depicted as solid lines) through KREONET. The NetOpen RA also supports flowspace-level network virtualization by connecting all of its NetOpen nodes to FlowVisor.

4. Verification

4.1. Throughput Result

Figure 3. TCP throughput of tunneled connections.

To verify the performance of tunneled connections among NetOpen nodes, we measure the TCP throughput of the tunneled connections with iperf, a well-known network performance measurement tool. We place an iperf server at GIST and run an iperf client at each institute. Figure 3 shows the measurement results. For CNU, the throughput over VLAN is higher than over NetOpen tunneling. This is because the encapsulation overhead of NetOpen tunneling (42 bytes) is larger than that of VLAN (4 bytes), and because NetOpen tunneling does not recover lost packets, as it uses UDP (User Datagram Protocol) transport. Note that the NetOpen tunneling connection of CNU runs over KOREN with a 3 ms RTT (round-trip time), while the VLAN connection of CNU goes through KREONET with a 3.33 ms RTT, which affects the time each connection takes to reach its maximum throughput.

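As a rough check on the encapsulation argument above, the following sketch compares the per-frame overhead of the two options using the figures reported in the text; the 1500-byte Ethernet payload is an assumption, since the paper does not state the frame size used.

```python
# Back-of-the-envelope comparison of encapsulation overhead, using the
# per-frame overheads reported in the paper (42 B for NetOpen tunneling,
# 4 B for a VLAN tag) and an assumed 1500-byte Ethernet payload.
FRAME_PAYLOAD = 1500  # assumed MTU-sized Ethernet payload in bytes

for name, overhead in [("VLAN (802.1Q tag)", 4),
                       ("NetOpen EoIP tunneling", 42)]:
    efficiency = FRAME_PAYLOAD / (FRAME_PAYLOAD + overhead)
    print(f"{name:24s} overhead={overhead:2d} B  "
          f"wire efficiency={efficiency * 100:.2f}%")

# Expected output (approximately):
#   VLAN (802.1Q tag)        overhead= 4 B  wire efficiency=99.73%
#   NetOpen EoIP tunneling   overhead=42 B  wire efficiency=97.28%
```

The few-percent difference in wire efficiency, combined with UDP's lack of retransmission, is consistent with the VLAN connection achieving the higher TCP throughput in Figure 3.
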
4.2. OF@Korea: Linking OpenFlow-enabled Resources in Korea

Figure 4. QoS control experiment over OF@Korea: OpenFlow-enabled resources from OF@FIRST, OF@KOREN, and OF@KREONET across sites including GIST, CNU, POSTECH, KHU, SKKU, KAIST, KPU, JNU, PKU, NIA, ETRI, and Stanford (USA), delivering live HD video and VoD flows to the Networked Tiled Display at GIST under OpenFlow controllers and FlowVisor over KOREN and KREONET.

With the NetOpen RA, we build OF@Korea, an integrated OpenFlow testbed, by linking OpenFlow-enabled RAs managed by several research institutes and universities in Korea, as depicted in Figure 4. The goal of OF@Korea is to give experimenters an environment in which they can run their services through a portal-like interface on the integrated testbed. Three major entities collaborate in this work, each with its own OpenFlow-enabled network substrate. First, the OF@FIRST (Future Internet Research for Sustainable Testbed) RA came out of the FIRST project, jointly carried out by ETRI (Electronics and Telecommunications Research Institute) and a university team (FIRST@PC: GIST/CNU/POSTECH/KHU). Inside the OF@FIRST RA, the university team is prototyping PC-based virtualized computing/networking substrates using NetFPGA/OpenFlow, while ETRI is contributing an OpenFlow-enabled, NP-accelerated, ATCA-based virtualization platform, denoted as the FIRST platform. The NetOpen RA participates as part of OF@FIRST. Next, the KREONET (Korea Research Environment Open NETwork) and KOREN (Korea Advanced Research Network) RAs provide the OF@KREONET and OF@KOREN resources, respectively, together with domestic R&D network connections for this integration.

In this demonstration, we show a QoS (Quality of Service) control experiment that assists the transport of flows over the integrated OpenFlow-enabled resources of OF@Korea, starting from OF@FIRST (denoted as the NetOpen RA). As shown in Figure 4, we display multiple videos from all distributed video senders on the Networked Tiled Display (NeTD) at GIST while meeting the QoS requirements of each flow via the proposed combination of NetOpen QoS-related software. Initially, we provide shortest-path connections for all video flows, connecting them over minimum-hop paths. Then, by adding disturbance traffic flows, we emulate possible changes in the underlying OpenFlow-enabled network, and at the same time we turn on the NetOpen QoS control. If an invoked change breaks the QoS requirements of some flows, the NetOpen QoS control automatically changes the paths of the QoS-affected flows to exploit alternative flow connections.

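The rerouting behavior can be illustrated with a small graph sketch. The topology, the congested link, and the simple "drop the congested link and recompute" policy below are illustrative assumptions; they do not reproduce the actual NetOpen QoS control software.

```python
# Illustrative sketch of minimum-hop path selection and QoS-driven rerouting.
# The topology and the rerouting policy are assumptions for illustration only.
import networkx as nx

# Hypothetical site-level topology of OpenFlow-enabled nodes.
G = nx.Graph()
G.add_edges_from([
    ("GIST", "KHU"), ("KHU", "CNU"), ("CNU", "GIST"), ("CNU", "POSTECH"),
])

src, dst = "KHU", "GIST"

# 1) Initial setup: connect the video flow over a minimum-hop path.
primary = nx.shortest_path(G, src, dst)
print("primary path:", primary)          # ['KHU', 'GIST']

# 2) Disturbance traffic congests one link and breaks the flow's QoS.
congested = ("KHU", "GIST")

# 3) QoS control reacts: recompute a path that avoids the congested link.
G_alt = G.copy()
G_alt.remove_edge(*congested)
backup = nx.shortest_path(G_alt, src, dst)
print("rerouted path:", backup)          # ['KHU', 'CNU', 'GIST']
```

In the actual experiment, the decision is driven by measured per-flow QoS rather than a manually flagged link, and the new path is installed on the OpenFlow switches by the controller.
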
5. Conclusions

In this paper, we present how we build a NetOpen RA by connecting NetOpen nodes. We show three tunneling techniques for connecting NetOpen nodes over the L3-based Internet. We verify the NetOpen RA by measuring the throughput of the tunneled connections, and we present our demonstration with the NetOpen RA integrated with other OpenFlow-enabled RAs.

Acknowledgements

This research was one of the KOREN projects supported in part by the National Information Society Agency (11-951-00-001) and in part by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MEST) (No. 2011-0027558).

References

1. N. McKeown. Keynote talk: Software-defined networking. In Proc. of IEEE Conference on Computer Communications (INFOCOM 2009), Apr. 2009.
2. OpenFlow Switch Specification, Version 1.0.0 (Wire Protocol 0x01), http://www.openflow.org/documents/openflow-spec-v1.0.0.pdf, Dec. 2009.
3. E. Kohler; R. Morris; B. Chen; J. Jannotti; M. F. Kaashoek. The Click modular router. ACM Transactions on Computer Systems, 2000; Volume 18, Number 3, pp. 263-297.
4. R. Sherwood; M. Chan; G. Gibb; N. Handigol; T.-Y. Huang; P. Kazemian; M. Kobayashi; D. Underhill; K.-K. Yap; G. Appenzeller; N. McKeown. Carving research slices out of your production networks with OpenFlow. In Proc. of ACM SIGCOMM 2009 (Demo), Aug. 2009.

© 2012 by the authors; licensee Asia Pacific Advanced Network. This article is an open-access article distributed under the terms and conditions of the Creative Commons Attribution license (http://creativecommons.org/licenses/by/3.0/).