AmLight supports wide-area network demonstrations at Supercomputing 2013 (SC13)


PRESS RELEASE

Media Contact: Heidi Alvarez, Director
Center for Internet Augmented Research and Assessment (CIARA)
Florida International University
305-348-2006
heidi@fiu.edu

AmLight supports wide-area network demonstrations at Supercomputing 2013 (SC13)

Miami, Florida, February 3, 2014 - Florida International University's (FIU) AmLight project supported two long-haul network experiments that spanned the continents of South and North America. AmLight allocated up to 20 Gbps of bandwidth capacity between Miami and São Paulo. With support from Florida LambdaRail (FLR), National LambdaRail (NLR) and Internet2, the 20 Gbps of bandwidth capacity was extended from Miami to the SC13 show floor in Denver, Colorado. Layer2 lightpaths were provisioned from São Paulo to the show floor in Denver, overlaid on the AmLight, FLR, NLR and Internet2 AL2S backbone networks.

The network diagram in Figure 1 illustrates the wide area and local area networks for High-Energy Physics (HEP) deployed by the Caltech team and partners, including the connections for the LHC CMS Tier2 Computing Centers in Rio de Janeiro (HEPGRID) and São Paulo (SPRACE), provided respectively by RNP and ANSP through the International Exchange Points in São Paulo (SouthernLight) and Miami (AMPATH), FLR, NLR and Internet2 AL2S.

Figure 1 - HEP Terabit WAN and LAN for the Caltech Booth at SC13

The wide area network shown in Figure 1 represents the collaboration of many geographically distributed research and education networks, and computation and storage facilities, to form a global testbed environment in which to conduct at-scale network experiments. The two experiments AmLight supported were (a) the GLIF Automated GOLE Network Services Interface (NSI) version 2.0 protocol demonstration and (b) OpenFlow-based Multipath Networks in the WAN. The following is a description of these experiments and their results.

GLIF Automated GOLE NSI version 2.0 protocol demonstration

Both the Americas Pathway (AMPATH) International Exchange Point in Miami, Florida, and the SouthernLight International Exchange Point in São Paulo participated in the Automated GLIF Open Lightpath Exchanges (AutoGOLE) trial at SC13. The Global Lambda Integrated Facility (GLIF) Automated GOLE pilot dynamically connects GLIF Open Lightpath Exchanges (GOLEs) and networks across the globe. The demonstration at SC13 was an inquiry-based experiment that aimed to observe how the Network Service Interface Connection Service (NSI CS 2.0) behaved in a global multi-domain environment. NSI is an architectural standard being developed as a cooperative project between the GLIF community and the Open Grid Forum (OGF), a standards development organization. This experiment involved setting up dynamic Layer2 services between 20 network domains on 4 continents to demonstrate real-world compatibility of version 2.0 of the NSI protocol. Pre-staging of the SC13 Denver demonstration took place during the annual GLIF meeting in October in Singapore.

Participating GOLEs, networks and partners during the GLIF and SC13 demonstrations were AIST (JP), SINET (JP), KDDI R&D Labs (JP), JGN-X (JP), KISTI (KR), SingAREN (SG), SouthernLight (BR), RNP (BR), AMPATH (US), ESnet (US), Internet2 (US), StarLight (US), PSNC (PL), NORDUnet (Scandinavia), GEANT (EU), GRnet (GR), CESNET (CZ), University of Amsterdam (NL), SURFnet (NL) and NetherLight (NL). Each GOLE ran the NSI version 2 protocol and replied to requests to establish Layer2 circuits dynamically. Three data plane demonstrations were successfully performed, one of which involved AMPATH and SouthernLight. An NSI aggregator was created to request the creation of dynamic circuits to all participating GOLEs, including AMPATH and SouthernLight.

The experiment installed a large video display on the SC13 exhibit show floor. Video sources worldwide used the Layer2 lightpaths created to each GOLE to stream video to a receiver at SC13 (Figure 2). Multiple domains installed VideoLAN Client (VLC) video-streaming sources that were used to stream videos to various booths on the SC13 show floor. RNP participated by installing a video player at its own facilities in Rio de Janeiro, Brazil. Building the experiment, coordinating a global deployment of the new NSI v2.0 protocol, and then testing its stability and scalability were the challenges of this demonstration at SC13.
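For readers unfamiliar with the NSI Connection Service, the sketch below illustrates, in simplified form, the reserve, reserveCommit and provision message sequence a requester agent sends to an aggregator to build a dynamic circuit of the kind used in this demonstration. The aggregator URL, STP identifiers and helper function are hypothetical; the real protocol is a SOAP/WSDL interface defined by the OGF, not the simplified Python calls shown here.

```python
# Conceptual sketch (not the OGF SOAP bindings) of the NSI CS 2.0 message
# sequence a requester agent sends to an aggregator to set up a dynamic
# Layer2 circuit. All names and endpoints below are hypothetical.

from dataclasses import dataclass


@dataclass
class ReservationRequest:
    source_stp: str     # Service Termination Point at the source GOLE
    dest_stp: str       # Service Termination Point at the destination GOLE
    capacity_mbps: int  # requested bandwidth
    start: str          # ISO-8601 start time
    end: str            # ISO-8601 end time


def send(aggregator_url: str, message: str, payload: dict) -> dict:
    """Placeholder for the SOAP call defined by the OGF NSI CS 2.0 WSDL."""
    print(f"-> {message} to {aggregator_url}: {payload}")
    return {"status": "ok", "connectionId": "demo-circuit-001"}


def build_circuit(aggregator_url: str, req: ReservationRequest) -> str:
    # 1. reserve: the aggregator recursively asks each child NSA to hold resources
    reply = send(aggregator_url, "reserve", req.__dict__)
    connection_id = reply["connectionId"]
    # 2. reserveCommit: confirm the held reservation across all domains
    send(aggregator_url, "reserveCommit", {"connectionId": connection_id})
    # 3. provision: activate the data plane so the Layer2 circuit carries traffic
    send(aggregator_url, "provision", {"connectionId": connection_id})
    return connection_id


if __name__ == "__main__":
    circuit = build_circuit(
        "https://aggregator.example.net/nsi",  # hypothetical aggregator NSA
        ReservationRequest(
            source_stp="urn:ogf:network:ampath.net:stp-1",        # illustrative STP names
            dest_stp="urn:ogf:network:southernlight.net:stp-1",
            capacity_mbps=1000,
            start="2013-11-19T11:00:00-05:00",
            end="2013-11-20T02:00:00-05:00",
        ),
    )
    print(f"Circuit {circuit} provisioned")
```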

Figure 2 - AutoGOLE Demo at SC13

Bandwidth tests and OpenFlow-based Multipath Networks in the WAN

The SPRACE/UNESP team in São Paulo, together with the HEPGRID/UERJ team in Rio de Janeiro, conducted a series of bandwidth tests to stress the network links between Brazil and Denver established during SC13. Two high-end Intel Xeon servers with two-port 10GbE network cards, installed at the ANSP POP in the NAP of Brazil (http://www.terremark.com/data-centers/americas/nap-brazil.aspx), were connected to two Brocade OpenFlow-enabled switches, an MLXe-4 and an XMR-8K. These servers were used to generate traffic between Brazil and the Colorado Convention Center in Denver, through the AmLight links (the connections between SouthernLight and AMPATH in Figure 3), with traffic generated using Caltech's Fast Data Transfer (FDT) tool (http://monalisa.cern.ch/fdt/). A server installed on the SC13 show floor with two 40G Ethernet NICs was configured as an FDT server, and the two servers shown in Figure 3, together with a set of servers installed at HEPGRID at Rio de Janeiro State University, were configured as FDT clients.
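As an illustration of how such transfers might be driven, the following sketch wraps the FDT command line in a small Python harness, with one server-side listener and one client-side push. Host names, file paths and the install location are hypothetical, and the flags shown should be checked against the FDT documentation before use; this is a sketch of the general workflow, not the exact configuration used at SC13.

```python
# Minimal sketch of driving an FDT transfer from Python.
# Hosts, paths and the jar location are hypothetical; consult the FDT
# documentation (http://monalisa.cern.ch/fdt/) for the authoritative
# command-line options.

import subprocess

FDT_JAR = "/opt/fdt/fdt.jar"               # assumed install location
SERVER = "sc13-showfloor.example.org"      # hypothetical FDT server at SC13


def start_server() -> subprocess.Popen:
    """Run on the show-floor server: listen for incoming FDT clients."""
    return subprocess.Popen(["java", "-jar", FDT_JAR])


def run_client(files: list[str], dest_dir: str = "/data/incoming") -> int:
    """Run on an FDT client in Brazil: push files to the show-floor server."""
    cmd = ["java", "-jar", FDT_JAR,
           "-c", SERVER,     # connect to the remote FDT server
           "-d", dest_dir,   # destination directory on the server
           *files]
    return subprocess.call(cmd)


if __name__ == "__main__":
    # Example: push a few large files to stress the 10G paths (paths illustrative).
    run_client(["/data/cms/sample-0.root", "/data/cms/sample-1.root"])
```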

Figure 3 - SPRACE / ANSP / AMPATH setup for OpenFlow tests at SC13

The SPRACE team also participated in the OpenFlow-based multipath experiment conducted by Caltech, which uses the SDN paradigm to investigate the use of multipath connectivity in wide-area networks. To this end, physics data was transferred over the network using OpenFlow link-layer multipath switching to efficiently utilize multiple network paths while keeping the topology loop-free. The multipath utilization is completely transparent to the end hosts of the network and is accomplished using the Caltech-developed OLiMPS OpenFlow controller, which resided at CERN in Switzerland. This methodology allows automated in-network load balancing of flows to raise network utilization and efficiency. Moreover, multiple network links can be arbitrarily added to a network to increase capacity and redundancy without the traditional challenges of Layer-2 link aggregation using LACP. During the demonstration we were able to establish perfectly balanced network traffic across the multiple links of the testbed in a configuration that would normally cause a network loop. A similar experiment, "OpenFlow-based Multipath in Wide Area Networks", was first conducted at the TERENA 2013 meeting in Maastricht, Netherlands.

The two Intel servers installed at the NAP of Brazil and connected to the Brocade switches were extensively exercised during these experiments. The SPRACE team also had the opportunity to conduct additional experiments using different OpenFlow controllers (POX, Ryu, and Floodlight, the controller on which Caltech's OLiMPS controller is based). In order to support the experiments' bandwidth requirements, two 10G circuits over diverse paths were made available from São Paulo to the Denver SC13 show floor, supported by AmLight, AMPATH, FLR, NLR and Internet2. The experiment was organized to use two VLAN ranges: one range used a dedicated NLR 10G wave and the other used AtlanticWave. Figure 4 shows the network traffic as captured by Caltech's monitoring tool MonALISA (http://monalisa.cern.ch/monalisa.html) during the bandwidth tests.
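To make the load-balancing idea concrete, the sketch below shows one simple way a controller could spread flows across parallel wide-area paths by hashing each flow's 4-tuple. It is not the OLiMPS algorithm, and the controller API is deliberately abstracted away; path names and addresses are illustrative only.

```python
# Simplified illustration of hash-based flow placement across parallel
# wide-area paths. This is NOT the OLiMPS controller; it only shows the
# general idea of transparently splitting flows over links that a
# spanning-tree topology would otherwise block.

import hashlib
from dataclasses import dataclass


@dataclass(frozen=True)
class Flow:
    src_ip: str
    dst_ip: str
    src_port: int
    dst_port: int


# Hypothetical parallel paths between São Paulo and the SC13 show floor.
PATHS = ["amlight-atlantic", "amlight-pacific"]


def pick_path(flow: Flow) -> str:
    """Hash the 4-tuple so every packet of a flow follows the same path,
    preserving per-flow ordering while balancing flows across the links."""
    key = f"{flow.src_ip}:{flow.src_port}-{flow.dst_ip}:{flow.dst_port}"
    digest = hashlib.sha256(key.encode()).digest()
    return PATHS[digest[0] % len(PATHS)]


def install_flow_rule(flow: Flow) -> None:
    """Stand-in for the controller call that would push an OpenFlow rule
    steering this flow onto the chosen path's output port."""
    print(f"{flow.src_ip}:{flow.src_port} -> {flow.dst_ip}:{flow.dst_port}"
          f" via {pick_path(flow)}")


if __name__ == "__main__":
    install_flow_rule(Flow("200.136.80.10", "140.221.10.5", 50000, 2811))
    install_flow_rule(Flow("200.136.80.11", "140.221.10.5", 50001, 2811))
```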

Figure 4 - Bandwidth tests from Rio de Janeiro and São Paulo to Denver during SC13

On the international links that connect São Paulo to Miami, one VLAN range was configured on the AmLight Atlantic link and the other on the AmLight Pacific link. The total available bandwidth was set to 20 Gbps. Using these two ranges, it was possible to perform the tests described above. The graphs and descriptions below characterize the traffic generated by the experiments.

Figure 5 - Dedicated Wave to SC13

As explained, the experiment was defined to use two VLAN ranges. The first range was configured to use the dedicated SC13 10G wave. Along this path, a set of experiments was conducted, starting at 11 AM EDT on November 19th and ending at 2 AM EDT on November 20th. As this wave was allocated for the exclusive use of SC13, all traffic in Figure 5 represents the bandwidth utilization of the OpenFlow Multipath experiments. It is possible to see up to 5.5 Gbps of incoming traffic (download traffic from Brazil's point of view) and up to 2.5 Gbps of outgoing traffic (traffic sent from Brazil to the SC13 show floor).

Figure 6 - Secondary VLAN range through AtlanticWave

The second VLAN range was configured to use the AtlanticWave link. This link was not a circuit dedicated to the SC13 experiment, but the network utilization is visible in Figure 6 (November 19th, from 13:00 to 20:00), with up to 5.5 Gbps of incoming traffic (traffic flowing from the SC13 show floor to Brazil) and around 3 Gbps of outgoing traffic (traffic flowing from Brazil to the SC13 show floor).

Figures 7 and 8 show all traffic flowing to and from Brazil over the international AmLight links between Miami and Brazil: Figure 7 shows the first VLAN range and Figure 8 the second VLAN range. Start and end times are indicated in the graphs to ease visualization.

Figure 7 - AmLight Pacific Link, first VLAN range

Figure 7 shows the same 5.5 Gbps of incoming traffic and 2.5 Gbps of outgoing traffic as on the SC13 dedicated wave (Figure 5).

Figure 8 - AmLight Atlantic Link, second VLAN range

Figure 8 shows not only the SC13 traffic but also AmLight production traffic. Nevertheless, it is possible to see that between 13:00 and 20:00 the traffic pattern changed, with the additional traffic generated by the FDT peers. The traffic that flowed over the AtlanticWave link (Figure 6) can be seen arriving on the AmLight Atlantic link, shown here in Figure 8.

About CIARA: Florida International University's Center for Internet Augmented Research and Assessment (CIARA), in the Division of IT, has developed an international, high-performance research connection point in Miami, Florida, called AMPATH (AMericasPATH; www.ampath.fiu.edu). AMPATH extends participation in science and engineering research and education to underrepresented groups in Latin America and the Caribbean through the use of high-performance network connections. AMPATH is home to the Americas Lightpaths (AmLight) high-performance network links connecting Latin America to the U.S., funded by the National Science Foundation (NSF), award #ACI-0963053, and the Academic Network of São Paulo (award #2003/13708-0). AmLight aims to enhance science research and education in the Americas by interconnecting key points of aggregation, providing operation of production infrastructure, engaging U.S. and western hemisphere science and engineering research and education communities, creating an open instrument for collaboration, and maximizing benefits for all investors.

About ANSP: The Academic Network of São Paulo (ANSP) provides connectivity to the top R&E institutions, facilities and researchers in the State of São Paulo, Brazil, including the University of São Paulo, the largest research university in South America. ANSP connects directly to AmLight in Miami at 20G. ANSP also provides connectivity to KyaTera, a 9-city dark-fiber-based optical network infrastructure linking 20 research institutions in the state, and to a number of special infrastructure projects such as GridUNESP, one of the largest computational clusters in Latin America, supporting interdisciplinary grid-based science.

About RNP: The Brazilian Education and Research Network (RNP), qualified as a Social Organization (OS) by the Brazilian government, is supervised by the Ministry of Science, Technology and Innovation (MCTI) through the inter-ministerial RNP program, which also includes the Ministries of Education (MEC), Health (MS) and Culture (MinC). The first Internet provider in Brazil with national coverage, RNP operates a high-performance nationwide network, with points of presence in all 26 states and the national capital, providing service to over 800 distinct locations. RNP's more than two million users make use of an advanced network infrastructure for communication, computation and experimentation, which contributes to the integration of the national system of Science and Technology, Higher Education, Health and Culture.