ESnet4: Networking for the Future of DOE Science


1 ESnet4: Networking for the Future of DOE Science. International ICFA Workshop on Digital Divide Issues for Global e-Science, October 25, 2007. Joe Metzger, Senior Engineer, Energy Sciences Network, Lawrence Berkeley National Laboratory. Networking for the Future of Science.

2 DOE's Office of Science: Enabling Large-Scale Science. The Office of Science (SC) is the single largest supporter of basic research in the physical sciences in the United States, providing more than 40 percent of total funding for the Nation's research programs in high-energy physics, nuclear physics, and fusion energy sciences. SC funds 25,000 PhDs and postdocs. A primary mission of SC's National Labs is to build and operate very large scientific instruments - particle accelerators, synchrotron light sources, very large supercomputers - that generate massive amounts of data and involve very large, distributed collaborations. ESnet - the Energy Sciences Network - is an SC program whose primary mission is to enable the large-scale science of the Office of Science, which depends on: sharing of massive amounts of data; supporting thousands of collaborators worldwide; distributed data processing; distributed data management; distributed simulation, visualization, and computational steering; and collaboration with the US and international research and education community. In addition to the National Labs, ESnet serves much of the rest of DOE, including NNSA - about 75,000-100,000 users in total.

3 Office of Science US Community Drives ESnet Design for Domestic Connectivity. [Map: institutions supported by SC, major user facilities, and DOE multiprogram, program-dedicated, and specific-mission laboratories, including LBNL, PNNL, Idaho National Laboratory, Ames Laboratory, ANL, FNAL, BNL, SLAC, PPPL, LLNL, General Atomics, Sandia, LANL, Thomas Jefferson National Accelerator Facility, ORNL, and NREL.]

4 ESnet as a Large-Scale Science Data Transport Network. A few hundred of the 25,000,000 hosts that the Labs connect to every month account for 50% of all ESnet traffic - which is all science data, primarily high energy physics, nuclear physics, and climate at this point. [Map: dots mark the R&E source or destination of each of ESnet's top 100 sites (all R&E); the DOE Lab destination or source of each flow is not shown.]

5 What ESnet Is: A large-scale IP network built on a national circuit infrastructure with high-speed connections to all major US and international research and education (R&E) networks. An organization of 30 professionals structured for the service. An operating entity with an FY06 budget of $26.6M. A Tier 1 ISP (direct peerings with all major networks). The primary DOE network provider: provides production Internet service to all of the major DOE Labs* and most other DOE sites. Based on DOE Lab populations, it is estimated that between 50,000 and 100,000 users depend on ESnet for global Internet access; additionally, each year more than 18,000 non-DOE researchers from universities, other government agencies, and private industry use Office of Science facilities. (* PNNL supplements its ESnet service with commercial service.)

6 ESnet Provides Global High-Speed Internet Connectivity for DOE Facilities and Collaborators (2007). [Map: ~45 end sites (Office of Science sponsored and other sponsored), ESnet core hubs, and peering points - commercial, R&E, and international - including SINet (Japan), AARNet (Australia), CA*net4 (Canada), TANet2 (Taiwan), KREONET2 (Korea), GLORIAD (Russia, China), KAREN/REANNZ (New Zealand), USLHCNet/CERN (DOE+CERN funded), GÉANT (France, Germany, Italy, UK, etc.), AMPATH/CLARA (South America), NASA Ames, Internet2/Abilene, and NLR. Link legend: 10 Gb/s SDN core, 10 Gb/s IP core, MAN rings (≥10 Gb/s), Lab-supplied links, OC12 / GigEthernet, OC3 (155 Mb/s), and 45 Mb/s and less.]

7 ESnet's Place in U.S. and International Science. ESnet, Internet2/Abilene, and National Lambda Rail (NLR) provide most of the nation's transit networking for basic science. Abilene provides national transit networking for most of the US universities by interconnecting the regional networks (mostly via the GigaPoPs). ESnet provides national transit networking and ISP service for the DOE Labs. NLR provides various science-specific and network R&D circuits. GÉANT plays a role in Europe similar to that of Abilene and ESnet in the US: it interconnects the European National Research and Education Networks (NRENs), to which the European R&E sites connect. A GÉANT-operated, NSF-funded link currently carries all non-LHC ESnet traffic to Europe, and this is a significant fraction of all ESnet traffic.

8 Operating Science Mission Critical Infrastructure. ESnet is a visible and critical piece of DOE science infrastructure: if ESnet fails, tens of thousands of DOE and university users know it within minutes if not seconds. This requires high reliability and high operational security in the systems that are integral to the operation and management of the network. Secure and redundant mail and Web systems are central to the operation and security of ESnet: trouble tickets and engineering communication are handled by email, and engineering database interfaces are via the Web. Secure network access to hub routers, with backup secure telephone modem access to hub equipment. 24x7 help desk and 24x7 on-call network engineer: trouble@es.net (end-to-end problem resolution).

9 A Changing Science Environment is the Key Driver of the Next Generation ESnet. Large-scale collaborative science - big facilities, massive data, thousands of collaborators - is now a significant aspect of the Office of Science (SC) program. The SC science community is almost equally split between Labs and universities, and SC facilities have users worldwide. Very large international (non-US) facilities (e.g. LHC and ITER) and international collaborators are now a key element of SC science. Distributed systems for data analysis, simulations, instrument operation, etc. are essential and are now common; in fact they dominate data analysis, which now generates 50% of all ESnet traffic.

10 Planning and Building ESnet4. Requirements are the primary drivers for ESnet, which is science focused. Sources of requirements:
1. Office of Science (SC) Program Managers, via the Program Offices Requirements Workshops: 2007 - Basic Energy Sciences, Biological & Environmental Research; 2008 - Fusion Energy Sciences, Nuclear Physics; 2009 - High Energy Physics, Advanced Scientific Computing Research.
2. Direct gathering through interaction with science users of the network. Example case studies (from the 2005/2006 update): Magnetic Fusion, Large Hadron Collider (LHC), Climate Modeling, Spallation Neutron Source.
3. Observation of the network.

11 Science Networking Requirements Aggregation Summary (per science driver: end-to-end reliability; connectivity; end-to-end bandwidth today; end-to-end bandwidth in 5 years; traffic characteristics; network services):
- Magnetic Fusion Energy: reliability 99.999% (impossible without full redundancy); DOE sites, US universities, industry; 200+ Mbps today, 1 Gbps in 5 years; bulk data, remote control; guaranteed bandwidth, guaranteed QoS, deadline scheduling.
- NERSC and ALCF: DOE sites, US universities, international, other ASCR supercomputers; 10 Gbps today, 20 to 40 Gbps in 5 years; bulk data, remote control, remote file system sharing; guaranteed bandwidth, guaranteed QoS, deadline scheduling, PKI / Grid.
- NLCF: DOE sites, US universities, industry, international; backbone bandwidth parity today and in 5 years; bulk data, remote file system sharing.
- Nuclear Physics (RHIC): DOE sites, US universities, international; 12 Gbps today, 70 Gbps in 5 years; bulk data; guaranteed bandwidth, PKI / Grid.
- Spallation Neutron Source: high reliability (24x7 operation); DOE sites; 640 Mbps today, 2 Gbps in 5 years; bulk data.

12 Science Network Requirements Aggregation Summary, continued:
- Advanced Light Source: DOE sites, US universities, industry; 1 TB/day (300 Mbps) today, 5 TB/day (1.5 Gbps) in 5 years; bulk data, remote control; guaranteed bandwidth, PKI / Grid.
- Bioinformatics: DOE sites, US universities; 625 Mbps (12.5 Gbps in two years) today, 250 Gbps in 5 years; bulk data, remote control, point-to-multipoint; guaranteed bandwidth, high-speed multicast.
- Chemistry / Combustion: DOE sites, US universities, industry; 10s of gigabits per second in 5 years; bulk data; guaranteed bandwidth, PKI / Grid.
- Climate Science: DOE sites, US universities, international; 5 PB per year (5 Gbps) in 5 years; bulk data, remote control; guaranteed bandwidth, PKI / Grid.
- Immediate requirements and drivers - High Energy Physics (LHC): reliability 99.95+% (less than 4 hrs/year of downtime); US Tier1 (FNAL, BNL), US Tier2 (universities), international (Europe, Canada); 10 Gbps today, 60 to 80 Gbps in 5 years (30-40 Gbps per US Tier1); bulk data, coupled data analysis processes; guaranteed bandwidth, traffic isolation, PKI / Grid.
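The two tables above mix units (TB/day, PB/year, Mbps, Gbps). The sketch below (illustrative Python, not from the original slides) converts the quoted data volumes into sustained average rates; the stated bandwidth requirements sit well above these raw averages, presumably to leave headroom for bursts and retransmission.

```python
def volume_to_avg_gbps(bytes_total: float, seconds: float) -> float:
    """Average rate in Gb/s needed to move bytes_total in the given period."""
    return bytes_total * 8 / seconds / 1e9

DAY = 86_400
YEAR = 365 * DAY

# Advanced Light Source: 5 TB/day -> ~0.46 Gb/s sustained average,
# versus the 1.5 Gb/s five-year requirement.
print(volume_to_avg_gbps(5e12, DAY))   # ~0.46

# Climate Science: 5 PB/year -> ~1.3 Gb/s sustained average,
# versus the 5 Gb/s five-year requirement.
print(volume_to_avg_gbps(5e15, YEAR))  # ~1.27
```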

13 Are These Estimates Realistic? YES! [Chart: FNAL outbound CMS traffic for 4 months, to Sept. 1, 2007, shown as megabytes/sec of data traffic and gigabits/sec of network traffic, broken down by destination. Max = 8.9 Gb/s (1064 MBy/s of data); average = 4.1 Gb/s (493 MBy/s of data).]
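A side note on the units above: pairing 8.9 Gb/s with 1064 MBy/s only works out if megabytes are counted in binary units (2^20 bytes), which gives ~1061 MBy/s; decimal megabytes would give ~1112 MBy/s. A minimal check (illustrative Python; the binary-megabyte reading is an inference, not stated on the slide):

```python
def gbps_to_mbytes(gbps: float, binary: bool = True) -> float:
    """Convert a network rate in Gb/s to a data rate in MBy/s."""
    bytes_per_sec = gbps * 1e9 / 8
    return bytes_per_sec / (2**20 if binary else 1e6)

print(gbps_to_mbytes(8.9))         # ~1061 MBy/s (binary), close to the slide's 1064
print(gbps_to_mbytes(8.9, False))  # ~1112 MBy/s (decimal)
print(gbps_to_mbytes(4.1))         # ~489 MBy/s, close to the slide's 493
```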

14 Large-Scale Data Analysis Systems (Typified by the LHC) have Several Characteristics that Result in Requirements for the Network and its Services. The systems are data intensive and high-performance, typically moving terabytes a day for months at a time. The systems are high duty-cycle, operating most of the day for months at a time in order to meet the requirements for data movement. The systems are widely distributed, typically spread over continental or inter-continental distances. Such systems depend on network performance and availability, but these characteristics cannot be taken for granted, even in well-run networks, when the multi-domain network path is considered. The applications must be able to get guarantees from the network that there is adequate bandwidth to accomplish the task at hand, and they must be able to get information from the network that allows graceful failure, auto-recovery, and adaptation to unexpected network conditions that are short of outright failure. These requirements are generally true for systems with widely distributed components to be reliable and consistent in performing the sustained, complex tasks of large-scale science. (This slide drawn from [ICFA SCIC].)

15 Enabling Large-Scale Science. These requirements are generally true for systems with widely distributed components to be reliable and consistent in performing the sustained, complex tasks of large-scale science. Networks must provide communication capability that is service-oriented: configurable, schedulable, predictable, reliable, and informative; and the network and its services must be scalable.

16 3. Observed Evolution of Historical ESnet Traffic Patterns. [Chart: ESnet monthly accepted traffic in terabytes/month, January 2000 - June 2007, with the top 100 site-to-site workflows broken out; site-to-site workflow data not available for the earlier years.] ESnet total traffic passed 2 petabytes/month around mid-April 2007. ESnet is currently transporting more than 1 petabyte (1,000 terabytes) per month, and more than 50% of the traffic is now generated by the top 100 site-to-site workflows: large-scale science dominates all ESnet traffic.

17 ESnet Traffic has Increased by 10X Every 47 Months, on Average, Since 1990. [Log plot: ESnet monthly accepted traffic in terabytes/month, January 1990 - June 2007, with order-of-magnitude milestones - 100 MBy/mo (Aug. 1990), 1 TBy/mo (Oct. 1993), 10 TBy/mo (Jul. 1998), 100 TBy/mo (Nov. 2001), 1 PBy/mo (Apr. 2006) - spaced 38, 57, 40, and 53 months apart.]
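Taken at face value, "10x every 47 months" pins down the growth rate. The sketch below (illustrative Python, not from the original slides) derives the equivalent annual factor (~1.8x per year) and applies the trend to the ~1.5 Gbps backbone load quoted on slide 19.

```python
# Growth rate implied by "traffic increases 10x every 47 months".
monthly_factor = 10 ** (1 / 47)        # ~1.050, i.e. ~5% growth per month
annual_factor = monthly_factor ** 12   # ~1.80,  i.e. ~80% growth per year

def project(value_now: float, months: int) -> float:
    """Extrapolate a traffic level forward along the historical trend."""
    return value_now * monthly_factor ** months

# ~1.5 Gb/s average backbone load today -> ~15.7 Gb/s in 4 years,
# consistent with slide 19's "10x in 4 years" projection.
print(round(annual_factor, 2), round(project(1.5, 48), 1))
```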

18 What is the High-Level View of all ESnet Traffic? [Chart: ESnet inter-sector traffic summary, May 2007 - percentages of traffic entering and leaving ESnet exchanged with DOE sites, commercial networks, research and education networks, and international (almost entirely R&E) sites, plus traffic between ESnet sites.] Traffic notes: more than 90% of all traffic is Office of Science traffic.

19 Requirements from Network Utilization Observation. In 4 years, we can expect a 10x increase in traffic over current levels, even without the addition of production LHC traffic. The nominal average load on the busiest backbone links is ~1.5 Gbps today; in 4 years that figure will be ~15 Gbps based on current trends. Measurements of this type are science-agnostic: it doesn't matter who the users are, the traffic load is increasing exponentially. Predictions based on this sort of forward projection tend to be conservative estimates of future requirements because they cannot predict new uses.

20 Requirements from Traffic Flow Observations. Most ESnet science traffic has a source or sink outside of ESnet. This drives the requirement for high-bandwidth peering, and reliability and bandwidth requirements demand that peering be redundant. There are multiple 10 Gbps peerings today; ESnet must be able to add more bandwidth flexibly and cost-effectively. Bandwidth and service guarantees must traverse R&E peerings, so collaboration with other R&E networks on a common framework is critical - a seamless fabric. Large-scale science is now the dominant user of the network, and its traffic patterns are different from those of the commodity Internet; satisfying the demands of large-scale science traffic into the future will require a purpose-built, scalable architecture.

21 Summary of All Requirements To-Date.
Requirements from SC Programs: 1A) Provide consulting on system / application network tuning.
Requirements from science case studies: 2A) Build the ESnet core up to 100 Gb/s within 5 years. 2B) Deploy network to accommodate LHC collaborator footprint. 2C) Implement network to provide for LHC data path loadings. 2D) Provide the network as a service-oriented capability.
Requirements from observing traffic growth and change trends in the network: 3A) Provide 15 Gb/s core within four years and 150 Gb/s core within eight years. 3B) Provide a rich diversity and high bandwidth for R&E peerings. 3C) Economically accommodate a very large volume of circuit-like traffic.

22 Estimated Aggregate Link Loadings. [Map: ESnet core topology - Seattle, Portland, Boise, Sunnyvale, Salt Lake City, Denver, KC, Chicago, Indianapolis, Cleveland, Pittsburgh, Boston, NYC, Philadelphia, Washington DC, LA, San Diego, Albuquerque, El Paso, Tulsa, Houston, Baton Rouge, Nashville, Atlanta, Raleigh, Jacksonville - with committed bandwidth labeled in Gb/s per link; unlabeled links are 10 Gb/s; existing site-supplied circuits shown. Legend: ESnet IP switch/router hubs, IP switch-only hubs, SDN switch hubs, Layer 1 optical nodes (at eventual ESnet Points of Presence and others not currently in ESnet plans), Lab sites, ESnet IP core (1λ), Science Data Network core, SDN core NLR links, Lab-supplied links, LHC-related links, MAN links, international IP connections.]

23 Estimated Aggregate Link Loadings. [Map: the same core topology at a later stage (>1 λ per segment), with link capacities of 40-50 Gb/s (4λ-5λ) on most segments; labeled links are in Gb/s, unlabeled links are 10 Gb/s; committed bandwidth and link capacity both shown. Legend as in the preceding map.]

24 ESnet4 - The Response to the Requirements.
I) A new network architecture and implementation strategy: provide two networks, an IP network and a circuit-oriented Science Data Network (SDN). This reduces the cost of handling high-bandwidth data flows: highly capable routers are not necessary when every packet goes to the same place, so lower-cost (factor of 5x) switches can be used to route the packets. A rich and diverse network topology for flexible management and high reliability, with dual connectivity at every level for all large-scale science sources and sinks. A partnership with the US research and education community to build a shared, large-scale, R&E-managed optical infrastructure: a scalable approach to adding bandwidth to the network, with dynamic allocation and management of optical circuits.
II) Development and deployment of a virtual circuit service: develop the service cooperatively with the networks that are intermediate between the DOE Labs and major collaborators, to ensure end-to-end interoperability.
III) Development and deployment of service-oriented, user-accessible network monitoring systems.
IV) Consulting on system / application network performance tuning.

25 Next Generation ESnet: I) Architecture and Configuration. Main architectural elements and the rationale for each element:
1) A high-reliability IP core (e.g. the current ESnet core), with full-service IP routers, to address general science requirements and Lab operational requirements, to provide backup for the SDN core, and to serve as a vehicle for science services.
2) Metropolitan Area Network (MAN) rings to provide dual site connectivity for reliability, much higher site-to-core bandwidth, support for both production IP and circuit-based traffic, and multiple interconnections of the SDN and IP cores.
2a) Loops off of the backbone rings, to provide dual site connections where MANs are not practical.
3) A Science Data Network (SDN) core, built from less expensive router/switches, for provisioned, guaranteed-bandwidth circuits to support large, high-speed science data flows; very high total bandwidth; multiple connections to the MAN rings for protection against hub failure; and an alternate path for production IP traffic. The initial configuration is targeted at the LHC, which is also the first step toward the general configuration that will address all SC requirements; other, as-yet-unknown bandwidth requirements can be met by adding lambdas.

26 ESnet Target Architecture: IP WAN + Science Data Network WAN + Metro Area Rings. [Diagram: IP core and SDN core rings spanning Seattle, Sunnyvale, LA, San Diego, Denver, Albuquerque, Chicago, Cleveland, New York, Washington DC, and Atlanta - roughly 1625 miles / 2545 km north-south and 2700 miles / 4300 km east-west - with metropolitan area rings and loops off the backbone for Lab access, 10-50 Gbps circuits, and international connections at multiple points. Elements: production IP wide area network, Science Data Network WAN, metropolitan area networks or backbone loops for Lab access, international connections; IP core hubs, SDN hubs, primary DOE Labs, and possible hubs marked.]

27 ESnet Metropolitan Area Network Ring Architecture for High Reliability Sites. [Diagram: a MAN fiber ring with 2-4 x 10 Gbps channels provisioned initially and expansion capacity beyond that. The ring connects a large science site to the ESnet production IP core hub (T320 IP core router, east and west paths) and to the SDN core switches (east and west), with US LHCnet switches attached to the SDN core. ESnet MAN switches use independent port cards supporting multiple 10 Gb/s line interfaces. Services delivered over the ring: ESnet-managed λ / circuit services, ESnet production IP service, ESnet-managed virtual circuit services tunneled through the IP backbone, and SDN circuits to site systems via the site gateway/edge routers and site LAN.]

28 ESnet4. Internet2 has partnered with Level 3 Communications Co. and Infinera Corp. for a dedicated optical fiber infrastructure with a national footprint and a rich topology - the Internet2 Network. The fiber will be provisioned with Infinera Dense Wave Division Multiplexing equipment that uses an advanced, integrated optical-electrical design. Level 3 will maintain the fiber and the DWDM equipment. The DWDM equipment will initially be provisioned to provide 10 optical circuits (lambdas - λs) across the entire fiber footprint (40/80 λs is the maximum). ESnet has partnered with Internet2 to: share the optical infrastructure; develop new circuit-oriented network services; and explore mechanisms that could be used for the ESnet Network Operations Center (NOC) and the Internet2/Indiana University NOC to back each other up for disaster recovery purposes.

29 ESnet4. ESnet is building its next-generation IP network and its new circuit-oriented Science Data Network primarily on the Internet2 circuits (λs) that are dedicated to ESnet, together with a few National Lambda Rail and other circuits. ESnet will provision and operate its own routing and switching hardware, installed in various commercial telecom hubs around the country, as it has done for the past 20 years. ESnet's peering relationships with the commercial Internet, various US research and education networks, and numerous international networks will continue and evolve as they have for the past 20 years.

30 ESnet4. ESnet4 will also involve an expansion of the multi-10 Gb/s Metropolitan Area Rings in the San Francisco Bay Area, Chicago, Long Island, Newport News (VA / Washington, DC area), and Atlanta. These rings provide multiple, independent connections for ESnet sites to the ESnet core network, and they are expandable. Several 10 Gb/s links provided by the Labs will be used to establish multiple, independent connections to the ESnet core (currently PNNL and ORNL).

31 Internet2 and ESnet Optical Node. [Diagram: an optical node in which the Internet2/Level3 national optical infrastructure (Infinera DTN with band multiplexing modules, fiber east/west/north/south) feeds ESnet's SDN core switch and IP core router (M320), Internet2's T640, a Ciena CoreDirector grooming device, and optical interfaces to R&E regional networks, plus a network research testbed implemented as an optical overlay with dynamically allocated and routed waves (future). Both ESnet metro-area networks and Internet2 attach support devices for measurement, out-of-band access, monitoring, and security, along with various equipment and experimental control-plane management systems.]

32 ESnet 3 Backbone as of January 1, 2007. [Map: hubs at Seattle, Sunnyvale, San Diego, Albuquerque, El Paso, Chicago, Atlanta, New York City, and Washington DC; future ESnet hubs marked. Legend: 10 Gb/s SDN core (NLR), 10/2.5 Gb/s IP core (Qwest), MAN rings (≥10 Gb/s), Lab-supplied links.]

33 ESnet 4 Backbone Target, September 30, 2007. [Map: hubs at Seattle, Boise, Sunnyvale, Denver, Los Angeles, San Diego, Albuquerque, El Paso, Kansas City, Houston, Chicago, Nashville, Atlanta, Cleveland, Boston, New York City, and Washington DC; future ESnet hubs marked. Legend: 10 Gb/s SDN core (NLR), 10/2.5 Gb/s IP core (Qwest), 10 Gb/s IP core (Level3), 10 Gb/s SDN core (Level3), MAN rings (≥10 Gb/s), Lab-supplied links.]

34 ESnet4 Metro Area Rings, 2007 Configurations. [Map: West Chicago MAN (600 W. Chicago, Starlight, FNAL, ANL, USLHCNet); Long Island MAN (32 AoA NYC, BNL, USLHCNet); San Francisco Bay Area MAN (JGI, LBNL, SLAC, NERSC, LLNL, SNLL); Newport News - Elite (JLab, ELITE, ODU, MATP, Wash. DC); Atlanta MAN (ORNL, Nashville, 56 Marietta (SOX), 180 Peachtree, Wash. DC). All circuits are 10 Gb/s. Legend: ESnet IP switch/router hubs, IP switch-only hubs, SDN switch hubs, Layer 1 optical nodes, Lab sites, ESnet IP core, Science Data Network core, SDN core NLR links (existing), Lab-supplied links, LHC-related links, MAN links, international IP connections.]

35 ESnet4 IP + SDN, 2008 Configuration (Estimated). [Map: core topology with one to two lambdas (1λ-2λ) per segment; hub legend distinguishes ESnet IP router hubs, IP internal switch hubs, SDN OSCARS/MPLS switch hubs, and SDN internal switch hubs; links include the ESnet IP network (Internet2 circuits), ESnet Science Data Network (Internet2 and NLR circuits), Lab-supplied links, LHC-related links, MAN links, and international IP connections; some segments marked status indefinite / not installed.]

36 Estimated ESnet Configuration (some of the circuits may be allocated dynamically from a shared pool). [Map: core topology with two to three lambdas (2λ-3λ) per segment; legend as in the preceding configuration maps, with Internet2 circuit numbers shown.]

37 ESnet4 IP + SDN, 2011 Configuration. [Map: core topology with three to five lambdas (3λ-5λ) per segment; legend as in the preceding configuration maps, with Internet2 circuit numbers shown.]

38 ESnet4 Planned Configuration. [Map: core network fiber path of ~14,000 miles / 24,000 km (~1625 miles / 2545 km north-south, ~2700 miles / 4300 km east-west); production IP core (10 Gbps); Science Data Network core; MANs (20-60 Gbps) or backbone loops for site access; international connections including Canada (CANARIE), Asia-Pacific, Australia, GLORIAD (Russia and China), South America (AMPATH), Europe (GÉANT), and CERN (30 Gbps); IP core hubs, SDN (switch) hubs, primary DOE Labs, possible hubs, and high-speed cross-connects with Internet2/Abilene marked.]

39 New Network Service: Virtual Circuits.
- Guaranteed bandwidth service: user-specified bandwidth, requested and managed in a Web Services framework.
- Traffic isolation and traffic engineering: provides for high-performance, non-standard transport mechanisms that cannot co-exist with commodity TCP-based transport, and enables the engineering of explicit paths to meet specific requirements, e.g. bypassing congested links by using lower-bandwidth, lower-latency paths.
- End-to-end (cross-domain) connections between Labs and collaborating institutions.
- Secure connections: the circuits are secure to the edges of the network (the site boundary) because they are managed by the control plane of the network, which is isolated from the general traffic.
- Reduced cost of handling high-bandwidth data flows: highly capable routers are not necessary when every packet goes to the same place; lower-cost (factor of 5x) switches can be used to route the packets.

40 Virtual Circuit Service Functional Requirements.
- Support user/application VC reservation requests: source and destination of the VC; bandwidth, latency, start time, and duration of the VC; traffic characteristics (e.g. flow specs) to identify traffic designated for the VC. (A sketch of such a request appears after this list.)
- Manage allocations of scarce, shared resources: authentication to prevent unauthorized access to this service; authorization to enforce policy on reservation/provisioning; gathering of usage data for accounting.
- Provide virtual circuit setup and teardown mechanisms and security: widely adopted standard protocols (such as MPLS and GMPLS) are well understood within a single domain; cross-domain interoperability is the subject of ongoing, collaborative development; secure end-to-end connection setup is provided by the network control plane; accommodate heterogeneous circuit abstractions (e.g. MPLS, GMPLS, VLANs, VCAT/LCAS).
- Enable the claiming of reservations: traffic destined for the VC must be differentiated from regular traffic.
- Enforce usage limits: per-VC admission control polices usage, which in turn facilitates guaranteed bandwidth; consistent per-hop QoS throughout the network yields transport predictability.
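To make the functional requirements concrete, the sketch below (illustrative Python; the names and fields are hypothetical, not ESnet's OSCARS API) models a VC reservation request and a per-link admission-control check against committed bandwidth.

```python
from dataclasses import dataclass

@dataclass
class VCRequest:
    # Fields mirror the reservation parameters listed above (names are illustrative).
    src: str              # e.g. "fnal-rtr-1"
    dst: str              # e.g. "bnl-rtr-2"
    bandwidth_gbps: float
    start: float          # epoch seconds
    duration_s: float
    flow_spec: dict       # traffic characteristics identifying VC traffic

def admit(request: VCRequest, path: list[str],
          capacity: dict[str, float], committed: dict[str, float]) -> bool:
    """Per-link admission control: admit only if every link on the path
    can carry the requested bandwidth on top of existing commitments."""
    for link in path:
        if committed.get(link, 0.0) + request.bandwidth_gbps > capacity[link]:
            return False
    # Reservation accepted: record the new commitment on each link.
    for link in path:
        committed[link] = committed.get(link, 0.0) + request.bandwidth_gbps
    return True

# Example: a 10 Gb/s circuit over two 20 Gb/s links (all values made up).
req = VCRequest("fnal-rtr-1", "bnl-rtr-2", 10.0, 0.0, 3600.0, {"proto": "tcp"})
caps = {"chi-nyc": 20.0, "nyc-bnl": 20.0}
used = {"chi-nyc": 5.0}
print(admit(req, ["chi-nyc", "nyc-bnl"], caps, used))  # True: 15 and 20 Gb/s free
```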

41 Environment of Science is Inherently Multi-Domain. End points will be at independent institutions - campuses or research institutes - that are served by ESnet, Abilene, GÉANT, and their regional networks. There are complex inter-domain issues: a typical circuit will involve five or more domains, so of necessity this involves collaboration with other networks. For example, a connection between FNAL and DESY involves five domains, traverses four countries, and crosses seven time zones: FNAL (AS3152) [US] - ESnet (AS293) [US] - GÉANT (AS20965) [Europe] - DFN (AS680) [Germany] - DESY (AS1754) [Germany].

42 OSCARS [6]. Inter-domain virtual circuit reservation involves brokers in each domain that present a uniform view of the different domains' circuit management mechanisms (e.g. OSCARS in ESnet). (Diagram by Tom Lehman, ISI.)
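A minimal sketch of the brokered, all-or-nothing setup pattern the diagram describes (illustrative Python; the broker interface is hypothetical and is not the actual OSCARS inter-domain protocol):

```python
class DomainBroker:
    """Presents a uniform reserve/release interface over a domain's
    native circuit management system (MPLS, GMPLS, VLANs, ...)."""
    def __init__(self, name: str):
        self.name = name
        self.reserved = []

    def reserve(self, circuit_id: str, gbps: float) -> bool:
        # A real broker would call the domain's provisioning system here.
        self.reserved.append(circuit_id)
        return True

    def release(self, circuit_id: str) -> None:
        self.reserved.remove(circuit_id)

def setup_interdomain_circuit(circuit_id: str, gbps: float,
                              brokers: list[DomainBroker]) -> bool:
    """Reserve the circuit segment-by-segment across domains; roll back
    everything if any domain refuses (a partial circuit is useless)."""
    done = []
    for broker in brokers:
        if not broker.reserve(circuit_id, gbps):
            for b in done:
                b.release(circuit_id)
            return False
        done.append(broker)
    return True

# e.g. the FNAL-to-DESY path from the previous slide:
path = [DomainBroker(n) for n in ("FNAL", "ESnet", "GEANT", "DFN", "DESY")]
print(setup_interdomain_circuit("vc-42", 10.0, path))
```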

43 ESnet Virtual Circuit Service Roadmap. [Timeline:]
- Dedicated virtual circuits, progressing to dynamic virtual circuit allocation.
- Dynamic provisioning of Multi-Protocol Label Switching (MPLS) circuits in IP nets (layer 3) and in VLANs for Ethernets (layer 2).
- Interoperability between VLANs and MPLS circuits (layers 2 & 3).
- Interoperability between GMPLS circuits, VLANs, and MPLS circuits (layers 1-3), via Generalized MPLS (GMPLS).
- Initial production service, progressing to full production service.

44 Monitoring as a Service-Oriented Communications Service. perfSONAR is a community effort to define network management data exchange protocols and standardized measurement data gathering and archiving. Path performance monitoring is an example of a perfSONAR application: it provides users and applications with end-to-end, multi-domain traffic and bandwidth availability, and with real-time performance data such as path utilization and/or packet drop. Multi-domain path performance monitoring tools are in development based on perfSONAR protocols and infrastructure. One example, Traceroute Visualizer [TrViz], has been deployed in about 10 R&E networks in the US and Europe that have deployed at least some of the required perfSONAR measurement archives to support the tool.
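In the spirit of the path-monitoring application described above, the sketch below (illustrative Python; the data layout is hypothetical, not the perfSONAR schema) combines per-hop measurements along a traceroute path into the end-to-end figures a user cares about: available bandwidth is limited by the worst hop, and loss compounds along the path.

```python
from dataclasses import dataclass

@dataclass
class HopMeasurement:
    # One hop on a multi-domain path, as reported by that domain's
    # measurement archive (fields are illustrative).
    hop: str
    capacity_gbps: float
    utilization_gbps: float
    loss_pct: float

def path_summary(hops: list[HopMeasurement]) -> dict:
    """End-to-end view of a multi-domain path: available bandwidth is
    the minimum headroom over all hops; delivery compounds hop by hop."""
    available = min(h.capacity_gbps - h.utilization_gbps for h in hops)
    delivered = 1.0
    for h in hops:
        delivered *= 1 - h.loss_pct / 100
    return {"available_gbps": available, "loss_pct": (1 - delivered) * 100}

# e.g. a three-domain path with one congested hop (made-up numbers):
hops = [HopMeasurement("esnet-chi", 10, 4.1, 0.0),
        HopMeasurement("geant-ams", 10, 9.2, 0.1),
        HopMeasurement("dfn-ham", 10, 2.0, 0.0)]
print(path_summary(hops))  # bottleneck is geant-ams: ~0.8 Gb/s available
```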

45 Federated Trust Services - Support for Large-Scale Collaboration. Remote, multi-institutional identity authentication is critical for distributed, collaborative science in order to permit sharing of widely distributed computing and data resources and other Grid services. Public Key Infrastructure (PKI) is used to formalize the existing web of trust within science collaborations and to extend that trust into cyberspace. The function, form, and policy of the ESnet trust services are driven entirely by the requirements of the science community and by direct input from the science community. International-scope trust agreements that encompass many organizations are crucial for large-scale collaborations. ESnet has led in negotiating and managing the cross-site, cross-organization, and international trust relationships, to provide policies that are tailored for collaborative science. This service, together with the associated ESnet PKI service, is the basis of the routine sharing of HEP Grid-based computing resources between the US and Europe.
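On the resource side, PKI-based federation of this kind typically bottoms out in something like the Grid community's grid-mapfile, which maps a certificate subject DN (vouched for by a trusted CA) to a local account. A minimal parser (illustrative Python; the slide does not mention grid-mapfiles specifically, and the sample DNs are made up):

```python
def parse_gridmap(text: str) -> dict[str, str]:
    """Parse grid-mapfile lines of the form: "subject DN" local_account."""
    mapping = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        dn, _, account = line.rpartition('" ')  # split at the closing quote
        mapping[dn.lstrip('"')] = account
    return mapping

sample = '''
# DN -> local account (example entries, not real users)
"/DC=org/DC=doegrids/OU=People/CN=Alice Example 12345" alice
"/DC=org/DC=doegrids/OU=People/CN=Bob Example 67890" bob
'''
users = parse_gridmap(sample)
print(users["/DC=org/DC=doegrids/OU=People/CN=Alice Example 12345"])  # alice
```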

46 ESnet Conferencing Service (ECS). A highly successful ESnet Science Service that provides audio, video, and data teleconferencing to support human collaboration in DOE science. Seamless voice, video, and data teleconferencing is important for geographically dispersed scientific collaborators, and ECS provides the central scheduling essential for global collaborations. ESnet serves more than a thousand DOE researchers and collaborators worldwide: H.323 (IP) videoconferences (4,000 port hours per month and rising); audio conferencing (2,500 port hours per month, constant); data conferencing (150 port hours per month); with Web-based, automated registration and scheduling for all of these services. Very cost effective (saves the Labs a lot of money).

47 Summary. ESnet is currently satisfying its mission by enabling SC science that is dependent on networking and distributed, large-scale collaboration: "The performance of ESnet over the past year has been excellent, with only minimal unscheduled down time. The reliability of the core infrastructure is excellent. Availability for users is also excellent." - DOE 2005 annual review of LBL. ESnet has put considerable effort into gathering requirements from the DOE science community, and it has a forward-looking plan and the expertise to meet the five-year SC requirements. A Lehman review of ESnet (Feb. 2006) strongly endorsed the plan presented here.

48 References
1. High Performance Network Planning Workshop, August 2002.
2. Science Case Studies Update, 2006.
3. DOE Science Networking Roadmap Meeting, June 2003.
4. DOE Workshop on Ultra High-Speed Transport Protocols and Network Provisioning for Large-Scale Science Applications, April 2003.
5. Science Case for Large Scale Simulation, June 2003.
6. Workshop on the Road Map for the Revitalization of High End Computing, June 2003 (public report).
7. ASCR Strategic Planning Workshop, July 2003.
8. Planning Workshops - Office of Science Data-Management Strategy, March & May 2004.
9. For more information contact Chin Guok. Also see:
[LHC/CMS]
[ICFA SCIC] "Networking for High Energy Physics." International Committee for Future Accelerators (ICFA), Standing Committee on Inter-Regional Connectivity (SCIC), Professor Harvey Newman, Caltech, Chairperson.
[E2EMON] GÉANT2 E2E Monitoring System, developed and operated by JRA4/WI3, with implementation done at DFN.
[TrViz] ESnet PerfSONAR Traceroute Visualizer.


NSF Networking Briefing. Peter O Neil August 13, 2004 NSF Networking Briefing Peter O Neil August 13, 2004 Theme Desktop, Campus, Metro, Regional, and National networking interfaces must evolve together for e2e performance as demand warrants, emerging capabilities

More information

BW Protection. 2002, Cisco Systems, Inc. All rights reserved.

BW Protection. 2002, Cisco Systems, Inc. All rights reserved. BW Protection 2002, Cisco Systems, Inc. All rights reserved. 1 Cisco MPLS - Traffic Engineering for VPNs Amrit Hanspal Sr. Product Manager MPLS & QoS Internet Technologies Division 2 Agenda MPLS Fundamentals

More information

Network Futures: Pacific Wave Glimpses of a New Internet at igrid2005 and S C2005 etc.?

Network Futures: Pacific Wave Glimpses of a New Internet at igrid2005 and S C2005 etc.? Ron Johnson 2006 Network Futures: Pacific Wave Glimpses of a New Internet at igrid2005 and S C2005 etc.? Ron Johnson, Vice President University of Washington and Pacific NorthWest GigaPoP 24 January 2006

More information

400G: Deployment at a National Lab

400G: Deployment at a National Lab 400G: Deployment at a National Lab Chris Tracy (Esnet) *Jason R. Lee (NERSC) June 30, 2016-1 - Concept - 2 - Concept: Use case This work originally began as a white paper in December 2013, in which Esnet

More information

Global Leased Line Service

Global Leased Line Service SERVICE BROCHURE Global Leased Line Service from NTT Communications Predictable, low latency global networking with the highest quality A global managed bandwidth solution from NTT Communications (NTT

More information

Internet2 Network Service Descriptions DRAFT December 4, 2006

Internet2 Network Service Descriptions DRAFT December 4, 2006 DRAFT December 4, 2006 Table of Contents 1. Network Infrastructure and Services... 1 1.1 Introduction... 1 1.2 Architectural Overview... 3 1.3 Circuit Service Descriptions... 5 1.4 IP Services... 9 2.

More information

ISPs, Backbones and Peering

ISPs, Backbones and Peering ISPs, Backbones and Peering 14-740: Fundamentals of Computer Networks Bill Nace Material from Computer Networking: A Top Down Approach, 6 th edition. J.F. Kurose and K.W. Ross Administrivia Norton2010

More information

The Current Status of Taipei GigaPoP

The Current Status of Taipei GigaPoP The Current Status of Taipei GigaPoP Simon C. Lin Computing Centre, Academia Sinica Taipei, Taiwan January 2000 sclin@sinica.edu.tw Copyright 2000 by Simon C. Lin 1 Current Internet Status Internet has

More information

One Planet. One Network. Infinite Possibilities.

One Planet. One Network. Infinite Possibilities. One Planet. One Network. Infinite Possibilities. IPv6 in the Global Crossing IP Network May 26, 2005 Ed Bursk, Vice President Government Global Crossing Overview Global Crossing was founded seven years

More information

Pacific Wave: Building an SDN Exchange

Pacific Wave: Building an SDN Exchange Pacific Wave: Building an SDN Exchange Will Black, CENIC - Pacific Wave Internet2 TechExchange San Francisco, CA Pacific Wave: Overview Joint project between CENIC and PNWGP Open Exchange supporting both

More information

IRNC Kickoff Meeting

IRNC Kickoff Meeting ! IRNC Kickoff Meeting Internet2 Global Summit Washington DC April 26, 2015 Julio Ibarra Florida International University Principal Investigator julio@fiu.edu ! Outline Backbone: AmLight Express and Protect

More information

Metro Ethernet for Government Enhanced Connectivity Drives the Business Transformation of Government

Metro Ethernet for Government Enhanced Connectivity Drives the Business Transformation of Government Metro Ethernet for Government Enhanced Connectivity Drives the Business Transformation of Government Why You Should Choose Cox Metro Ethernet To meet the critical demands of better supporting local emergency

More information

JGN2plus. Presenter: Munhwan Choi

JGN2plus. Presenter: Munhwan Choi JGN2plus Presenter: Munhwan Choi JGN2plus Type of Services Network Structure R&D with SPARC Status of Utilization Overview JGN2plus JGN2plus looks for the visions of the future ICT society through activities

More information

Network Connectivity and Mobility

Network Connectivity and Mobility Network Connectivity and Mobility BSAD 141 Dave Novak Topics Covered Lecture is structured based on the five elements of creating a connected world from the text book (with additional content) 1. Network

More information

AWS Pilot Report M. O Connor, Y. Hines July 2016 Version 1.3

AWS Pilot Report M. O Connor, Y. Hines July 2016 Version 1.3 AWS Pilot Report M. O Connor, Y. Hines July 2016 Version 1.3 Ernest Orlando Lawrence Berkeley National Laboratory 1 Cyclotron Road, Berkeley, CA 94720 8148 This work was supported by the Director, Office

More information

Optical considerations for nextgeneration

Optical considerations for nextgeneration Optical considerations for nextgeneration network Inder Monga Executive Director, ESnet Division Director, Scientific Networking Lawrence Berkeley National Lab 9 th CEF Networks Workshop 2017 September

More information

Cisco and the Internet - and innovation. October 29, 2004

Cisco and the Internet - and innovation. October 29, 2004 Cisco and the Internet - and innovation October 29, 2004 Robert J. (Bob) (raiken@cisco.com) 1 Why am I a here? I m not a father of the Internet I m not a grandfather of the Internet I am not even a young

More information

BigData Express: Toward Predictable, Schedulable, and High-Performance Data Transfer. BigData Express Research Team November 10, 2018

BigData Express: Toward Predictable, Schedulable, and High-Performance Data Transfer. BigData Express Research Team November 10, 2018 BigData Express: Toward Predictable, Schedulable, and High-Performance Data Transfer BigData Express Research Team November 10, 2018 Many people s hard work FNAL: ESnet: icair/starlight: KISTI: Qiming

More information

Broadband Networks in Asia

Broadband Networks in Asia 2002.9.9v5 Broadband Networks in Asia 2002. 9. 26 Kilnam Chon KAIST http://cosmos.kaist.ac.kr Broadband Internet is Asian Phenomenon ( So is Wireless Internet ) Contents 1. Broadband Networks 2. Current

More information

TRANSWORLD IPLC. Another variant of IPLC is International MPLS.

TRANSWORLD IPLC. Another variant of IPLC is International MPLS. IPLC TRANSWORLD IPLC Supported by our technically advanced core network and peering relationships with Tier 1 bandwidth providers across the globe, Transworld IPLC service provides clear channel bandwidth

More information

Building Interconnection 2017 Steps Taken & 2018 Plans

Building Interconnection 2017 Steps Taken & 2018 Plans Building Interconnection 2017 Steps Taken & 2018 Plans 2017 Equinix Inc. 2017 Key Highlights Expansion - new markets Launch - Flexible DataCentre Hyperscaler edge Rollout - IXEverywhere - SaaS, IoT & Ecosystems

More information

Strategy for SWITCH's next generation optical network

Strategy for SWITCH's next generation optical network Strategy for SWITCH's next generation optical network Terena Network Architecture Workshop 2012 Felix Kugler, Willi Huber felix.kugler@switch.ch willi.huber@switch.ch 21.11.2012 SWITCH 2012 Introduction!

More information

SD-WAN Transform Your Agency

SD-WAN Transform Your Agency Federal SD-WAN Transform Your Agency 1 Overview Is your agency facing network traffic challenges? Is migration to the secured cloud hogging scarce bandwidth? How about increased mobile computing that is

More information

Conference The Data Challenges of the LHC. Reda Tafirout, TRIUMF

Conference The Data Challenges of the LHC. Reda Tafirout, TRIUMF Conference 2017 The Data Challenges of the LHC Reda Tafirout, TRIUMF Outline LHC Science goals, tools and data Worldwide LHC Computing Grid Collaboration & Scale Key challenges Networking ATLAS experiment

More information

The National Center for Genome Analysis Support as a Model Virtual Resource for Biologists

The National Center for Genome Analysis Support as a Model Virtual Resource for Biologists The National Center for Genome Analysis Support as a Model Virtual Resource for Biologists Internet2 Network Infrastructure for the Life Sciences Focused Technical Workshop. Berkeley, CA July 17-18, 2013

More information

IPv6 in Internet2. Rick Summerhill Associate Director, Backbone Network Infrastructure, Internet2

IPv6 in Internet2. Rick Summerhill Associate Director, Backbone Network Infrastructure, Internet2 IPv6 in Internet2 Rick Summerhill Associate Director, Backbone Network Infrastructure, Internet2 North American IPv6 Global Summit Arlington, VA 8 December 2003 Outline General Internet2 Infrastructure

More information

Public Cloud Connection for R&E Network. Jin Tanaka APAN-JP/KDDI

Public Cloud Connection for R&E Network. Jin Tanaka APAN-JP/KDDI Public Cloud Connection for R&E Network Jin Tanaka APAN-JP/KDDI 45th APAN Meeting in Singapore 28th March 2018 Hyper Scale Public cloud and research & science data NASA EOSDIS(Earth Observing System Data

More information

Minimizing Collateral Damage by Proactive Surge Protection

Minimizing Collateral Damage by Proactive Surge Protection Minimizing Collateral Damage by Proactive Surge Protection Jerry Chou, Bill Lin University of California, San Diego Subhabrata Sen, Oliver Spatscheck AT&T Labs-Research ACM SIGCOMM LSAD Workshop, Kyoto,

More information