Programmable Information Highway (with no Traffic Jams)

1 Programmable Information Highway (with no Traffic Jams). Inder Monga, Energy Sciences Network, Scientific Networking Division, Lawrence Berkeley National Lab.

2 Exponential Growth. ESnet accepted traffic, Jan 1990 – Aug 2012 (log scale): steady exponential growth at a constant CAGR since 1990, reaching roughly 10 PB/month by 2012. Is there an impedance mismatch? [Chart: monthly accepted petabytes, Jan 1990 – Jul 2012. Sources: Wikipedia; NERSC.]
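
To make the growth claim concrete, here is a small sketch of the compound-annual-growth-rate arithmetic behind a log-scale traffic chart; the start volume and year span are illustrative placeholders, not ESnet's actual figures.

```python
# Hypothetical CAGR arithmetic for a traffic curve that looks straight
# on a log scale. The volumes below are placeholders, not ESnet data.
start_pb, end_pb, years = 0.001, 10.0, 22   # PB/month, 1990 vs. 2012 (illustrative)

cagr = (end_pb / start_pb) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")          # ~52% per year for these inputs

# A steady CAGR r means traffic(t) = start * (1 + r)**t,
# which plots as a straight line on a log-scale y axis.
```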

3 HEP as a Prototype for Data-Intensive Science. Data courtesy of Harvey Newman (Caltech), Richard Mount (SLAC), and the Belle II CHEP 2012 presentation.

4 Moving data is going to be a way of life

5 Network: an integral part of the application workflow. [Map of the ESnet footprint with site and hub nodes: SEAT, PNNL, LBNL, JGI, ANL, AMES, SUNN, Salt Lake, SNLL, LLNL, LOSA, SDSC, LASV, ALBU, LANL, SNLA, STAR, EQCH, CLEV, PPPL, GFDL, PU Physics, JLAB, BNL.]

6 A Network-Centric View of LHC (source: Bill Johnston). From detector to analysis: the detector produces 1 PB/s; the Level 1 and 2 triggers sit O(1-10) meters away; the Level 3 trigger O(10-100) meters away; and the CERN Computer Center, O(1) km away, receives ~50 Gb/s (25 Gb/s ATLAS, 25 Gb/s CMS). LHC Tier 0 keeps a deep archive and sends data 500-10,000 km onward to the LHC Tier 1 data centers, which feed the LHC Tier 2 analysis centers over the LHC Optical Private Network (LHCOPN) and the LHC Open Network Environment (LHCONE). [Table: CERN-to-Tier 1 distances (miles/km) for France, Italy, UK, Netherlands, Germany, Spain, Nordic, USA (New York), USA (Chicago), Canada (BC), and Taiwan; values not recoverable.]
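
As a quick worked example of the data reduction the slide implies, the sketch below compares the 1 PB/s detector rate with the ~50 Gb/s that actually leaves for Tier 0.

```python
# Rough data-reduction arithmetic implied by the slide (1 PB/s at the
# detector vs. ~50 Gb/s leaving the trigger farm for Tier 0).
detector_rate_bps = 1e15 * 8      # 1 PB/s expressed in bits/s
tier0_rate_bps    = 50e9          # ~50 Gb/s (25 ATLAS + 25 CMS)
reduction = detector_rate_bps / tier0_rate_bps
print(f"The trigger system discards all but 1 in {reduction:,.0f} bits")
# -> roughly 1 in 160,000
```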

7 A Network-Centric View of the SKA (source: Bill Johnston). Receptors/sensors at ~200 km average distance deliver a ~15,000 Tb/s aggregate to the correlator/data processor; ~1,000 km further on, a 400 Tb/s aggregate leaves the SKA site. The long haul to a European distribution point is ~25,000 km (Perth to London via the USA) or ~13,000 km (South Africa to London). The tier structure is hypothetical (based on the LHC experience): national tier 1 supercomputers take the aggregate, with one fiber data path per tier 1 data center at 0.03 Tb/s (30 Gb/s) each, feeding national astronomy centers.

8 New thinking changes the language of interaction.
Infrastructure: provides a best-effort IP dialtone; average end-to-end performance, packet loss is fine; "How much bandwidth do you need? (1G/10G/100G)"; "Ping works, you are all set, go away."
Instrument: adapts to the requirements of the experiment, the science, and the end-to-end flow; highly calibrated, zero packet loss end-to-end; "What's your sustained end-to-end performance in bits/sec?"; "Can I get the same performance anytime?"; tuned to meet the application's workflow needs, across network domains.
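
The "zero packet loss" requirement is not pedantry: for a single TCP stream, the classic Mathis et al. (1997) throughput bound shows the loss rate dominating achievable speed on long paths. A minimal sketch with illustrative path parameters:

```python
import math

def mathis_throughput_bps(mss_bytes, rtt_s, loss_rate):
    """Upper bound on single-stream TCP throughput (Mathis et al., 1997):
    rate <= (MSS / RTT) * (C / sqrt(p)), with C ~ 1.22 for typical TCP."""
    return (mss_bytes * 8 / rtt_s) * (1.22 / math.sqrt(loss_rate))

# Illustrative cross-country path: 1460-byte MSS, 80 ms RTT.
for p in (1e-3, 1e-5, 1e-7):
    gbps = mathis_throughput_bps(1460, 0.080, p) / 1e9
    print(f"loss {p:.0e}: <= {gbps:6.3f} Gb/s")
# Even a loss rate of 1e-7 caps a single stream near 0.5 Gb/s here,
# far below 10G/100G line rate.
```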

9 Adjectives for the Network as an Instrument (NaaI): End-to-End, Programmable, Simple, Predictable.

10 Science DMZ: remove roadblocks to end-to-end performance. The Science DMZ is a well-configured location for high-performance, WAN-facing science services: located at or near the site perimeter on dedicated infrastructure; dedicated, high-performance data movers; highly capable network devices (wire-speed, deep queues); virtual circuit connectivity; security policy and enforcement specific to science workflows; perfSONAR.
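
One reason Science DMZ gear needs "deep queues": to keep a long path full, hosts and switches must hold data on the order of the bandwidth-delay product. A quick sketch of that sizing arithmetic, with illustrative parameters:

```python
# Bandwidth-delay product: bytes "in flight" needed to keep a path full.
def bdp_bytes(bandwidth_bps, rtt_s):
    return bandwidth_bps * rtt_s / 8

# Illustrative WAN path: 10 Gb/s with an 80 ms coast-to-coast RTT.
bdp = bdp_bytes(10e9, 0.080)
print(f"BDP: {bdp / 1e6:.0f} MB")   # 100 MB of in-flight data
# TCP windows (and switch queues) sized on this order are far beyond
# default OS settings and shallow-buffer commodity switches.
```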

11 Programmable networks (1): Service Interfaces. Intelligent network services (reservation, scheduling) exposed through a standard service interface: NSI (OGF). Slide courtesy of the ARCHSTONE project.
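
For a flavor of what a schedulable service interface exposes, here is a hypothetical reservation request in the spirit of OGF NSI; the field names, endpoint identifiers, and the `request_circuit` helper are illustrative stand-ins, not the actual NSI schema or protocol sequence.

```python
from datetime import datetime, timedelta

# Hypothetical reservation in the spirit of an NSI-style service interface.
# Field names and endpoint IDs below are illustrative, not the NSI schema.
reservation = {
    "source_stp":      "urn:ogf:network:es.net:siteA-data-mover",  # made-up endpoints
    "destination_stp": "urn:ogf:network:es.net:siteB-data-mover",
    "capacity_mbps":   5000,
    "start_time":      datetime(2012, 11, 12, 20, 0),
    "end_time":        datetime(2012, 11, 12, 20, 0) + timedelta(hours=6),
}

def request_circuit(resv):
    """Stand-in for submitting a reserve/commit/provision sequence to a
    scheduling service; here we only validate and echo the request."""
    assert resv["end_time"] > resv["start_time"] and resv["capacity_mbps"] > 0
    print(f"Requesting {resv['capacity_mbps']} Mb/s, "
          f"{resv['source_stp']} -> {resv['destination_stp']}")

request_circuit(reservation)
```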

12 Programmable networks (2): Software-Defined Networking (SDN). Flexible and programmable, with separation of flows. Demonstrated at Joint Techs 2011.
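
To illustrate "separation of flows", here is a toy OpenFlow-style match/action table in plain Python: it mimics how a controller might steer bulk science flows onto a dedicated port while everything else takes the default path. The match fields echo OpenFlow's, but this is a model of the idea, not a real controller API; the port names and the DTN subnet are made up.

```python
# Toy OpenFlow-style flow table: steer science data flows onto a
# dedicated path, leave everything else on the default port.
FLOW_TABLE = [
    # (priority, match dict, output port)
    (200, {"ip_proto": "tcp", "tcp_dst": 2811}, "port-science"),  # e.g. GridFTP control
    (200, {"ip_src_prefix": "198.128.0.0/16"},  "port-science"),  # illustrative DTN subnet
    (  0, {},                                    "port-default"), # table-miss rule
]

def output_port(packet):
    """Return the egress port of the highest-priority matching rule."""
    for _prio, match, port in sorted(FLOW_TABLE, key=lambda r: -r[0]):
        if all(packet.get(k) == v for k, v in match.items()):
            return port
    return "port-default"

print(output_port({"ip_proto": "tcp", "tcp_dst": 2811}))   # -> port-science
print(output_port({"ip_proto": "udp", "udp_dst": 53}))     # -> port-default
```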

13 Programmable networks (3): New protocols for high-speed data transfer. Eric Pouyoul, Inder Monga, Brian Tierney (ESnet), Martin Swany (Indiana), and Ezra Kissel (U. of Delaware). Bridging end-site dynamic flows with WAN dynamic tunnels: zero-configuration virtual circuits from end-host to end-host, with automated discovery of circuit end-points. SC11 demonstration: cross-country RDMA-over-Ethernet data transfers at ~10 Gbps on a 10 Gbps, 78 ms RTT link between Brookhaven (NY) and Seattle (WA), over an OSCARS/ESnet4 circuit; a fully automated, end-to-end, dynamically stitched virtual connection. CPU core utilization: 70-90% for single-stream TCP at ~80% link utilization, versus 3-4% (<4% CPU load) for RDMA, with no special host hardware other than a NIC with RoCE support. [Chart: throughput (Mb/s) vs. time (s), RDMA vs. TCP.]

14 Simple abstractions. How can storage leverage this simple network abstraction? An SDN controller programs flows across a virtual switch that spans the wide area network. SRS demo with Ciena, in Ciena's booth 2437. [Diagram: SDN controllers programming flows over a WAN-spanning virtual switch.]
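
Answering the slide's question in sketch form: a storage workflow could ask the SDN controller for a flow before starting a bulk transfer. Everything here, the controller URL, endpoint, and payload shape, is hypothetical glue meant only to show the shape of the interaction.

```python
import json
from urllib import request

# Hypothetical glue: a storage mover asks an SDN controller for a flow
# before a bulk transfer. URL, endpoint, and payload are all made up.
CONTROLLER = "http://sdn-controller.example.org:8080/flows"

def setup_flow(src_host, dst_host, gbps):
    body = json.dumps({"src": src_host, "dst": dst_host,
                       "rate_gbps": gbps}).encode()
    req = request.Request(CONTROLLER, data=body,
                          headers={"Content-Type": "application/json"})
    with request.urlopen(req) as resp:   # would fail without a real controller
        return json.load(resp)

# flow = setup_flow("storage01.siteA", "storage02.siteB", 10)
# ...start the data transfer only after the flow is in place...
```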

15 Conclusion. Moving data fast(er) is a 21st-century reality: distributed science collaborations, large instruments, cloud computing. The network is not an infrastructure but an instrument; think differently, and do not frame your expectations of the network as a mere traffic highway. Simple, programmable network abstractions with a service interface. How will storage workflows leverage that?

16 Inder Monga, fasterdata.es.net. Thank You!
