Engagement With Scientific Facilities
1 Engagement With Scientific Facilities
Eli Dart, Network Engineer
ESnet Science Engagement
Lawrence Berkeley National Laboratory
Global Science Engagement Panel
Internet2 Technology Exchange
San Francisco, CA
October 18, 2017
2 Outline
- The central role of the HPC center
- Data portals past, present, and future
- Value of cyberinfrastructure in Engagement
3 NCAR RDA Data Portal
- Let's say I have a nice compute allocation at the ALCF for climate science
- Let's say I need some data from NCAR for my project
- Data sets (there are many more, but here are two): (1.5TB) (430GB)
- Download to ALCF (could also go to NCSA or NERSC or OLCF)
4 HPC Centers Matter
- Computing centers are special: centers of excellence/expertise, data repositories, computing for simulation and data analysis
- Really, though, the people + cyberinfrastructure combination is what matters: people who know how computers, networking, and storage work, with enough resources to make things happen
- Computing facilities are anchors for many collaborations
- Common pattern: a multi-institution team with access to one HPC center, sharing a data, analysis, and simulation platform
5 HPC Centers Scale Up With Data
- My laptop isn't a big computer: 4 CPU cores, 16GB RAM, 1TB of storage (before installing the OS)
- It makes no sense for large-scale data processing
- The HPC center is where large-scale work gets done: a campus computing center or a national center (NSF, DOE, etc.)
- HPC centers should be a focus for large-scale data operations
- Also, making one HPC facility work well helps a lot of scientists
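The gap is easy to put in numbers. A back-of-envelope sketch in Python; the 1 Gbps laptop uplink is an illustrative assumption, and 20 Gbps is roughly what the facility DTN clusters measured here sustain:

```python
def transfer_hours(size_tb, rate_gbps):
    """Hours to move size_tb terabytes at a sustained rate_gbps gigabits/sec."""
    bits = size_tb * 1e12 * 8              # decimal terabytes -> bits
    return bits / (rate_gbps * 1e9) / 3600

# The 4.4TB reference data set would not even fit on the laptop's 1TB disk;
# even ignoring that, moving it over a (hypothetical) 1 Gbps campus port is slow:
print(f"Laptop @ 1 Gbps:       {transfer_hours(4.4, 1.0):.1f} hours")   # ~9.8 hours
# A facility cluster sustaining ~20 Gbps finishes in under half an hour:
print(f"HPC cluster @ 20 Gbps: {transfer_hours(4.4, 20.0):.2f} hours")  # ~0.49 hours
```

The point is not the exact numbers but the order of magnitude: the laptop is the wrong tool at this scale.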
6 The Petascale DTN Project
- Built on top of the Science DMZ model
- An effort to improve data transfer performance between the DOE ASCR HPC facilities at ANL, LBNL, and ORNL, and also NCSA
- Multiple current and future science projects need to transfer data between HPC facilities
- Performance goal is 15 gigabits per second (equivalent to 1PB/week)
- Realize the performance goal for routine Globus transfers without special tuning
- Reference data set is 4.4TB of cosmology simulation data
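The 15 Gbps goal can be sanity-checked against the 1PB/week equivalence with one line of arithmetic:

```python
SECONDS_PER_WEEK = 7 * 24 * 3600   # 604800 seconds

def gbps_for_pb_per_week(petabytes=1.0):
    """Sustained rate in gigabits/sec needed to move `petabytes` PB in one week."""
    return petabytes * 1e15 * 8 / SECONDS_PER_WEEK / 1e9

print(f"1 PB/week = {gbps_for_pb_per_week():.1f} Gbps sustained")  # 13.2 Gbps
```

So a 15 Gbps target delivers 1PB/week with some headroom above the bare 13.2 Gbps minimum.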
7 DTN Cluster Performance Between HPC Facilities (June 2017, L380 data set)
[Diagram: matrix of measured Globus transfer rates between the facility DTN clusters alcf#dtn_mira (ALCF), nersc#dtn (NERSC), olcf#dtn_atlas (OLCF), and ncsa#bluewaters (NCSA); the twelve measured rates are 22.9, 25.7, 27.2, 19.4, 23.0, 20.6, 19.7, 20.2, 15.1, 11.8, 21.2, and 15.2 Gbps.]
Data set: L380
  Files:
  Directories: 211
  Other files: 0
  Total bytes: (4.4T bytes)
  Smallest file: 0 bytes (0 bytes)
  Largest file: (11G bytes)
  Size distribution:
    1-10 bytes: 7 files
    10-100 bytes: 1 file
    100-1K bytes: 59 files
    1K-10K bytes: 3170 files
    10K-100K bytes: 1560 files
    100K-1M bytes: 2817 files
    1M-10M bytes: 3901 files
    10M-100M bytes: 3800 files
    100M-1G bytes: 2295 files
    1G-10G bytes: 1647 files
    10G-100G bytes: 3 files
ESnet Science Engagement (engage@es.net), Energy Sciences Network
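Size distributions like the one above matter because many small files and a few large ones stress a transfer system differently. A minimal sketch of computing such a decade histogram from a list of file sizes (with plain numeric range labels instead of the K/M/G labels above):

```python
import math
from collections import Counter

def size_bucket(nbytes):
    """Decade bucket for a file size, e.g. 5 -> '1-10 bytes'."""
    if nbytes == 0:
        return "0 bytes"
    decade = int(math.log10(nbytes))       # 0 for 1-9 bytes, 3 for 1000-9999, ...
    return f"{10**decade}-{10**(decade + 1)} bytes"

def size_distribution(sizes):
    """Histogram of file sizes by decade, like the L380 listing."""
    return Counter(size_bucket(s) for s in sizes)

dist = size_distribution([5, 800, 4096, 2_000_000, 3_500_000, 11_000_000_000])
for bucket, count in sorted(dist.items()):
    print(f"{bucket}: {count} files")
```

Running a scan like this over a reference data set is a cheap way to see whether a workload is dominated by per-file overhead (many tiny files) or by raw bandwidth (a few multi-gigabyte files).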
8 Science Data Portals
- Large repositories of scientific data: climate data, sky surveys (astronomy, cosmology), many others
- Data search, browsing, access
- Many scientific data portals were designed 15+ years ago: a single-web-server design
- Data browse/search, data access, and user awareness all in a single system
- All the data goes through the portal server, in many cases by design (e.g. an embargo before publication, to enforce access control)
9 Legacy Portal Design
[Diagram: WAN traffic enters through the border router and firewall into the enterprise network; the browsing, query, and data paths all terminate at a single portal server (web server, search, database, authentication, data service) in front of the filesystem (data store); perfSONAR nodes sit at the border and inside.]
- Very difficult to improve performance without architectural change: the software components are all tangled together
- Difficult to put the whole portal in a Science DMZ because of security
- Even if you could put it in a DMZ, many components aren't scalable
- Can we do better?
10 Next-Generation Portal Leverages Science DMZ
[Diagram: the data transfer path now runs through a Science DMZ switch/router to dedicated data servers in the Science DMZ, with perfSONAR measurement at the border and in the DMZ; the query/browse path still goes through the firewall to the portal server (web server, search, database, authentication) in front of the filesystem (data store); data access is via APIs governed by the portal.]
11 Put The Data On Dedicated Infrastructure
- We have separated the data handling from the portal logic
- The portal is still its normal self, but enhanced: the portal GUI, database, search, etc. all function as they did before
- A query returns pointers to data objects in the Science DMZ
- The portal is now freed from ties to the data servers (run it on Amazon if you want!)
- Data handling is separate, and scalable: high-performance DTNs in the Science DMZ
- Scale as much as you need to without modifying the portal software
- Outsource data handling to computing centers: they are set up for large-scale data
- Let them handle the large-scale data, and let the portal do the orchestration of data placement
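Concretely, "pointers to data objects" means a query response carries endpoint names and paths rather than the bytes themselves. A minimal sketch of such a response (the dataset id, endpoint name, and paths below are made-up placeholders, not real Globus endpoints):

```python
import json

def make_manifest(dataset_id, endpoint, paths):
    """Pointer-style query response: no file data passes through the portal."""
    return {
        "dataset": dataset_id,
        "transfer_endpoint": endpoint,          # Globus-style endpoint on the DMZ data servers
        "files": [{"path": p} for p in paths],
    }

# Hypothetical result for a portal query; a transfer tool (e.g. Globus)
# resolves these pointers against the Science DMZ, not the portal server.
manifest = make_manifest(
    "example-climate-ds",
    "site#dtn",
    ["/data/example/file001.nc", "/data/example/file002.nc"],
)
print(json.dumps(manifest, indent=2))
```

Because the response is just metadata, the portal can run anywhere (including a commercial cloud) while the data servers scale independently.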
12 Data Portal Implications
- Portals hold a lot of valuable data: observations (sky surveys, satellite data, genomes, etc.)
- Many have been in place for years
- Most are inadequate to support large-scale analysis: legacy search/query interfaces, legacy access protocols/tools
- This is in the process of changing: the technology exists to radically improve the utility of data portals
- What should the performance expectation be? HPC facilities can do 1PB/week; what if data portals could do this too?
13 NCAR RDA Performance to DOE HPC Facilities
[Diagram: Globus transfers of a 1.5TB, 1121-file data set from the NCAR RDA endpoint (rda#datashare) to nersc#dtn (NERSC), alcf#dtn_mira (ALCF), and olcf#dtn_atlas (OLCF); the three measured rates are 13.9, 16.6, and 11.9 Gbps.]
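At those rates the whole 1.5TB data set moves in minutes; the arithmetic (treating each rate as sustained end-to-end):

```python
def transfer_minutes(size_tb, rate_gbps):
    """Minutes to move size_tb terabytes at a sustained rate_gbps gigabits/sec."""
    return size_tb * 1e12 * 8 / (rate_gbps * 1e9) / 60

# 1.5TB across 1121 files is ~1.3GB per file on average, large enough
# that per-file overhead stays small and the network rate dominates.
print(f"@ 11.9 Gbps: {transfer_minutes(1.5, 11.9):.1f} minutes")
print(f"@ 16.6 Gbps: {transfer_minutes(1.5, 16.6):.1f} minutes")
```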
14 Central Cyberinfrastructure Is Key
- Many users are concentrated at major cyberinfrastructure sites: HPC centers, gene sequencers, cryo-EM instruments, light sources, neutron sources, FELs
- Lots of leverage
- If you are engaging with a science team, it's very helpful to have access to stuff that works well: demonstration, best practice, design patterns
- Scientists can be your advocates as well as your customers: they can ask for upgrades/improvements
- Now your goals are aligned: everybody wins
15 In conclusion
- ESnet's vision: Scientific progress will be completely unconstrained by the physical location of instruments, people, computational resources, or data.
16 Links and Lists
- ESnet fasterdata knowledge base
- Science DMZ paper
- Science DMZ list: send mail with subject "subscribe esnet-sciencedmz"
- perfSONAR
- Globus
17 Thanks!
Eli Dart
Energy Sciences Network (ESnet)
Lawrence Berkeley National Laboratory