The Fermilab HEPCloud Facility: Adding 60,000 Cores for Science! Burt Holzman, for the Fermilab HEPCloud Team HTCondor Week 2016 May 19, 2016


2 My last Condor Week talk

3 The Fermilab Facility: Today Traditional Facility physical resources

4 Drivers for Evolving the Facility. HEP computing needs will be many times current capacity: two new programs are coming online (DUNE, High-Luminosity LHC), while new physics search programs (Mu2e) will be operating. Scale of industry is at or above R&D. Commercial clouds are offering increased value for decreased cost compared to the past. [Chart: price of one core-year on commercial cloud over time.]

5 Drivers for Evolving the Facility: Elasticity. Usage is not steady-state. Computing schedules are driven by real-world considerations (detector, accelerator, ...) but also by ingenuity: this is research and development of cutting-edge science. [Chart: NOvA jobs in the queue at FNAL vs. facility size.]

6 Classes of Resource Providers
Grid: Virtual Organizations (VOs) of users trusted by Grid sites; VOs get allocations (pledges), and unused allocations become opportunistic resources. Model: trust federation ("things you are given").
Cloud: Community clouds use a trust federation similar to Grids; commercial clouds use a pay-as-you-go model, strongly accounted, with near-infinite capacity, elasticity, and a spot price market. Model: economic ("things you rent").
HPC: Researchers are granted access to HPC installations; peer-review committees award allocations, and the awards model is designed for individual PIs rather than large collaborations. Model: grant allocation ("things you borrow").

7 The Fermilab Facility: Today Traditional Facility physical resources

8 The Fermilab Facility: ++Today. Fermilab HEPCloud Facility adds commercial clouds: provision commercial cloud resources in addition to physically owned resources. Transparent to the user. Pilot project / R&D phase.

9 HEPCloud Collaborations. Engage in collaboration to leverage tools and experience whenever possible. HTCondor: common provisioning interface, foundation underneath glideinwms. Grid technologies: Open Science Grid, Worldwide LHC Computing Grid, preparing communities for distributed computing. CMS: collaborative knowledge and tools, cloud-capable workflows. BNL and ATLAS engaged in next HEPCloud phase.

10 HEPCloud Collaborations. Engage in collaboration to leverage tools and experience whenever possible. HTCondor: common provisioning interface, foundation underneath glideinwms and PanDA. Grid technologies: Open Science Grid, Worldwide LHC Computing Grid, preparing communities for distributed computing. CMS: collaborative knowledge and tools, cloud-capable workflows. BNL and ATLAS engaged in next HEPCloud phase.

11 Fermilab HEPCloud: expanding to the Cloud. Where to start? Market leader: Amazon Web Services (AWS). Reference herein to any specific commercial product, process, or service by trade name, trademark, manufacturer, or otherwise, does not necessarily constitute or imply its endorsement, recommendation, or favoring by the United States Government or any agency thereof.

12 AWS topology: three US data centers ("regions"): US-West-2, US-West-1, US-East-1. Each data center has 3+ different zones. Each zone has different instance types (analogous to different types of physical machines).
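For context, a minimal sketch (not part of the HEPCloud tooling) of how one could survey zones and spot prices across the three US regions named above, assuming the boto3 Python SDK and configured AWS credentials; the instance type is only an example:

```python
# Illustrative only: enumerate availability zones and sample spot prices
# in the three US regions mentioned on the slide. Assumes boto3 + credentials.
import boto3

REGIONS = ["us-east-1", "us-west-1", "us-west-2"]
INSTANCE_TYPE = "m3.2xlarge"  # example instance type

for region in REGIONS:
    ec2 = boto3.client("ec2", region_name=region)

    # Each region exposes several availability zones ("zones" on the slide).
    zones = [z["ZoneName"] for z in ec2.describe_availability_zones()["AvailabilityZones"]]
    print(region, "zones:", zones)

    # Spot prices differ per zone and per instance type.
    history = ec2.describe_spot_price_history(
        InstanceTypes=[INSTANCE_TYPE],
        ProductDescriptions=["Linux/UNIX"],
        MaxResults=len(zones),
    )
    for record in history["SpotPriceHistory"]:
        print(" ", record["AvailabilityZone"], record["InstanceType"],
              "$" + record["SpotPrice"] + "/hr")
```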

13 Pricing: using the AWS Spot Market. AWS has a fixed price per hour (rates vary by machine type). Excess capacity is released to the free ("spot") market at a fraction of the on-demand price. The end user chooses a bid price. If (market price < bid), you pay only the market price for the provisioned resource. If (market price > bid), you don't get the resource. If the price fluctuates while you are running and the market price exceeds your original bid price, you may get kicked off the node (with a 2-minute warning!).
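A toy Python sketch of the bidding rule described above (illustrative only; in reality AWS sets the market price and delivers the two-minute warning through the instance):

```python
# Toy model of the spot-market rule on this slide (not AWS code):
# pay the market price while it stays below your bid; if the market price
# rises above your bid, the node is reclaimed after a 2-minute warning.
def spot_outcome(bid, market_prices):
    if market_prices[0] > bid:
        print("No resource provisioned: market price exceeds bid.")
        return
    for hour, price in enumerate(market_prices):
        if price > bid:
            print(f"Hour {hour}: market ${price:.3f} > bid ${bid:.3f}; "
                  "2-minute warning, then the instance is reclaimed.")
            return
        print(f"Hour {hour}: running, paying market price ${price:.3f}/hr.")

# Example: bid 10 cents/hr while the market drifts upward.
spot_outcome(bid=0.10, market_prices=[0.03, 0.04, 0.06, 0.12])
```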

14 Some HEPCloud Use Cases
NOvA Processing: Processing the 2014/2015 dataset; 16 4-day campaigns over one year. Demonstrates stability, availability, cost-effectiveness. Received AWS academic grant.
Dark Energy Survey - Gravitational Waves: Search for the optical counterpart of events detected by the LIGO/VIRGO gravitational wave detectors (FNAL LDRD). Modest CPU needs, but want 5-10 hour turnaround. Burst activity driven entirely by physical phenomena (gravitational wave events are transient). Rapid provisioning to peak.
CMS Monte Carlo Simulation: Generation (and detector simulation, digitization, reconstruction) of simulated events in time for the Moriond conference; compute cores at steady-state. Demonstrates scalability. Received AWS academic grant.

15 NOvA: Neutrino Experiment. Neutrinos rarely interact with matter. When a neutrino smashes into an atom in the NOvA detector in Minnesota, it creates distinctive particle tracks. Scientists explore these particle interactions to better understand the transition of muon neutrinos into electron neutrinos. The experiment also helps answer important scientific questions about neutrino masses, neutrino oscillations, and the role neutrinos played in the early universe.

16 NOvA Use Case
NOvA Processing: Processing the 2014/2015 dataset; 16 4-day campaigns over one year. Demonstrates stability, availability, cost-effectiveness. Received AWS academic grant. (The other use cases from slide 14 are shown alongside for context.)
First proof-of-concept from Oct 2014: small run of NOvA jobs on AWS, supported by FNAL and KISTI.
[Plot: NOvA jobs running on AWS vs. time of day, roughly 2:21 through 7:50, scale to 800.]

17 NOvA Use Case running at 4k cores. Added support for general-purpose data-handling tools (SAM, IFDH, F-FTS) for AWS Storage and used them to stage both input datasets and job outputs.
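The production runs used Fermilab's SAM/IFDH/F-FTS tools; purely to illustrate the staging pattern (and not those tools' actual interfaces), a boto3 sketch of pulling an input file from AWS storage and pushing job output back, with hypothetical bucket and key names:

```python
# Illustrative staging sketch only; the real NOvA workflow used SAM, IFDH and
# F-FTS against AWS storage. Bucket and key names below are hypothetical.
import boto3

s3 = boto3.client("s3")

# Stage an input file from AWS storage to the worker's local scratch area.
s3.download_file("example-hepcloud-input", "nova/2014/run001.root", "run001.root")

# ... run the job on the local copy ...

# Stage the job output back to AWS storage for later transfer to Fermilab.
s3.upload_file("run001_reco.root", "example-hepcloud-output",
               "nova/2014/run001_reco.root")
```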

18 Some HEPCloud Use Cases
NOvA Processing: Processing the 2014/2015 dataset; 16 4-day campaigns over one year. Demonstrates stability, availability, cost-effectiveness. Received AWS academic grant.
Dark Energy Survey - Gravitational Waves: Search for the optical counterpart of events detected by the LIGO/VIRGO gravitational wave detectors (FNAL LDRD). Modest CPU needs, but want 5-10 hour turnaround. Burst activity driven entirely by physical phenomena (gravitational wave events are transient). Rapid provisioning to peak.
CMS Monte Carlo Simulation: Generation (and detector simulation, digitization, reconstruction) of simulated events in time for the Moriond conference; compute cores at steady-state. Demonstrates scalability. Received AWS academic grant.

19 CMS: Large Hadron Collider Experiment

20 Results from the CMS Use Case. All CMS simulation requests fulfilled for Moriond: 2.9 million jobs, 15.1 million wall hours; 9.5% badput (includes preemption from spot pricing); 87% CPU efficiency; 518 million events generated.
/DYJetsToLL_M-50_TuneCUETP8M1_13TeV-amcatnloFXFX-pythia8/RunIIFall15DR76-PU25nsData2015v1_76X_mcRun2_asymptotic_v12_ext4-v1/AODSIM
/DYJetsToLL_M-10to50_TuneCUETP8M1_13TeV-amcatnloFXFX-pythia8/RunIIFall15DR76-PU25nsData2015v1_76X_mcRun2_asymptotic_v12_ext3-v1/AODSIM
/TTJets_13TeV-amcatnloFXFX-pythia8/RunIIFall15DR76-PU25nsData2015v1_76X_mcRun2_asymptotic_v12_ext1-v1/AODSIM
/WJetsToLNu_TuneCUETP8M1_13TeV-amcatnloFXFX-pythia8/RunIIFall15DR76-PU25nsData2015v1_76X_mcRun2_asymptotic_v12_ext4-v1/AODSIM

21 Reaching ~60k slots on AWS with FNAL HEPCloud. [Plot: provisioned slots vs. time, with 10% test and 25% scale points annotated.]

22 HEPCloud AWS slots by Region/Zone Each color corresponds to a different region+zone

23 HEPCloud AWS slots by Region/Zone/Type Each color corresponds to a different region+zone+machine type

24 HEPCloud/AWS: 25% of CMS global capacity. [Chart legend: Production, Analysis, Reprocessing, Production on AWS via FNAL HEPCloud.]

25 Fermilab HEPCloud compared to global CMS Tier-1

26 HEPCloud: Orchestration. Monitoring and Accounting: synergies with FIFE monitoring projects, but also monitoring real-time expense; feedback loop into the Decision Engine. [Plots: cloud instances by type; cost rate in $/hr.]

27 On-premises vs. cloud cost comparison. Average cost per core-hour: on-premises resource, 0.9 cents per core-hour (includes power, cooling, staff); off-premises at AWS, 1.4 cents per core-hour (ranged up to 3 cents per core-hour at smaller scale). Benchmarks: specialized ("ttbar") benchmark focused on HEP workflows, on-premises vs. off-premises (higher = better). Raw compute performance roughly equivalent. Cloud costs larger but approaching equivalence.
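As a rough back-of-the-envelope illustration (my arithmetic, not a figure from the talk), applying the per-core-hour rates above to the 15.1 million wall hours quoted on the CMS results slide:

```python
# Rough cost illustration using numbers quoted in this talk; not an official estimate.
wall_hours = 15.1e6      # CMS Moriond campaign wall hours (slide 20)
on_prem_rate = 0.009     # $ per core-hour, on-premises
aws_rate = 0.014         # $ per core-hour, off-premises at AWS

print(f"On-premises equivalent: ${wall_hours * on_prem_rate:,.0f}")
print(f"AWS spot (average):     ${wall_hours * aws_rate:,.0f}")
```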

28 HTCondor: Critical to Success. All resources provisioned with HTCondor; first test of the EC2 GAHP at scale. Worked* with the HTCondor team to improve the EC2 GAHP: improved stability of the GAHP (fewer mallocs), improved Gridmanager response to a crashed GAHP, reduced the number of EC2 API calls and added exponential backoff (BNL request). We need an agent to speak to bulk provisioning APIs: condor_annex (see next talk). We want HTCondor to provision the big three: Amazon EC2, Google Compute Engine, Microsoft Azure. condor_annex should be part of the HTCondor ecosystem (ClassAds, integration with condor tools, run as non-privileged user). * Todd M codes, we test
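For flavor, a minimal sketch of provisioning a single EC2 spot instance through HTCondor's EC2 grid universe (the path exercised by the EC2 GAHP), using the htcondor Python bindings; the AMI, region, credential paths and spot bid are placeholders, and the same description could equally be written as a plain submit file:

```python
# Sketch of an EC2 grid-universe submission (the mechanism behind the EC2 GAHP).
# All identifiers below (AMI, keypair, credential files, bid) are placeholders.
import htcondor

sub = htcondor.Submit({
    "universe": "grid",
    "grid_resource": "ec2 https://ec2.us-west-2.amazonaws.com/",
    "executable": "worker-node",          # label only; EC2 jobs boot the AMI
    "ec2_access_key_id": "/path/to/access_key_id",
    "ec2_secret_access_key": "/path/to/secret_access_key",
    "ec2_ami_id": "ami-00000000",
    "ec2_instance_type": "m3.2xlarge",
    "ec2_spot_price": "0.10",             # spot bid, in $/hour
    "ec2_keypair_file": "/path/to/keypair.pem",
})

schedd = htcondor.Schedd()
print(schedd.submit(sub))  # queues one EC2 spot request via the Gridmanager/GAHP
```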

29 Thanks: HTCondor team; CMS and NOvA experiments; the glideinwms project; FNAL HEPCloud Leadership Team: Stu Fuess, Gabriele Garzoglio, Rob Kennedy, Steve Timm, Anthony Tiradani; Open Science Grid; Energy Sciences Network; Amazon Web Services; ATLAS/BNL for initiating work with the AWS team (and for providing some introduction in John Hover's talk yesterday!)
