A ground-up approach to High-Throughput Cloud Computing in High-Energy Physics


A ground-up approach to High-Throughput Cloud Computing in High-Energy Physics
Dario Berzano, Supervisors: M. Masera, G. Ganis, S. Bagnasco
Università di Torino - Doctoral School of Sciences and Innovative Technologies
Ph.D. Defense - Torino, 16.04.2014

Virtualization and Cloud Computing in High-Energy Physics

Institutions and projects involved: INFN Torino (Centro di Calcolo), Università di Torino, CERN PH-SFT (PROOF and CernVM), and the ALICE Experiment at the LHC.

Computing in High-Energy Physics
From raw detector hits to formats suitable for repeated user analysis: Raw (collected), ESD (reconstructed), AOD (filtered).
Massive data: 177 PB in total at LS1 (ATLAS 55 PB, CMS 85 PB, ALICE 30 PB, LHCb 7 PB). Highest trigger rate: LHCb (1 MHz trigger). Largest event size: ALICE (6 MB in Pb-Pb).
High Throughput Computing: network and storage set our limit; events are independent, so parallelism is trivial.

The Worldwide LHC Computing Grid
The Grid: many geographically distributed computing sites with a tiered structure: Tier-0 (CERN only), Tier-1 (simulation + reconstruction), Tier-2 (user analysis), Tier-3 (local access); the main data flow follows this hierarchy.
Each site provides computing and storage. Data is replicated for faster access and backup.
Tier boundaries are nowadays loose: better networks and higher CPU density mean that in practice all tiers talk to each other.
Scale: 10 000 users, 50 000 000 jobs, 340 000 cores, 180 PB of storage.

Virtualization and Cloud Computing
Virtualization: hardware resources do not correspond to the exposed resources.
- Partitioning resources: export small virtual disks or CPUs not corresponding to actual hardware resources (e.g. Dropbox).
- Emulating resources: reprocess data from old experiments by emulating old hardware (Long Term Data Preservation).
Cloud Computing: leverage virtualization to turn computing resources into services.
- Software as a Service: software is not installed locally but runs somewhere and exposes a remote access interface (e.g. Gmail).
- Infrastructure as a Service: virtual machines run somewhere and can be accessed as if they were physical nodes.

Administrative domains in the cloud
- Infrastructure administrators of distributed and independent clouds manage the hardware: they replace broken disks, monitor resource usage, and coordinate local and remote users.
- Virtual infrastructure administrators configure virtual machines and do not care about the underlying hardware.
- Services users use the services just like before, completely unaware of virtualization.

Benefits of cloud computing
- Ensures a consistent environment for your software.
- Clear separation of administrative domains.
- Supports different use cases on the same hardware infrastructure, with resources rapidly moved between them: multi-tenancy.
- Opportunistic exploitation of otherwise unused resources: high-level trigger farms outside of data taking are a good example.
Cloud computing aims to facilitate access to computing resources.

Drawbacks of cloud computing
- Virtualization exposes to the VMs the lowest common denominator of hardware features (such as CPU specs): no architecture-specific optimization is possible.
- However, the performance loss is negligible in most cases: near zero on CPU-bound tasks, while Grid jobs are mostly slowed down by remote I/O anyway.
- Virtualization is appropriate for some use cases (Grid-like) and not suitable for others (real-time applications, triggers, etc.).
For Grid-like applications the cloud has more benefits than drawbacks.

Building a private cloud from a Grid computing center

INFN Torino's computing center
Fact sheet: gross storage 1 000 TB, computing 14 kHS06, job capacity 1 100 cores, network 1/10 GbE. Classified as a Grid Tier-2.
[Plot: evolution of Torino's computing capacity, CPU (kHS06) and raw disk (TB), Sep 2006 to Aug 2013; cloud from Dec 2011.]
Main customer: the ALICE experiment (> 90% of jobs).
Since Dec 2011 all new hardware is configured for the cloud.

The orchestrator: OpenNebula
- Controls and monitors hypervisors and other virtual resources.
- Manages the lifecycle of virtual machines and virtual farms.
- Repository of base images and marketplace for external images.
- Web interface with virtual VM consoles.
One of the first tools available, very robust, open source, and easily customizable via its API.

Design of our cloud architecture
Service hypervisors: high availability for critical services. VMs run from a shared disk (not suitable for high I/O); live migration moves running VMs with zero service interruption (e.g. the local Grid head node).
Working-class hypervisors: high-performance applications. VMs run from the local disk, optimized for high I/O; failures are acceptable since VMs are expendable and jobs are resubmitted (e.g. Grid worker nodes).
Shared storage for services: 2 redundant GlusterFS servers with self-recovery upon disk failures.
Constraints: integration with the non-cloud part and progressive migration to the cloud.

Techniques for fast deployment
Base VM image with the OS (the root disk): QCoW2, so the image grows on use. Writes are slow, so the ephemeral disk is used for that. Popular images (e.g. Grid nodes) are cached and booted as snapshots.
Images cache: images are deleted/created in OpenNebula as usual, with a single sync command and fast torrent-like synchronization where all nodes are seeds.
Extra space, the ephemeral storage: a raw disk (no QCoW2) preallocated with fallocate in O(1). Standard ext3/4 filesystem creation is O(size), so XFS or lazy ext4 initialization is used instead: O(1). (See the sketch below.)
Deployment times: 1 node in < 15 s, 40 nodes in < 2 min.
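A minimal sketch of the O(1) ephemeral-disk preparation described above, assuming a Linux host with fallocate support and mkfs.xfs installed; the path and size are illustrative, not the actual hypervisor scripts:

```python
#!/usr/bin/env python
# Sketch: create a raw ephemeral disk in O(1) instead of O(size).
import os
import subprocess

def make_ephemeral_disk(path="/var/lib/one/datastores/ephemeral.img",
                        size_gb=100):
    # Preallocate the raw image without writing zeros: O(1) on ext4/XFS hosts.
    fd = os.open(path, os.O_CREAT | os.O_WRONLY, 0o600)
    try:
        os.posix_fallocate(fd, 0, size_gb * 1024**3)
    finally:
        os.close(fd)
    # XFS creation does not touch every block, so it is also roughly O(1);
    # a lazy-init ext4 (mkfs.ext4 -E lazy_itable_init=1) behaves similarly.
    subprocess.check_call(["mkfs.xfs", "-f", path])

if __name__ == "__main__":
    make_ephemeral_disk()
```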

Multiple tenants and the Grid
The Grid is just a special tenant of our private cloud. Objective: computing resources are never wasted; the Grid always needs resources and relinquishes them in favor of others.
Draining Grid virtual machines: a command sets VMs to drain mode, no new jobs are accepted, and OpenNebula shuts the VM off when all jobs are done. It may take hours for our Grid VMs to drain: out of 1 000 jobs, roughly 75 finish per hour, and our Grid VMs currently have 6 cores.
A little free space is kept for small use cases (~50 slots); larger use cases should book resources the evening before.

Sandboxed virtual farms
A new resource provisioning model: each tenant has a resource quota, base images for various OSes are provided, and each tenant gets a sandboxed private network, isolated at the MAC address level via ebtables. Users can map a public Elastic IP to any VM.
Access is via standard Amazon EC2 commands, clearer to use than OpenNebula's native ones (see the sketch below). EC2 clients are ready to use on public login nodes.
Under the hood: automated creation of the sandboxed environment (user, quota, network); networking isolation and Elastic IPs are implemented via Virtual Routers, automatically launched when the sandbox is created. If we ever change from OpenNebula to other tools, users keep the same commands.
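As an illustration of the EC2-style workflow, a minimal sketch using boto3 against an EC2-compatible endpoint; the endpoint URL, image ID, credentials and instance type are placeholders rather than the actual Torino configuration, and the original setup used the EC2 command-line clients rather than this library:

```python
#!/usr/bin/env python
# Sketch: launch a VM and attach an Elastic IP through an EC2-compatible API.
import boto3

# Hypothetical endpoint exposed by the private cloud.
ec2 = boto3.client(
    "ec2",
    endpoint_url="https://cloud.example.org:8443/ec2",  # placeholder
    aws_access_key_id="TENANT_KEY",
    aws_secret_access_key="TENANT_SECRET",
    region_name="torino",  # usually ignored by private endpoints
)

# Start one VM from a base image provided by the site (placeholder IDs).
resp = ec2.run_instances(ImageId="ami-cernvm3", InstanceType="m1.small",
                         MinCount=1, MaxCount=1)
instance_id = resp["Instances"][0]["InstanceId"]

# Map a public Elastic IP onto the VM inside the sandboxed private network.
addr = ec2.allocate_address()
ec2.associate_address(InstanceId=instance_id, PublicIp=addr["PublicIp"])
print("Instance", instance_id, "reachable at", addr["PublicIp"])
```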

Virtual router: OpenWRT
Each sandboxed network is managed by a Virtual Router: a customized OpenWRT Linux distribution (born for domestic routers) with a web management interface. It is tiny (1 CPU, < 200 MB RAM) and runs on the Service Hypervisors.
It has both a public and a private IP and provides DNS, DHCP, NAT and firewall; the Elastic IP functionality is obtained via port forwarding (see the sketch below). It is invisible and inaccessible to users: Network as a Service.
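To show the forwarding idea behind the Elastic IPs: the actual virtual router is a customized OpenWRT image, so this is only an illustration of the kind of NAT rules involved, with made-up addresses:

```python
#!/usr/bin/env python
# Sketch: map a public "Elastic IP" onto a VM's private address with NAT rules,
# the way a router would; to be run on the router, requires iptables.
import subprocess

def map_elastic_ip(public_ip, private_ip):
    # Incoming traffic for the public IP is rewritten to the VM's private IP...
    subprocess.check_call([
        "iptables", "-t", "nat", "-A", "PREROUTING",
        "-d", public_ip, "-j", "DNAT", "--to-destination", private_ip])
    # ...and traffic originating from the VM is rewritten back to the public IP.
    subprocess.check_call([
        "iptables", "-t", "nat", "-A", "POSTROUTING",
        "-s", private_ip, "-j", "SNAT", "--to-source", public_ip])

if __name__ == "__main__":
    map_elastic_ip("198.51.100.10", "172.16.0.42")  # made-up addresses
```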

From Grid to clouds
By exposing a standard API (EC2), Torino's private cloud is prepared for a future of interoperating small clouds. Fully configured VMs can be submitted both at user and experiment level: users can keep using the same workflow (e.g. Grid submission) and in addition gain the possibility of launching VMs directly.
Requirements on local cloud sites are very generic, so experiments can centrally ensure environment consistency in a non-invasive manner for the local sites. Clouds provide genuine abstraction of resources (CPU, RAM, disk, ...).

Running elastic applications: the Virtual Analysis Facility

Cloud-awareness
Clouds can be a troubled environment: resources are diverse (like the Grid, but at the virtual machine level) and virtual machines are volatile, possibly appearing and disappearing without notice.
Building a cloud-aware application for HEP means: scale promptly when resources vary, no prior assignment of data to the workers, deal smoothly with crashes, automatic failover and clear recovery procedures.
The usual Grid workflow, with static job pre-splitting, is not cloud-aware.

PROOF is cloud-aware
PROOF, the Parallel ROOT Facility, is based on unique advanced features of ROOT: event-based parallelism, automatic merging and display of results. It runs on batch systems and on the Grid with PROOF on Demand.
PROOF is interactive: constant control and feedback on the attached resources, data is not preassigned to the workers (pull scheduler), and new workers can be dynamically attached to a running process.
Interactivity is what makes PROOF cloud-aware.

PROOF dynamic scheduling
Adaptive workload via a very granular pull scheduler: each worker asks the master's packet generator for the next packet, processes it, and reports back when ready, so the workload distribution adapts to nonuniform worker speeds (a sketch of the idea follows below).
[Plots: the number of packets per worker varies considerably across workers, while completion is uniform: all workers stop within ~20 s of each other, with a query processing time of mean 2287 s and RMS 16.61 s.]
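To illustrate the pull-scheduling idea (not PROOF's actual implementation, which lives in its packetizer classes), a minimal sketch where workers of different speeds request packets from a shared generator, so fast workers process more packets and everyone finishes at roughly the same time:

```python
#!/usr/bin/env python
# Sketch of a pull scheduler: workers ask for the next packet when ready.
import queue
import random
import threading
import time

packets = queue.Queue()
for i in range(200):                 # 200 packets of work (illustrative)
    packets.put(i)

processed = {}

def worker(name, speed):
    count = 0
    while True:
        try:
            packets.get_nowait()     # "get next packet"
        except queue.Empty:
            break                    # no more work: stop
        time.sleep(speed * random.uniform(0.8, 1.2))  # "process"
        count += 1
    processed[name] = count

threads = [threading.Thread(target=worker, args=("w%d" % i, 0.001 * (i + 1)))
           for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Slower workers end up with fewer packets, but completion time is uniform.
print(processed)
```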

PROOF dynamic workers
User workflow: the user queues N workers, waits until one worker is up, launches the analysis, and further workers gradually join as they become available (see the sketch below).
Under the hood: an initially available bulk of workers is initialized and registered with the master; new workers auto-register later, their initialization is deferred, and they then join the running process.
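A sketch of the user-side workflow with PROOF on Demand and PyROOT, assuming PoD and ROOT are set up in the environment; the pod-info flag used to obtain the connection string is taken from the PoD documentation and should be treated as an assumption:

```python
#!/usr/bin/env python
# Sketch: request PoD workers and attach PROOF to them from PyROOT.
import subprocess
import ROOT

subprocess.check_call(["pod-server", "start"])
# Ask the underlying batch system (HTCondor here) for 20 workers;
# with dynamic workers the analysis can start as soon as one is up.
subprocess.check_call(["pod-submit", "-r", "condor", "-n", "20"])

# pod-info prints the PROOF connection string of the PoD master
# (flag assumed from the PoD documentation).
conn = subprocess.check_output(["pod-info", "-c"]).strip().decode()
proof = ROOT.TProof.Open(conn)
print("Attached workers:", proof.GetParallel())
```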

Scheduling PROOF workers on the Grid
Rationale: show how resources ramp up on Grid sites according to their size, and compare the time-to-results of the same analysis with PROOF and with the Grid.
Benchmark conditions: ATLAS Italian Grid sites plus CERN (Tier-0: CERN; Tier-1: CNAF; Tier-2: ROMA1, NAPOLI, MILANO), 100 jobs queued. The obtained jobs can be used either as PROOF workers (pull scheduling: dynamic workers and workload) or as Grid jobs (push scheduling: wait for the last job to complete).
[Plot: number of available workers vs. time over ~3500 s; large sites give resources more rapidly.]

PROOF vs. Grid scheduling
Time to results: the effective time to wait before having the results. Serialized time: the time a job would require if run on a single core. For the batch jobs the number of jobs is chosen optimally.
[Plot: actual time to results vs. serialized required process time, for Grid batch jobs (ideal number of workers) and for PROOF with pull scheduling and dynamic workers.]
Speedup with PROOF: 28% on a 12-hour job, 18% on a 10-day job. These are analytical results and represent the upper speedup limit.
PROOF uses the same resources in a more efficient way: the Grid must wait for the last job to finish before collecting results, while with PROOF the workload assignment is independent of the number of jobs (a simple model is sketched below).
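One way to see why the pull approach wins, as an illustration rather than the analytical model actually used in the thesis: assume workers become available one by one and compare when a push split finishes (the last pre-split job is bounded by the latest worker) with when a pull workload finishes (work drains as soon as any worker is free):

```python
#!/usr/bin/env python
# Sketch: time-to-results for push (pre-split jobs) vs pull (shared workload),
# given the times at which N workers become available. Illustrative model only.
def push_time(arrivals, serialized_time):
    # Each of the N jobs gets an equal slice and starts when its worker arrives.
    chunk = serialized_time / len(arrivals)
    return max(t + chunk for t in arrivals)

def pull_time(arrivals, serialized_time):
    # Work is consumed continuously by every worker already available:
    # advance in small steps until the whole serialized workload is done.
    t, done, dt = 0.0, 0.0, 1.0
    while done < serialized_time:
        active = sum(1 for a in arrivals if a <= t)
        done += active * dt
        t += dt
    return t

# Example: 100 workers joining one every 30 s, 12 h of serialized work.
arrivals = [30.0 * i for i in range(100)]
work = 12 * 3600.0
print("push:", push_time(arrivals, work), "s")
print("pull:", pull_time(arrivals, work), "s")
```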

The Virtual Analysis Facility
PROOF+PoD, CernVM, HTCondor, elastiq.
What is the VAF? A cluster of CernVM virtual machines (one head node, many workers) running the HTCondor job scheduler, capable of growing and shrinking based on usage thanks to elastiq. It is configured via a web interface (cernvm-online.cern.ch), the entire cluster is launched with a single command, and the user interacts only by submitting jobs.
Elastic Cluster as a Service: elasticity is embedded, no external tools are needed. With PoD and dynamic workers, PROOF runs on top of it as a special case.

The VAF is cloud-aware
The VAF leverages the CernVM ecosystem and HTCondor:
- CernVM-FS: all experiment software is downloaded on demand, with aggressive caching through HTTP proxies.
- CernVM 3: a tiny (< 20 MB) virtual machine image, immediate to deploy: the OS comes on demand with the root filesystem served from CernVM-FS.
- CernVM Online: web interface for VM and cluster creation, and secure repository of the created configurations.
- HTCondor: a batch system with great scaling capabilities, whose worker nodes self-register to the head node.

How elasticity works: elastiq
elastiq is a Python application that monitors the batch system's queue to make it elastic: jobs waiting too long trigger a scale-up (new VMs are started through a cloud controller exposing the EC2 API), while idle VMs are shut down (a simplified sketch follows below). Minimum and maximum quotas of VMs are supported: you deploy only the master node, and the minimum quota immediately launches VMs automatically.
Integrated in CernVM 3. Source: github.com/dberzano/elastiq
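A heavily simplified sketch of the scale-up/scale-down loop described above, not the actual elastiq code (which lives at the repository linked on the slide); the endpoint, credentials and image ID are placeholders for whatever the site exposes:

```python
#!/usr/bin/env python
# Sketch: make a batch queue elastic by watching waiting jobs and running VMs.
import subprocess
import time
import boto3

MIN_VMS, MAX_VMS = 1, 20

ec2 = boto3.client("ec2", endpoint_url="https://cloud.example.org:8443/ec2",
                   aws_access_key_id="KEY", aws_secret_access_key="SECRET",
                   region_name="site")  # placeholders

def waiting_jobs():
    # Count idle HTCondor jobs (JobStatus == 1 means "Idle").
    out = subprocess.check_output(
        ["condor_q", "-constraint", "JobStatus == 1",
         "-format", "%d\n", "ClusterId"])
    return len(out.splitlines())

def running_vms():
    resp = ec2.describe_instances(
        Filters=[{"Name": "instance-state-name", "Values": ["running"]}])
    return [i["InstanceId"] for r in resp["Reservations"] for i in r["Instances"]]

while True:
    vms = running_vms()
    if waiting_jobs() > 0 and len(vms) < MAX_VMS:
        # Jobs are waiting: start one more worker VM (image ID is a placeholder).
        ec2.run_instances(ImageId="ami-cernvm3", InstanceType="m1.large",
                          MinCount=1, MaxCount=1)
    elif waiting_jobs() == 0 and len(vms) > MIN_VMS:
        # No work left: retire one VM (the real tool only retires idle workers).
        ec2.terminate_instances(InstanceIds=[vms[-1]])
    time.sleep(60)
```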

Create a VAF: CernVM Online
A cluster in four steps: configure the head node, configure the worker nodes, put them together in a cluster, then deploy the cluster by clicking the deploy button and copying and pasting the generated command.
Stable and in production: cernvm-online.cern.ch

µCernVM+PROOF startup latency
We measured the delay before requested resources become available. Target clouds: a medium one (OpenNebula at INFN Torino) and a large one (OpenStack at CERN, Agile).
Test conditions: µCernVM uses an HTTP caching proxy, precaching is done via a dummy boot, and the µCernVM image is < 20 MB, so the image transfer time is negligible. VMs were deployed only when resources were available, ruling out delays and errors due to lack of resources.
What contributes to latency: µCernVM configuration occurring at boot, first-time software download from CernVM-FS, HTCondor node registration, and PROOF+PoD reaction time.

µCernVM+PROOF startup latency
Measured as the time elapsed between the PoD workers request and their availability (pod-info -l); results are the average over 10 successfully deployed VMs.
CERN OpenStack: 3:57 minutes. Torino OpenNebula: 4:36 minutes. A maximum latency of 4:36 minutes is perfectly acceptable.

Data access for Torino's VAF
[Plot: concurrent read accesses with PROOF from GlusterFS, cumulative throughput (MB/s) vs. number of PROOF workers, from 24 to 132.]
Up to a cumulative 600 MB/s serving 84 parallel workers; the limit is the workers' network (1 GbE).
Data is replicated on a dedicated GlusterFS storage, currently 51 TB and 99% full. A comparable speed is obtained using the Grid storage instead of the dedicated one, which is more sustainable.

Using the Virtual Analysis Facility
The elastic, on-demand Virtual Analysis Facility is multipurpose: it runs every task that can be submitted to a queue (not only PROOF), it runs on every cloud, and it respects the environment in that unused resources are properly disposed of.
Where can I run it? On any private cloud (e.g. Torino's cloud or CERN Agile) or on commercial clouds you pay for (e.g. Amazon Elastic Compute Cloud).
Usage examples: medical imaging, all types of Monte Carlo, Quality Assurance jobs, neuroscience (e.g. human brain simulations).
It can be used now: the VAF is production grade and cloud resources are widespread.

Outcomes

Private cloud at INFN Torino
Complete redefinition of the provisioning model, from Grid to cloud, with progressive migration of Grid nodes and services: all new computing resources are now cloudified.
Small groups can finally get computing power without even accessing the computing center and without leaving resources unused.
This work pioneered the setup of a production-grade private cloud using industry-standard tools at INFN.

The ALICE Experiment
ALICE Analysis Facilities (static PROOF deployments): contributions to the definition of the data access model, and development of a dataset stager daemon, now part of ROOT and in production since 2010.
Towards our Virtual Analysis Facility: the VAF prototype in Torino was the first ALICE PROOF deployment on virtual machines; the VAF already runs on the HLT farm to process QA batch jobs (no PROOF involved), and forthcoming PROOF deployments will start using the VAF.

PROOF and CernVM
The VAF involved joint work on PROOF and CernVM.
PROOF: development of the long-awaited dynamic workers feature, and making PROOF on Demand work on particular clouds.
CernVM: development of support for multiple contextualization sources, consolidation of HTCondor support, and contribution to and maintenance of CernVM Online.

A High-Energy Physics use case
Searching for Λ hypernuclei (hypertriton and anti-hypertriton) in heavy-ion collisions through invariant mass analysis. These are rare events, requiring a large amount of data to analyze (~20 TB for a Pb-Pb LHC 2010 run period: a storage test), many cuts plus topological cuts to improve the signal, including TPC+ITS refit (high CPU usage), and particle identification using OADB access (a stress test for the network). The output data is light: ~80 histograms.
Analysis with S. Bufalino and E. Botta. The real analysis was done entirely on the VAF with PROOF; it measured the reliability of our cloud infrastructure and served as the reference analysis while developing the VAF.

M5L-CAD: automatic lung CT analysis
Fighting lung cancer by massive pre-screening and cloud analysis (MAGIC5). Identification algorithms: RGVP, CAM, VBNA; their results and probabilities are then combined.
SaaS interface: WIDEN (Web-based image and diagnosis exchange network) manages the workflow; only a browser is required for the physician. Positives are notified via email and SMS, and the number of false positives is low.
This was the first prototype of embedded elasticity, and it became the main component of the VAF. Work in collaboration with P. Cerello.

A national research project: PRIN
A computing project involving 11 universities and INFN LNL, and different LHC experiments. The proposal has been accepted and is being funded by the Italian Ministry of Education (MIUR), effective January 2013.
One of its aspects is the adoption of the Virtual Analysis Facility as a sustainable analysis model, which is pushing other computing centers to configure their own clouds. Torino is currently providing fundamental support to these centers, thanks to the expertise in cloud computing acquired during the last three years.

Thank you!

Additional slides

Demo: elastic VAF with PROOF in Torino
Screencast: http://youtu.be/frq9cnxmcdi