Virtualization of the ATLAS Tier-2/3 environment on the HPC cluster NEMO

1 Virtualization of the ATLAS Tier-2/3 environment on the HPC cluster NEMO
Ulrike Schnoor (CERN); Anton Gamel, Felix Bührer, Benjamin Rottler, Markus Schumacher (University of Freiburg)
February 02, 2018, Pre-GDB Meeting

2 Using HPC resources via virtualization
Resource: HPC cluster NEMO at the University of Freiburg, used to extend the local Tier-3 resources (Black Forest Grid = BFG)
Job types: currently mainly local ATLAS analysis and simulation jobs, but easily extendable to any ATLAS jobs
Setup: full virtualization of the environment, embedded into the existing OpenStack-Torque/Moab infrastructure in a way that is on demand, fully automated, and transparent for the user

3 bwForCluster HPC center NEMO
Shared by three research communities in Baden-Württemberg: Elementary Particle Physics, Neuroscience, Microsystems Engineering
752 worker nodes, each with 2 x 10 cores, 128 GB RAM, 100 Gbit/s Omni-Path, 240 GB local SSD
500 TB workspace (BeeGFS)
TOP500: ranked 214 in June 2017
In operation since July 2016
Hybrid of HPC and cloud approach: OpenStack orchestrates bare-metal jobs and virtual machines in parallel

4 Virtualization of ATLAS infrastructure on NEMO: ingredients
OpenStack: management framework allowing both virtual machines and bare-metal jobs to run on NEMO
Hypervisor: KVM
User interface: BFG login nodes
Access to CVMFS and Frontier via the BFG squid
Scheduler: Slurm (front-end) on top of Torque/Moab (back-end)
Scheduling for dynamic allocation of VMs: ROCED
VM image (SL6, CentOS7)
Access to storage: dCache client, local BeeGFS
Access to software: CVMFS client
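
The CVMFS client inside the worker-node VMs only needs a small local configuration pointing it at the experiment repositories and at the BFG squid. The snippet below is a minimal sketch; the repository list, proxy hostname, and cache size are placeholders, not the actual site configuration.

    # /etc/cvmfs/default.local -- minimal sketch, values are placeholders
    CVMFS_REPOSITORIES=atlas.cern.ch,atlas-condb.cern.ch,sft.cern.ch
    CVMFS_HTTP_PROXY="http://squid.bfg.example.org:3128"
    CVMFS_CACHE_BASE=/var/lib/cvmfs
    CVMFS_QUOTA_LIMIT=20000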

5 Virtual machine image tool chain
Requirements: Scientific Linux 6; the CernVM image uses a modified kernel and is therefore not suitable
Setup: Packer for automated image generation
Basis: SL6 ISO; output: VM template image (qcow2)
Contextualization with Puppet: install software and services (e.g. the CVMFS client), user management etc. with the BFG Puppet server, giving an identical and modularized setup
Important updates? Generate a new VM image
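
A Packer template for such a build could look roughly like the following. This is a minimal sketch assuming the qemu builder and a kickstart-driven SL6 install; ISO path, checksum, credentials, and the kickstart file are placeholders, and the actual contextualization is done afterwards against the BFG Puppet server rather than in this shell provisioner.

    {
      "builders": [{
        "type": "qemu",
        "iso_url": "file:///path/to/SL-69-x86_64-DVD.iso",
        "iso_checksum_type": "sha256",
        "iso_checksum": "PLACEHOLDER",
        "output_directory": "output-sl6",
        "format": "qcow2",
        "disk_size": 10240,
        "http_directory": "kickstart",
        "boot_command": ["<tab> text ks=http://{{ .HTTPIP }}:{{ .HTTPPort }}/sl6.cfg<enter>"],
        "ssh_username": "root",
        "ssh_password": "PLACEHOLDER",
        "shutdown_command": "shutdown -h now"
      }],
      "provisioners": [{
        "type": "shell",
        "inline": ["yum -y install puppet"]
      }]
    }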

6 Scheduling with Slurm Elastic Computing
Slurm Elastic Computing: resume and suspend machines on demand with adaptable resume/suspend functions and timeouts (a minimal configuration sketch is shown below)
Challenges: the 3-layer system of Slurm, Torque/Moab, and OpenStack allows almost no propagation of error messages; the mechanism is not intended for non-permanent resources (queue in Moab), and the timeouts are not sufficiently adaptable
Solution: an intermediate layer such as ROCED
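
In Slurm's elastic-computing (power-saving) model, the elasticity is configured in slurm.conf through resume/suspend hooks, timeouts, and CLOUD nodes. The fragment below is a minimal sketch with hypothetical script paths, node names, and sizes, not the actual NEMO configuration.

    # slurm.conf fragment -- minimal sketch, paths and node names are placeholders
    ResumeProgram=/usr/local/sbin/vm_start.sh       # requests a new VM from the back-end
    SuspendProgram=/usr/local/sbin/vm_shutdown.sh   # hands the resource back (destroys the VM)
    SuspendTime=600        # idle seconds before a node is suspended
    SuspendTimeout=120     # seconds allowed for SuspendProgram to complete
    ResumeTimeout=1800     # seconds allowed for a resumed node to boot and register
    NodeName=vm[001-100] CPUs=20 RealMemory=120000 State=CLOUD
    PartitionName=nemo Nodes=vm[001-100] Default=YES MaxTime=INFINITE State=UP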

7 ROCED: Responsive On-Demand Cloud-enabled Deployment
Tool developed by CMS colleagues in Karlsruhe (KIT): monitors demands in a batch system and dynamically manages virtual machines accordingly
Python code with a modular structure to adapt to different schedulers, VM types, clouds etc.
Integration and Requirement Adapters modified for the BFG/Slurm setup: in production
[ROCED architecture diagram:]
Requirement Adapters supply information about needed compute nodes (e.g. queue size) from HTCondor, Torque, Grid Engine, or SLURM
ROCED Core / Broker decides which machines to boot or shut down
Site Adapters boot machines on various cloud computing sites (hybrid HPC cluster, commercial providers, OpenStack)
Integration Adapters integrate booted compute nodes into the existing batch server (HTCondor, Torque, Grid Engine, SLURM)
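
The adapter/broker split can be illustrated with a short Python sketch of the monitor-decide-boot cycle. This is not ROCED's actual API; the helper names, partition name, and VM sizes are hypothetical, and the site adapter only prints what it would request.

    # Illustrative sketch of a ROCED-style adapter pattern -- not ROCED's actual API.
    import subprocess
    import time

    def slurm_pending_jobs(partition="nemo"):
        """Requirement adapter: count pending jobs in a Slurm partition (hypothetical helper)."""
        out = subprocess.run(["squeue", "-h", "-t", "PD", "-p", partition],
                             capture_output=True, text=True)
        return len(out.stdout.splitlines())

    def broker(pending, running_vms, cores_per_vm=20, max_vms=100):
        """Broker: decide how many additional VMs to request."""
        needed = -(-pending // cores_per_vm)   # ceiling division
        return max(0, min(needed - running_vms, max_vms - running_vms))

    def boot_vms(n):
        """Site adapter: request n VMs from the cloud layer (placeholder action)."""
        for _ in range(n):
            print("would request one VM from the OpenStack/Moab back-end here")

    running = 0
    while True:
        to_boot = broker(slurm_pending_jobs(), running)
        boot_vms(to_boot)
        running += to_boot
        time.sleep(60)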

8 Summary and Outlook
The Slurm Elastic Computing setup can be used, but it is very fragile and leads to many job failures
Using ROCED instead of Slurm Elastic Computing: non-elastic Slurm together with ROCED; Requirement Adapter and Integration Adapter implementations for Slurm and BFG are in place
Future possibilities: use of containers; images distributed via CVMFS instead of home-brewed images built with Packer

9 The Team
Anton Gamel, Felix Buehrer, Benjamin Rottler, Ulrike Schnoor, Markus Schumacher
Contacts in the Computing Center (HPC team): Michael Janczyk, Bernd Wiebelt, Dirk von Suchodoletz
Formerly also: Konrad Meier

10 Backup

11 The Black Forest Grid (BFG)
Tier-2 and Tier-3 site of the WLCG, in operation since 2005
CPU: 260 nodes with 4700 cores in total (HT), several generations of worker-node hardware
Storage: dCache 1.35 PB (grid), Lustre parallel storage 180 TB (local users)
Local users from physics, biodynamics, and many other groups
Future: exclusively Tier-2 and Tier-3 WLCG

12 Baden-Württemberg HPC
bwHPC-C5 project: initiative in Baden-Württemberg for a common framework for HPC resources at BW universities, co-financed by the DFG
bwForClusters, a federated approach: user groups are defined by research field, not by affiliation
Freiburg: bwForCluster for Elementary Particle Physics, Neuroscience, and Microsystems Engineering = NEMO

13 How to run ATLAS jobs on NEMO?
OS: ATLAS currently needs Scientific Linux 6; NEMO runs CentOS 7
Software: CVMFS (CernVM File System), the basis for all experiment-specific software, is not installed on NEMO
Storage: AFS is not available on NEMO
Solution: virtualize the environment; the virtual machine image and the orchestration/scheduling setup can be used by local jobs as well as by grid jobs

14 Timeouts in Slurm
The elasticity of the Slurm Elastic Computing module can be tuned with several timeout parameters
Main issue: ResumeTimeout should be long in order to cover the waiting time in the Moab queue, but should be short in order to restart quickly if a VM start fails
Other problem: VMs often stay in COMPLETING (after a job has terminated, before turning IDLE) for a long time
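
Nodes stuck in such states can be inspected, and if necessary forced back into service, with standard Slurm administration commands; the node name below is a placeholder.

    # list down/drained node reasons and jobs stuck in COMPLETING
    sinfo -R
    squeue -t COMPLETING
    # inspect one node and force it back into service (hypothetical node name)
    scontrol show node vm042
    scontrol update NodeName=vm042 State=RESUME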
