Dynamic Extension of a Virtualized Cluster by using Cloud Resources


Oliver Oberst, Thomas Hauth, David Kernert, Stephan Riedel, Günter Quast
Institut für Experimentelle Kernphysik, Karlsruhe Institute of Technology, Wolfgang-Gaede-Strasse 1, Karlsruhe
oliver.oberst@cern.ch

Abstract. The specific requirements concerning the software environment within the HEP community constrain the choice of resource providers for the outsourcing of computing infrastructure. The use of virtualization in HPC clusters and in the context of cloud resources is therefore a subject of recent developments in scientific computing. The dynamic virtualization of worker nodes in common batch systems, provided by ViBatch, serves each user with a dynamically virtualized subset of worker nodes on a local cluster. This can now be transparently extended through common open source cloud interfaces like OpenNebula or Eucalyptus, launching a subset of the virtual worker nodes within the cloud. This paper demonstrates how a dynamically virtualized computing cluster is combined with cloud resources by attaching remotely started virtual worker nodes to the local batch system.

1. Introduction
Today's HPC clusters are typically overdimensioned to cope with expected peak loads of the system. By sharing a centralized HPC cluster infrastructure among different user groups, the overheads in terms of hardware, administration effort, infrastructure and energy consumption can be minimized. In cases where a common computing environment cannot meet the software requirements of all participating user groups, virtualization can be used to supply any required operating system and software environment. The intrinsic performance loss through virtualization is negligible if specific user groups with diverging prerequisites are thereby able to use additional HPC resources in a shared computing cluster. By dynamically virtualizing the worker nodes, the computing cluster is dynamically partitioned and provides several different environments. A second area where virtualization can be adopted is the extension of local HPC resources by adding Cloud worker nodes. This extension is further eased if the common usage scenario of the HPC resource already utilizes virtual machines, which can easily be prepared for off-site use within a cloud infrastructure. Both the dynamic partitioning of a shared HPC cluster and its extension with Cloud worker nodes are summarized in the following.

2. Dynamic Virtualization of Worker Nodes
There are three possible ways to serve user groups with diverging computing infrastructure requirements, as depicted in Figure 1: either each group runs its own separate infrastructure, or a common infrastructure is shared amongst them.

Figure 1. Three typical possibilities to run an HPC infrastructure. The scenario on the top shows independent clusters maintained by the specific user groups. The second and third scenarios show shared, centralized clusters which run on the same hardware infrastructure. In these cases the cluster can either be offered with statically or dynamically partitioned sub-clusters. As seen in [1].

For the second case two further scenarios are conceivable: either statically provide cluster partitions to each group with its own environment installed, or virtualize the computing resources, which results in a dynamic partitioning of the cluster on a job-by-job basis. For the latter one would assume that the resource management system has to be aware of the usage of virtualization, and in fact systems like Condor [2] or Open Grid Scheduler [3] offer such functionality. However, we found a way to virtualize the worker nodes without a virtualization-aware batch system. By mainly using a standard feature available in most resource managers, the prologue and epilogue scripts, the virtual worker nodes can be handled as part of the actual user job. This functionality is implemented in our tool called ViBatch [4].

2.1. ViBatch
Our concept of dynamically virtualized worker nodes only requires prologue and epilogue scripting functionality within the batch system and the common virtualization API libvirt [5]. The detailed ViBatch work-flow is sketched in Figure 2 and can be described with the following steps:

(i) A user submits a job to the batch system. The user decides whether the job should run on a virtual worker node or on the native host OS by submitting to an appropriate queue which needs to be set up on the batch server. This makes it easy to mix virtual and native worker nodes on the same cluster.

Figure 2. Schematic overview of the ViBatch concept: portable to any batch system with prologue and epilogue scripting functionality, independent of the underlying hypervisor, lightweight setup, transparent to the user, allows a mixed batch system setup with native and virtual worker nodes [4].

(ii) If a virtual queue is selected, the batch system executes the prologue script at the beginning of each job.
(iii) The prologue script prepares the virtual machine image by cloning the VM from a provided template onto the local worker node hard disk.
(iv) The cloned image is modified to accept the actual user job later on. Currently, a non-password-protected, user-specific public ssh key is copied to the authorized_keys file on the VM.
(v) The virtual machine is started via the libvirt API. A proper MAC address is handed over for the virtual network interface to allow an individual network setup via DHCP.
(vi) At the end of the booting process, the VM creates a lock-file via an init script on the local or cluster file system.
(vii) The prologue script checks for this lock-file to guarantee a completely booted VM.
(viii) The actual user job is piped via ssh to the VM.
(ix) The user job is executed inside the VM.
(x) After the job has finished and the job output has been returned to the user, the epilogue script is executed.
(xi) The VM is shut down and destroyed.

A detailed view of the job hand-over to the VMs is shown in Figure 3, and a minimal code sketch of the central prologue steps follows below. Currently, ViBatch runs integrated into the production system of a shared HPC cluster located at the Karlsruhe Institute of Technology (KIT). It is shared among nine different research departments and has the following key specifications: 1600 CPU cores, 200 x 8-core Intel Xeon X5355 (VT-x, 64 bit) CPUs with 2 GB of memory per core, SUSE Linux Enterprise Server 11 SP2 as host OS (default kernel), KVM [6] hypervisor (qemu-kvm). The virtualized worker nodes are used by KIT users affiliated with the Compact Muon Solenoid (CMS) [7] experiment at the Large Hadron Collider. The specific requirements of this group are Scientific Linux CERN 5 [8] and experiment-specific software. The experiment-specific software is imported into the VMs via CernVMFS [9], which outsources the installation of new CMS experiment software releases.
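To illustrate steps (v) to (viii) of this workflow, the following is a minimal, hypothetical sketch of a prologue helper using the libvirt Python bindings. It is not the actual ViBatch code: the file paths, the domain XML and the ssh invocation are assumptions made for this example, and error handling is omitted.

```python
import os
import subprocess
import time

import libvirt  # Python bindings of the libvirt API [5]

# All paths below are invented for this example; ViBatch's real layout may differ.
OVERLAY = "/scratch/vibatch/slc5-overlay.qcow2"        # per-job clone of the VM template
LOCKFILE = "/scratch/vibatch/vm-ready.lock"            # written by an init script inside the VM
DOMAIN_XML_PATH = "/var/lib/vibatch/slc5-domain.xml"   # libvirt domain definition referencing OVERLAY


def prologue(job_script, vm_address, ssh_key):
    """Boot the per-job VM and pipe the user job into it (steps (v)-(viii))."""
    # Steps (iii) and (iv) -- creating the copy-on-write overlay and injecting the
    # user's public ssh key into the image -- are assumed to have happened already.

    # (v) boot a transient domain via the libvirt API
    conn = libvirt.open("qemu:///system")
    with open(DOMAIN_XML_PATH) as xml:
        domain = conn.createXML(xml.read(), 0)

    # (vi)/(vii) wait until the init script inside the VM has created the lock-file
    while not os.path.exists(LOCKFILE):
        time.sleep(2)

    # (viii) pipe the actual user job into the booted VM via ssh
    with open(job_script) as job:
        subprocess.check_call(
            ["ssh", "-i", ssh_key, "root@%s" % vm_address, "bash -s"],
            stdin=job,
        )

    # the epilogue would later call domain.destroy() and remove the overlay
    return domain
```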

Figure 3. Illustration of the hand-over of jobs from the batch system via the worker node to the virtual machine using ssh shells.

During the last two years several thousand jobs successfully ran via the Maui/TORQUE [10] batch system through the VM queues, as depicted in Figure 4, with the profile of typical High Energy Physics (HEP) applications comprising both CPU- and I/O-intensive job classes. As there is no para-virtualized driver available up to now, the cluster file system Lustre [11] was exported via NFS [12] from each host to its currently running VMs. As each VM image is deleted after job execution, the logs of these VMs have to be stored to enable forensics for security maintenance and debugging. This is done with a central syslog-ng [13] server that stores the VM logs; its use has to be configured within the virtual machine templates of each partition prior to deployment. The cloning and modification steps of the VM are measured to take less than two seconds. This is possible because cloning here means creating a copy-on-write overlay of the locally stored VM template on the worker node hard drive, in contrast to creating a full copy of the template for each VM instance; a short sketch of this overlay creation is given below. The templates are deployed by the ViBatch operators as needed, e.g. after applying security updates or after a change of the VM setup, through the ViBatch helper scripts. It is planned to revisit the deployment for further improvement, e.g. using peer-to-peer techniques between the worker nodes. However, for the current production operation of ViBatch, the VM template deployment has no performance impact, as the jobs only use the local VM template overlay. Moreover, the length of the prologue procedure is strongly correlated to the VM image used and mainly depends on the VM boot-up time.
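The fast cloning step can be illustrated with qemu-img: instead of copying the full template, only a thin copy-on-write overlay pointing back to the locally stored template is created. This is a hedged sketch, not ViBatch code; the paths are invented, and older qemu-img releases do not know the explicit backing-format option used here.

```python
import subprocess

TEMPLATE = "/var/lib/vibatch/templates/slc5.qcow2"    # deployed once per worker node (assumed path)
OVERLAY = "/scratch/vibatch/job-overlay.qcow2"        # hypothetical per-job overlay path

# A qcow2 overlay records only the blocks the VM writes; the unmodified template
# stays untouched, which is why "cloning" completes in well under two seconds
# regardless of the template size.
subprocess.check_call([
    "qemu-img", "create",
    "-f", "qcow2",      # format of the new overlay
    "-b", TEMPLATE,     # read-only backing file (the VM template)
    "-F", "qcow2",      # backing file format; drop this flag on very old qemu-img versions
    OVERLAY,
])
```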

Through optimizing the VMs themselves by removing unneeded services, e.g. the yum auto-update, we reach a boot time of 35 seconds, which adds up to a 40 second start time of the jobs in the virtual worker node queue for the SLC5 VMs.

Figure 4. Job success rate of one month of production usage of ViBatch.

3. Extending Batch Systems with Cloud Resources
In addition to the dynamic partitioning of the cluster using virtualization, Infrastructure as a Service (IaaS) Cloud resources can be dynamically attached to the local resource manager to extend the available farm in times of heavy load. During the last years, various IaaS Cloud providers and implementations entered the service market. One of the first to offer Cloud services was Amazon with EC2 [14]. They offer different machine configurations, so-called machine types, on a pay-per-hour basis. The software used by Amazon itself to provide and manage its Cloud services is proprietary and not available to the public. The company Eucalyptus Systems [15] fills this gap by developing an open-source Cloud Computing infrastructure software called Eucalyptus, which implements the same API as Amazon's EC2. The Cloud Computing research group at the Steinbuch Centre for Computing [16] at the Karlsruhe Institute of Technology (KIT) runs a private Cloud based on OpenNebula (ONE) [17]. At its current stage of expansion, this private Cloud can run up to several hundred single-core and multi-core virtual machines. To utilize even more resources for our HEP researchers, the Institut für Experimentelle Kernphysik (EKP) at KIT decided to evaluate and develop a dynamic batch system extension tool for our local resources, which resulted in the Cloud Meta-Scheduler ROCED (Responsive On-demand Cloud Enabled Deployment) [18, 19, 20].

3.1. ROCED
The modular design of the meta-scheduler ROCED enables the use of different combinations of local batch systems and remote Cloud interfaces. ROCED is composed of three different so-called Adapters, as depicted in Figure 5.

Figure 5. ROCED design baseline. Three individual Adapters are used to interface the local batch system and the remote Cloud software.

The three Adapters are, in detail:
Requirement Adapter: gathers information from the local resource manager and calculates the required number of Cloud worker nodes.
Site Adapter: interfaces the Cloud site; boots and stops the Cloud worker nodes.
Integration Adapter: registers newly provisioned Cloud nodes and removes them if required.

ROCED has two modes of operation, the so-called ROCED topologies. In the first topology, a remote batch server with a fixed number of remote Cloud worker nodes is connected as a slave to the local resource management system. In contrast, in the second topology, which is the one used in our current setup, the remote Cloud worker nodes are dynamically provisioned and attached to the local batch system. The plot in Figure 6 gives an impression of ROCED running in Topology 2 mode. If the job queue length exceeds a configured threshold, ROCED extends the local cluster by starting additional Cloud nodes. As soon as they are registered within the local batch system, they are filled with jobs. Once the queue length falls below the threshold, ROCED unregisters the nodes from the batch system and shuts down the Cloud nodes. ROCED is implemented in Python 2.6 and currently supports Torque and Oracle Grid Engine as batch systems, and Amazon EC2, Eucalyptus and OpenNebula as Cloud interfaces. A sketch of how such Adapters could interact is given below.
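The following Python sketch mimics one ROCED-style scaling decision driven by the queue length of a single Torque queue. All class and method names are invented for this example and are not taken from the actual ROCED code base; the qstat output parsing is deliberately simplified and may need adjustment for a given Torque version.

```python
import subprocess


class TorqueRequirementAdapter(object):
    """Hypothetical Requirement Adapter: counts queued jobs in one Torque queue."""

    def __init__(self, queue):
        self.queue = queue

    def queued_jobs(self):
        # "qstat <queue>" prints one line per job; waiting jobs carry the state flag 'Q'.
        out = subprocess.check_output(["qstat", self.queue]).decode()
        return sum(1 for line in out.splitlines() if " Q " in line)


def management_cycle(requirement, site, integration, threshold, active_nodes):
    """One scaling step: grow or shrink the pool of Cloud nodes with the queue length.

    `site` is assumed to provide boot_node()/terminate_node() and `integration`
    register()/unregister(), mirroring the Site and Integration Adapter roles
    described above.
    """
    queued = requirement.queued_jobs()

    if queued > threshold:
        # start one additional Cloud worker node and announce it to the batch system
        node_id = site.boot_node()
        integration.register(node_id)
        active_nodes.append(node_id)
    elif queued < threshold and active_nodes:
        # drain and shut down one Cloud node once the queue has emptied again
        node_id = active_nodes.pop()
        integration.unregister(node_id)
        site.terminate_node(node_id)
```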

Figure 6. ROCED Topology 2 example. As soon as the queue is filled above a certain threshold, ROCED starts additional remote Cloud worker nodes. As soon as the queue length falls below the threshold again, the remote nodes are shut down and removed. Taken from [20].

3.2. The ROCED Workflow
ROCED runs in management cycles of configurable length. The ROCED workflow is separated into the following steps:
(i) Queue monitoring: within each cycle ROCED first gathers the current queue lengths of one or more batch servers and their monitored queues.
(ii) Boot VM: the ROCED Broker then decides how many VMs are required according to the queue lengths. The Site Adapter decides which Cloud provider to contact, taking the current Cloud resource prices into account, and the VMs are started accordingly.
(iii) Add node: the fully booted Cloud VMs are added to the local batch system by the Integration Adapter.
(iv) Execute job: as soon as the VMs are integrated into the local batch system, jobs are started on the free Cloud worker nodes.
(v) Remove and shutdown: if there are no further submitted jobs in the batch system and the queues drain, the Cloud nodes are removed from the batch system and shut down.

To enable flawless management and operation of the remote Cloud worker nodes, ROCED utilizes a strictly linear state machine, as sketched in Figure 7. For each lifetime step of a VM a distinct Adapter is responsible.

4. The Fusion of ViBatch and ROCED
The combination of both tools, ViBatch and ROCED, leads to a dynamically scalable virtual cluster. This combination is currently being tested in preparation for its production usage at the Institut für Experimentelle Kernphysik (IEKP) at KIT. Figure 8 depicts the current design. ViBatch manages the dynamic virtualization of the IC1 cluster at the SCC (Campus South) with SLC5 VM nodes, whereas ROCED attaches SLC5 Cloud VMs from the private ONE campus cloud at the SCC (Campus North). As already mentioned, the Lustre cluster file system is exported to the local VMs via NFS servers running on the hardware nodes. With this technique we can also provide access to Lustre for the remote Cloud VMs, as the remote Cloud site is attached via a powerful 10 GBit network link between the two locations (Campus North and Campus South) of KIT; a sketch of this export mechanism is given below.
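The NFS-based access to Lustre can be sketched as follows: the hardware node re-exports its Lustre mount point over NFS to the address of its VM (or, across the site link, to a remote Cloud VM), which then mounts it like an ordinary NFS share. This is a hedged illustration; the mount point, address and export options are assumptions, not the actual IC1 configuration, and re-exporting a network file system may require additional export options such as an explicit fsid.

```python
import subprocess

LUSTRE_MOUNT = "/lustre"          # assumed Lustre mount point on the hardware node
VM_ADDRESS = "192.168.122.10"     # assumed address of the (local or remote) VM


def export_lustre_to_vm(vm_address, lustre_mount=LUSTRE_MOUNT):
    """Append an export entry for a single VM and reload the NFS export table."""
    entry = "%s %s(rw,no_root_squash,sync)\n" % (lustre_mount, vm_address)
    with open("/etc/exports", "a") as exports:
        exports.write(entry)
    # exportfs -r re-reads /etc/exports so the running NFS server picks up the new entry
    subprocess.check_call(["exportfs", "-r"])


if __name__ == "__main__":
    export_lustre_to_vm(VM_ADDRESS)
    # Inside the VM, the share would then be mounted e.g. with:
    #   mount -t nfs <hardware-node>:/lustre /lustre
```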

Figure 7. The ROCED state machine. For each of the eight possible Cloud node states a distinct Adapter takes care of the management.

Within the current test setup a dedicated Cloud-enabled queue is prepared within Torque. ROCED monitors only this queue and provides the required ONE resources. In a production scenario, all virtual SLC5 queues will be added to the ROCED configuration to obtain a fully Cloud-extended setup.

5. General Performance Considerations
As the additional layer of virtualization has an impact on the performance of the executed applications, we investigated the performance of our production system with respect to typical HEP applications. Figure 9 shows that purely CPU-intensive jobs lose 12% performance, whereas Monte Carlo simulations of High Energy Physics processes lose 17% owing to their increased I/O. In the case of data analysis the HEP users are bound to their experiment software. Within the CMS experiment, the software framework and therefore the analysis results are only validated for SLC operating systems. The host OS SLES 11 SP2 on the IC1 cluster is fixed, as it is a compromise between the nine shareholder institutes of the IC1 cluster. Given this, the benefit of being able to run jobs on this shared cluster prevails over any concerns about losing a few percent in performance through virtualization.

6. Conclusion, Outlook and Future Work
The dynamic partitioning of a cluster with ViBatch has proven its stability and performance over the last two years of production usage at KIT. Local and interactive clusters still play a major role in the analysis workflow of today's HEP experiments. They are mainly used as development areas and final analysis resources due to the fast turnaround on interactive machines. Therefore, the technique of using a meta-scheduler like ROCED to dynamically add transparent Cloud resources is of great importance when trying to intercept peak load times of the local computing infrastructure.

Figure 8. ViBatch + ROCED extending the IC1 cluster.

Figure 9. Performance benchmarks using KVM virtualization for HEP-specific Monte Carlo simulations (binary compatible to the host OS) and a CPU benchmark.

The fusion of ViBatch and ROCED will continue by further merging both tools and testing their scalability and performance. Whereas the current test environment is mainly set up by hand, future development will unify the virtual machine setup and deployment as well as the general configuration setup.

7. Acknowledgments
We thank the staff of the Steinbuch Centre for Computing who were responsible for the general setup of the IC1 cluster and the ONE private Cloud. We wish to acknowledge the financial support of the Bundesministerium für Bildung und Forschung (BMBF).

References
[1] Volker Büge, Hermann Hessling, Yves Kemp, Marcel Kunze, Oliver Oberst, Günter Quast, Armin Scheurer, and Owen Synge. Integration of virtualized worker nodes in standard batch systems. Journal of Physics: Conference Series, 219(5):052010, 2010.
[2] Condor - High Throughput Computing.
[3] Oracle Grid Engine.
[4] A Scheurer, O Oberst, V Büge, G Quast, and M Kunze. Virtualized batch worker nodes: conception and integration in HPC environments. Journal of Physics: Conference Series, 331(6):062043, 2011.
[5] Libvirt Virtualization API.
[6] KVM Virtualisation.
[7] CMS Collaboration. The CMS experiment at the CERN LHC. JINST, 3:S08004, 2008.
[8] Scientific Linux Homepage.
[9] CernVM File System.
[10] The MAUI Scheduler.
[11] Lustre File System.
[12] S. Shepler, B. Callaghan, D. Robinson, R. Thurlow, C. Beame, M. Eisler, and D. Noveck. Network File System (NFS) version 4 Protocol. RFC 3530 (Proposed Standard), April 2003.
[13] syslog-ng log manager.
[14] Amazon Elastic Compute Cloud.
[15] Eucalyptus Systems.
[16] Steinbuch Centre for Computing.
[17] The OpenNebula Project.
[18] S. Riedel. Einbindung von Cloud-Ressourcen in Workflows der Teilchenphysik und Messung des Underlying Event in Proton-Proton-Kollisionen am LHC. IEKP-KA/, Institut für Experimentelle Kernphysik, Karlsruhe Institute of Technology.
[19] T. Hauth. Dynamische Erweiterung von Batchsystemen mit Cloud Ressourcen und Messung der Jetenergieskala des CMS Detektors. IEKP-KA/, Institut für Experimentelle Kernphysik, Karlsruhe Institute of Technology.
[20] T Hauth, G Quast, M Kunze, V Büge, A Scheurer, and C Baun. Dynamic extensions of batch systems with cloud resources. Journal of Physics: Conference Series, 331(6):062034, 2011.
