
Virtualization in a Grid Environment

Nils Dijk - nils.dijk@hva.nl
Hogeschool van Amsterdam, Instituut voor Informatica
July 8, 2010

Abstract

Date: July 8, 2010
Title: Virtualization in a Grid Environment
Author: Nils Dijk
Company: Nikhef

Problem: In the grid computing environment there is a demand, from both clients and developers, for the ability to run Virtual Machines on grid resources. While running Virtual Machines exposes new attack surface on the grid resources, it is believed to be an improvement for the grid infrastructure. Because virtualization is an upcoming technology in the form of the cloud, there is a lot to be investigated and tested before deploying it to a grid infrastructure.

Contents

1 Nikhef & Grid computing
  1.1 Nikhef
  1.2 Participating organizations
  1.3 Grid resources
  1.4 PDP group
2 Assignment
  2.1 Why Virtualization
  2.2 Things to sort out
  2.3 Things explicitly not part of this assignment
3 Requirements for Virtual Machines in existing grid infrastructure
  3.1 Authentication and Authorization
  3.2 Scheduling
  3.3 Destruction
4 Proposed design
5 Implementation
  5.1 Gathering Information
    5.1.1 Image to boot
    5.1.2 OpenNebula user
    5.1.3 Resources
    5.1.4 Network
6 Credits
  6.1 Nikhef & Grid computing
7 Sources

Chapter 1 Nikhef & Grid computing

1.1 Nikhef

Nikhef (Nationaal instituut voor subatomaire fysica) is the Dutch national institute for subatomic physics. It is a collaboration between Stichting voor Fundamenteel Onderzoek der Materie (FOM), Universiteit van Amsterdam (UvA), Vrije Universiteit Amsterdam (VU), Radboud Universiteit Nijmegen (RU) and the Universiteit Utrecht (UU). The name was originally an acronym for Nationaal Instituut voor Kernfysica en Hoge Energie-Fysica (National institute for nuclear and high energy physics). After the linear electron accelerator was closed down in 1998 the research into experimental nuclear physics wound down, but the Nikhef name has been retained up to the present day. [5]

These days Nikhef is involved in areas dealing with subatomic particles. Most employees at Nikhef are involved with physics projects, some of which, like ATLAS, ALICE and LHCb, are directly related to the Large Hadron Collider (LHC) particle accelerator at the European Organization for Nuclear Research (CERN). Among the technical departments at Nikhef are Mechanical Engineering (EA), the Mechanic Workshop (MA), Electronics Technology (ET) and Computer Technology (CT).

High energy physics experiments generate vast amounts of data, analysis of which requires equally vast amounts of computing power. In the past supercomputers were used to provide this power, but in order to perform the analysis of high-energy subatomic particle interactions required by the LHC experiments, a new method of pooling computing resources was adopted: Grid computing. The CT department provides Nikhef's computing infrastructure. The Physics Data Processing (PDP) group is an offshoot of the CT department which develops Grid infrastructure, policy and software.

Figure 1.1: A diagram showing the organizational structure of Nikhef [3]

1.2 Participating organizations

Like supercomputers, Grids attract science. This has led to a community of Grid computing users which advances the Grid computing field on an international scale. Some of the cooperating organisations within the Grid computing community are:

- BiG Grid, the Dutch e-science Grid. An example of a National Grid Initiative (NGI), of which there are many.

- The Enabling Grids for E-sciencE (EGEE) project. A leading body for NGIs. To be transformed into the European Grid Initiative (EGI).
- The LHC Computing Grid (LCG), the Grid employed by CERN to store and analyze data generated by the Large Hadron Collider (LHC). Also a member of EGEE.
- The Virtual Laboratory for e-science (VL-e). A separate entity that tries to make Grid infrastructure accessible for e-science applications in the Netherlands.

1.3 Grid resources

Here is an example of the resources potentially available on a national (BiG Grid) and international (EGEE) level. These are not static numbers, as the Grid is dynamic in nature: resources shift in and out due to maintenance or upgrades, and the Grid has a tendency to grow in computing and storage capacity. BiG Grid has between 4500 and 5000 computing cores (not including LISA, which has 3000 cores) and about 4.7 petabytes of disk storage. The capacity of available tape storage is about 3 petabytes. EGEE has roughly computing cores, 28 petabytes of disk storage and 41 petabytes of tape storage. [6]

1.4 PDP group

The Physics Data Processing (PDP) group at Nikhef is associated with BiG Grid, the LHC Computing Grid (LCG), Enabling Grids for E-sciencE (EGEE), the Virtual Laboratory for e-science (VL-e) and the (planned) European Grid Initiative (EGI). Within Nikhef, the PDP group concerns itself with policy and infrastructure decisions pertaining to authentication and authorization for international Grid systems. It facilitates the installation and maintenance of computing, storage and human resources. It provides the Dutch national academic Grid and supercomputing Certificate Authority (CA), and also delivers software such as:

- Grid middleware components (part of the gLite stack)
- Cluster management software (Quattor)

The PDP group employs Application Domain Analysts (ADAs), who try to bridge the gap between Grid technology and its users by developing software solutions and offering domain-specific knowledge to user groups.

Chapter 2 Assignment

The PDP group at Nikhef proposed an assignment involving the preparation of virtualized environments for grid jobs. This should be implemented within the Execution Environment Service (EES). This service is written at Nikhef and from the beginning it had been claimed that it would be able to produce a virtualized environment for a grid job.

2.1 Why Virtualization

Since the hype of the so-called cloud, virtualization techniques have been used to provide on-demand execution of machines configured by a user with their own software. These machines can be configured locally and, once they are ready for production, they can be run anywhere. The grid provides users with an environment to run their software, mainly for scientific purposes. Because of all the different libraries user software depends on, it is very hard to manage the software stack available on worker nodes. Some organizations have their own dedicated hardware within a site's datacenter to provide their users with the right software, but that only holds for the bigger organizations. It is believed that a lot of the software conflicts, and thus job failures, can be reduced by providing users with the ability to run their jobs within a predictable environment, which could be realized by a virtual machine. This machine is either user supplied or supplied by the organization they work for.

2.2 Things to sort out

Despite all the work done by the Virtualization Working Group, very little is known about the possibilities of starting virtual machines on grid infrastructure, or about the traceability and accountability of doing so. Since most employees have no spare time on their hands, it is desirable to put one full-time internship on it.

2.3 Things explicitly not part of this assignment

Because virtualization is an enormous field of research, I specify here some topics that may come up while reading this document but which are not part of my research. This does not mean I did not look into some of these things.

This internship is not about the performance of virtual machines versus the performance of real hardware. There are lots of discussions going on about this topic and there is still no definite answer to this question.

Hypervisor-versus-hypervisor comparisons are also not addressed within this document. I have been working with the Xen hypervisor simply because it was the one best known at Nikhef. Wherever Xen is mentioned it can also be read as KVM or VMware.

Chapter 3 Requirements for Virtual Machines in existing grid infrastructure

To provide users of the grid with the ability to deploy virtual machines on grid infrastructure while keeping risks at a minimum, the implementation has to meet multiple requirements. In this chapter I give an overview of the high-level requirements for deploying virtual machines on the grid.

3.1 Authentication and Authorization

The grid has over nine thousand users, so authentication and authorization are important aspects of keeping the grid infrastructure safe. On the batch system this authentication is done by the use of X.509 certificates, which provide a Public Key Infrastructure. Since UNIX does not provide user authentication by certificates, these users are mapped to local UNIX accounts. With this mapping it is important to register which certificate user is mapped to which UNIX account at a specific time, for forensic purposes when a user shows undesired behaviour.

Because virtual machines always run as the root user on the host, it is not possible to start the VM as a simple user process. By using OpenNebula for the deployment of virtual machines, users can deploy virtual machines without having root access themselves. OpenNebula, however, supports neither authentication nor authorization in the form of an X.509 certificate; instead it maintains its own user database, in much the same manner as, for example, MySQL does. All actions on virtual machines are run as a dedicated OpenNebula user, which is in most cases the oneadmin account. To allow a grid user to deploy virtual machines by authenticating himself with his certificate, there has to be the same kind of mapping mechanism as already exists for mapping certificate users to local UNIX accounts. This is implemented as a GridMapDir [1]; a minimal sketch of how such a lease-based mapping can work is given at the end of this chapter.

3.2 Scheduling

Because of the number of users using the grid for computing, each user has to be scheduled for resources, much the same as processes get scheduled by an operating system to share the machine's resources, e.g. the processor, memory and storage. But instead of the small amounts of time an operating system gives to a process in each scheduling round, the grid supplies the user with much longer times, up to 72 hours of continuous running. When supplying the user with the ability to run not batch jobs but virtualized machines, you still have to share the total amount of resources among multiple users. Since OpenNebula is a toolchain for providing cloud services, most of its scheduling algorithms do not provide the rules commonly used on the grid, for example fair share (a rule to suppress users with excessive job submission while other users with fewer submissions are also submitting jobs, but to let them use unused resources even if they have already passed their time quota), which is a policy used by the system administrators of the Compute Element located at Nikhef. The easiest way of scheduling virtual machines is to use the scheduler already active on the Compute Element, but since there are different schedulers in use worldwide, that is easier said than done.

3.3 Destruction

After a job is done or its walltime (the time on the clock, or wall, that the job is allowed to use the computational resources of the Worker Node it is scheduled on) has passed, the virtual machine has to be removed by shutting it down. If for some reason this is not done, some resources, e.g. memory, will still be held by the virtual machine, preventing new virtual machines from claiming them. This will result in failures when starting new virtual machines on the Worker Node. Therefore it is essential that the VM is cleaned up afterwards. Also, in the case of an administrator killing a job due to suspicious activity, it is mandatory to kill the running VM with it. Otherwise a malicious user or machine could still be running and using the infrastructure in a way it is not meant to.
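Section 3.1 relies on the GridMapDir construction to lease a finite set of accounts to certificate holders. The sketch below is a minimal illustration, in Python, of how such a lease can be recorded with nothing more than hard links: the directory holds one empty file per pool account, and a lease is a hard link named after the URL-encoded DN pointing at a free account file, so the shared inode is the mapping. The directory layout, file naming and function names are assumptions made for illustration; the real GridMapDir implementation referenced in [1] also has to guard against concurrent leases.

```python
import os
import urllib.parse

def lease_pool_account(gridmapdir: str, dn: str) -> str:
    """Map a certificate DN to a free pool account by hard-linking a
    DN-named file to the pool account file (illustrative sketch only)."""
    dn_name = urllib.parse.quote(dn, safe="")      # encode the DN into a safe file name
    dn_path = os.path.join(gridmapdir, dn_name)

    if os.path.exists(dn_path):                    # DN already leased: find its account
        dn_inode = os.stat(dn_path).st_ino
        for entry in os.listdir(gridmapdir):
            path = os.path.join(gridmapdir, entry)
            if entry.startswith("pool") and os.stat(path).st_ino == dn_inode:
                return entry
        raise RuntimeError("dangling DN link in gridmapdir")

    for entry in sorted(os.listdir(gridmapdir)):   # claim the first free pool account
        path = os.path.join(gridmapdir, entry)
        if entry.startswith("pool") and os.stat(path).st_nlink == 1:
            os.link(path, dn_path)                 # the hard link records the lease;
            return entry                           # a real implementation needs locking here
    raise RuntimeError("no free pool accounts left")

def release_pool_account(gridmapdir: str, dn: str) -> None:
    """Release a lease that has not been used for a long time."""
    os.unlink(os.path.join(gridmapdir, urllib.parse.quote(dn, safe="")))
```

Because the DN link stays in place until it is explicitly released, an administrator can always answer which account a given certificate held at a given time, which is exactly the forensic requirement described above, provided the leases are logged before they are released.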

Chapter 4 Proposed design

Since Nikhef prefers the implementation of such a system to be within their Execution Environment Service, I have looked for a design in which authorization is performed within the Argus framework of the grid, of which the EES is a part. Figure 4.1 shows the authorization and booting sequence I came up with, which was approved by the security experts at Nikhef. The list below explains in detail the interaction between all the components involved; a minimal sketch of the request handling follows after the figure.

1. The job is delivered at the Compute Element of the site.
2. The Compute Element contacts the Policy Enforcement Point with the information of the user job, which is the request for a virtual machine; this request is done in an XACML2 [4] request.
3. The Policy Enforcement Point asks the Policy Decision Point for a decision about the request, based on obligations published by the Policy Administration Point.
4. The obligations are returned to the Policy Enforcement Point to be fulfilled.
5. The Policy Enforcement Point uses the Execution Environment Service to fulfill the obligations, making the EES an obligation handler of the PEP.
6. The Execution Environment Service returns to the Policy Enforcement Point with an answer for the fulfilled obligations.
7. The Policy Enforcement Point returns positive or negative to the Compute Element.
8. On a positive answer from the Policy Enforcement Point, the Compute Element passes the job to the Local Resource Management System.
9. The Local Resource Management System schedules the job to a Worker Node with a hypervisor running on it.
10. As the job is deployed, it contacts the Authorization Framework through the Policy Enforcement Point with information about the host the job is running on and the user requesting it.
11. The Policy Enforcement Point uses the Execution Environment Service again as an obligation handler for the incoming request.
12. As the Execution Environment Service sees that the request contains a host to run a Virtual Machine on, it deploys a machine assigned to the requesting user on the specified host.
13. OpenNebula contacts the hypervisor running on the node to start the virtual machine.
14. OpenNebula returns the VM identifier (the unique number assigned to the VM by OpenNebula) to the Execution Environment Service.
15. The Execution Environment Service forwards the VM identifier to the Policy Enforcement Point by means of an obligation.
16. The Policy Enforcement Point forwards the VM identifier in the response to the request sent by the Worker Node, in the same way as the Execution Environment Service did to it.

Figure 4.1: Diagram showing the deployment sequence of Virtual Machines
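To make the interaction in the steps above more tangible, the toy handler below shows how the Execution Environment Service side of this sequence can distinguish the two requests it receives: the first request, coming from the Compute Element, only authorizes the user (steps 2 to 7), while the second request, coming from the Worker Node, carries the host name and triggers the actual deployment (steps 10 to 16). The dictionary representation, the attribute names, the deploy_vm callable, and the example DN and host name are all illustrative assumptions; the real exchange uses XACML2 requests and obligations as described above, not this simplified structure.

```python
def handle_obligation(request: dict, deploy_vm) -> dict:
    """Toy obligation handler: decide between the CE phase (authorize only)
    and the WN phase (deploy a VM on the named host).

    `request` is a simplified stand-in for an XACML2 request; `deploy_vm`
    is a callable (user_dn, host) -> vm_id supplied by the caller.
    """
    user_dn = request.get("subject-dn")
    host = request.get("vm-host")        # only present in the Worker Node request

    if user_dn is None:
        return {"decision": "Deny", "reason": "no subject DN in request"}

    if host is None:
        # Phase 1 (steps 2-7): the Compute Element asks whether this user
        # may be given a virtualized environment at all.
        return {"decision": "Permit",
                "obligations": ["provide-virtualized-environment"]}

    # Phase 2 (steps 10-16): the job on the Worker Node reports the host it
    # landed on; deploy the user's machine there and hand back the VM id.
    vm_id = deploy_vm(user_dn, host)
    return {"decision": "Permit", "obligations": [{"vm-identifier": vm_id}]}


if __name__ == "__main__":
    fake_deploy = lambda dn, host: 42    # stand-in for the OpenNebula call of chapter 5
    print(handle_obligation({"subject-dn": "/DC=org/DC=example/CN=Some User"},
                            fake_deploy))
    print(handle_obligation({"subject-dn": "/DC=org/DC=example/CN=Some User",
                             "vm-host": "wn-virt-01.example.org"},
                            fake_deploy))
```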

Chapter 5 Implementation

For implementing a service which is able to boot virtual machines there are several steps involved, beginning with the gathering of the information needed for booting.

5.1 Gathering Information

Before a Virtual Machine can be booted it is essential to gather all the necessary information, e.g. the image to boot, the owner of the virtual machine and the resources the machine needs. Below I explain all the different attributes needed before booting a virtual machine and why each is implemented as it is.

5.1.1 Image to boot

For a virtual machine to start you need to know which image to start, as the image contains the virtual machine. Here follow some possibilities for obtaining the location of the image to boot, and a motivation for the chosen implementation.

User supplied. The user defines the image to boot in his Job Description Language. This way the user has full control over the virtual machine he would like to run on the infrastructure.

Argus. The Argus framework, consisting of the Policy Administration Point, Policy Decision Point and Policy Enforcement Point, is able to set obligations to be fulfilled for the requesting user. This way it is possible to set an obligation for the image to boot for a specific user or for a role he takes within an organization.

GridMapFile. Another way of providing the image is by the use of a GridMapFile (a specification of the GridMapFile is provided by the twiki at CERN: bin/view/egee/authzmapfile), which is a file containing mappings from one sort of information to another and looks like "Expression to map" ThingToMapTo,SomethingOther. In this situation it would, for example, map an FQAN (Fully Qualified Attribute Name) to the information needed. FQANs are used to describe a role a person has within the VO he works for. This file is stored on the file system and is maintained by the administrators of a site.

Choice and motivation. In the ideal situation a user should be able to supply the image he would like to boot. Unfortunately it would be very hard to run user-supplied images in such an environment because of the privileges a machine, and thus the user who supplied it, gets. Also, the current interpreters for the JDL do not pass the information all the way to the Argus framework, so this should be addressed before users can have full control over the image they would like to boot. This eliminates the possibility of user-supplied images, at least until the JDL supports it. Ideally the EES should be able to perform its operation without the need to be invoked through Argus. To keep plugins capable of performing without the need for obligations provided by Argus, it is necessary to gather the image to boot in some other way. Despite Argus being more scalable than a file for these mappings, it was chosen to do this with a GridMapFile, for several reasons:

1. It is more portable than relying on the Argus framework.
2. It is easier to implement.
3. It is less work than registering XACML attributes with other organizations.

Furthermore, since GridMapFiles [2] are already being used on the grid there is a stable implementation, whereas Argus is fairly new.

5.1.2 OpenNebula user

There is a finite number of users in the OpenNebula database. For traceability it is desirable to boot images as separate users, so log files can show who is responsible for booting an image. Because there is a finite set of accounts, they should be dynamically mapped to users of the grid. This is already done for UNIX accounts, with a construction called GridMapDir, and for simplicity it is implemented in the same way: the DN from the user's certificate is mapped to an OpenNebula user the first time it is seen and released after it has not been used for a long time. Hereby it is traceable who booted a Virtual Machine within a certain time frame. To communicate with OpenNebula you need a session key, which is a concatenation of the username and a hash of the password. To obtain a session for a given user of OpenNebula, the username is mapped, by the use of a GridMapFile, to an OpenNebula session in the form the XML-RPC layer of OpenNebula expects.

5.1.3 Resources

Currently a machine running batch jobs is logically divided into job slots by the number of cores the machine has. Since it is not possible to allocate more RAM for virtual machines than is physically available (minus some overhead for the hosting machine), it is good practice to divide the total amount of RAM available by the number of virtual machines, i.e. the number of cores of the machine. This is currently done by setting a default in the OpenNebula configuration files.

5.1.4 Network

To mimic the behavior of a normal cluster, all machines are currently connected to the same public network as defined by OpenNebula. For some users or groups it could be desirable to put all the machines they have running on the same private network. At the moment, however, there is no support for virtual local area networks in the tools used, and it is very complex because the switching equipment would have to be configured on the fly to place machines in dedicated VLANs. The network administrator at Nikhef's grid site is currently looking into switches which can be configured on the fly. For now the only way to separate networks is through IP ranges, which can be faked by the owner of the virtual machine, and therefore it is not safe to supply users with private networks.
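To tie sections 5.1.1 through 5.1.4 together, the sketch below looks up the image for a user's FQAN in a GridMapFile, looks up the OpenNebula session string for the mapped OpenNebula user in a second GridMapFile, divides the machine's RAM over its cores, and submits the resulting template over XML-RPC, pinning the VM to the host reported by the job. The file paths, template attributes, default resource numbers and the exact one.vm.allocate / one.vm.deploy signatures are assumptions modelled on OpenNebula's XML-RPC interface; they are meant as an illustration of the flow, not as a drop-in for the OpenNebula version used at Nikhef.

```python
import xmlrpc.client

def lookup_gridmap(path: str, key: str) -> str:
    """Return the first mapping for `key` from a GridMapFile-style file with
    lines of the form:  "Expression to map" ThingToMapTo,SomethingOther"""
    with open(path) as handle:
        for line in handle:
            line = line.strip()
            if not line or line.startswith("#"):
                continue
            expression, _, targets = line.rpartition('"')
            if expression.strip('" ') == key:
                return targets.strip().split(",")[0]
    raise KeyError(f"no mapping for {key}")

def boot_vm_for(fqan: str, one_user: str, host_id: int,
                total_ram_mb: int = 16384, cores: int = 8) -> int:
    """Allocate and deploy a VM for the given FQAN on the given host."""
    image = lookup_gridmap("/etc/grid-security/vm-images.gridmap", fqan)            # 5.1.1
    session = lookup_gridmap("/etc/grid-security/one-sessions.gridmap", one_user)   # 5.1.2
    memory = total_ram_mb // cores                                                  # 5.1.3

    template = (
        f'NAME = "grid-vm"\n'
        f'MEMORY = {memory}\n'
        f'CPU = 1\n'
        f'DISK = [ IMAGE = "{image}" ]\n'
        f'NIC = [ NETWORK = "public" ]\n'                                           # 5.1.4
    )

    one = xmlrpc.client.ServerProxy("http://localhost:2633/RPC2")
    ok, vm_id, *_ = one.one.vm.allocate(session, template)   # assumed call signature
    if not ok:
        raise RuntimeError(f"allocation failed: {vm_id}")
    one.one.vm.deploy(session, vm_id, host_id)               # pin the VM to the chosen host
    return vm_id
```

In the design of chapter 4 this call would be made from the EES when it handles the second request (step 12), with the host taken from the information the Worker Node supplied.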

Chapter 6 Credits

Some of the contents of this document are a direct copy from another document. This is done because these parts are mandatory to include but have already been written numerous times. Here I define which chapters, or parts of them, were written by someone else. In most cases I have found and contacted the original author of the parts to ask for permission to reuse their effort, but I cannot guarantee that for all parts the original author has been found or contacted.

6.1 Nikhef & Grid computing

This chapter was written by Aram Verstegen during his internship at Nikhef, where he developed the Execution Environment Service also discussed in this document. It was published in his thesis [7] and may therefore be familiar to readers who have also read that work.

Chapter 7 Sources

place sources here

Bibliography

[1] Grid map dir mechanism. GridMapDir.
[2] The gridmap file. html.
[3] Organigram Nikhef. organisatiestructuur/.
[4] G. Garzoglio (editor). An XACML Attribute and Obligation Profile for Authorization Interoperability in Grids. October.
[5] Kees Huyser. Over Nikhef.
[6] The EGEE Project. EGEE in numbers.
[7] Aram Verstegen. Execution Environment Service, November.

Glossary

ADA: Application Domain Analyst.
Authorization Framework: Set of tools running on the grid to authorize users for the actions they want to perform on the grid.
BiG Grid: The Dutch e-science grid.
CERN: Organisation Européenne pour la Recherche Nucléaire.
Compute Element: A cluster of Worker Nodes located at the same geographical location, announcing themselves as one resource to the outside.
EGEE: Enabling Grids for E-sciencE.
EGI: European Grid Initiative.
Execution Environment Service: A service providing site-specific environments for a job submitted by a user, based on the policies of both the system administrator of a site and the Virtual Organization.
Hypervisor: The software layer enabling the execution of Virtual Machines.
Information Manager: The interface OpenNebula uses to monitor the hypervisors. By implementing it together with a corresponding Virtual Machine Manager one could expand the supported hypervisors of OpenNebula.
Job Description Language: The syntax for the description of the job a user would like to have executed on grid resources. This description contains, but is not limited to, the job to run, the amount of memory the job uses and the files the job needs to access.
LCG: The LHC Computing Grid.
LHC: Large Hadron Collider, the particle accelerator at CERN.
Local Resource Management System: The scheduler on the site which schedules jobs to the nodes they correspond to.
Nikhef: Nationaal instituut voor subatomaire fysica (Dutch national institute for subatomic physics). Originally: Nationaal Instituut voor Kernfysica en Hoge Energie-Fysica.
OpenNebula: A tool chain providing a high-level interface to a Virtual Machine cluster.
PDP: Physics Data Processing.
Policy Administration Point: Service, part of the Authorization Framework, used by system administrators of a site or by a Virtual Organization to administer policies for their users.
Policy Decision Point: Service, part of the Authorization Framework, local at the site, which collects the policies from the Policy Administration Point and makes decisions on these policies for incoming requests.
Policy Enforcement Point: Entry point of the Authorization Framework which enforces the policies provided by the site and the Virtual Organization for the requesting user.
Public Key Infrastructure: A cryptographic method for encrypting messages over untrusted networks.
RU: Radboud Universiteit Nijmegen.
Stichting FOM: Stichting voor Fundamenteel Onderzoek der Materie.
Transfer Manager: Set of tools to manage the deployment of all the images used by OpenNebula.
UU: Universiteit Utrecht.
UvA: Universiteit van Amsterdam.
Virtual Machine Manager: The interface used by OpenNebula to interact with a hypervisor for starting and stopping a Virtual Machine.
Virtual Organization: An administrative container for users working on the same kind of experiments. A good example is ATLAS, which is the Virtual Organization for physicists processing the results of the Large Hadron Collider.
VL-e: Virtual Laboratory for e-science.
VU: Vrije Universiteit Amsterdam.
Worker Node: Computer in a Compute Element where the jobs are executed.
Workload Management System: System which is aware of multiple Compute Elements and their expected response time (the time it will take for a job submitted to the Compute Element to be completed).
