EUROPEAN MIDDLEWARE INITIATIVE

MSA2.2 - CONTINUOUS INTEGRATION AND CERTIFICATION TESTBEDS IN PLACE
EC MILESTONE: MS23

Document identifier: EMI_MS23_v1.0.doc
Activity: SA2
Lead Partner: INFN
Document status: Final
Document link: http://cdsweb.cern.ch/record/1277551?ln=en

Copyright notice: Copyright (c) Members of the EMI Collaboration, 2010. See http://www.eu-emi.eu/about/partners/ for details on the copyright holders. EMI (European Middleware Initiative) is a project partially funded by the European Commission. For more information on the project, its partners and contributors please see http://www.eu-emi.eu. This document is released under the Open Access license. You are permitted to copy and distribute verbatim copies of this document containing this copyright notice, but modifying this document is not allowed. You are permitted to copy this document in whole or in part into other documents if you attach the following reference to the copied elements: "Copyright (C) 2010. Members of the EMI Collaboration. http://www.eu-emi.eu". The information contained in this document represents the views of EMI as of the date of publication. EMI does not guarantee that any information contained herein is error-free or up to date. EMI MAKES NO WARRANTIES, EXPRESS, IMPLIED, OR STATUTORY, BY PUBLISHING THIS DOCUMENT.

Delivery Slip
From: Danilo N. Dongiovanni (INFN), 20/11/2010
Approved by: PEB, 26/11/2010

Document Log
Issue 1 - 08/05/2010 - First version available for revision within SA2 - Danilo Dongiovanni
Issue 2 - 25/09/2010 - Internal review - Alberto Aimar
Issue 3 - 10/08/2010 - Added UNICORE resources - Danilo Dongiovanni
Issue 4 - 10/28/2010 - Added EMI Testbed GGUS Support Unit - Danilo Dongiovanni
Issue 5 - 11/02/2010 - Changed document structure: milestone report on first page and technical details in following sections - Danilo Dongiovanni
Issue 6 - 11/19/2010 - Changed document structure: milestone report on first page; summary of technical details reported in the documentation URL in the Annex - Danilo Dongiovanni

Document Change Record: no entries.

MILESTONE REPORT

The EMI continuous integration and certification testbed has been put in place and is currently available to Product Team developers for integration tests. The infrastructural and operational resources in place consist of:

HW/SW resources provided (currently 90 server instances) with:
- OS installed (one of the agreed platforms for EMI, i.e. SL5 x32 or x86_64), network connection and utilities.
- Server host certificates (required in ARC and gLite; in UNICORE the certificate belongs to the container rather than the host). A tool for generating test certificates was also made available (ARC). A minimal expiry check is sketched after this report.
- A server-level monitoring tool to collect statistics on server availability/reliability.
- Software products installed and configured. Full coverage of the products already available and mentioned in the EMI Technical Plan was granted. dCache middleware testing instances were provided by the DESY partner and integrated into the testbed.
- A set of configurable information system service instances publishing the resources available in the testbed.
- User Interface accounts provided to PT testers on demand.

Operational resources provided:
- A Virtual Organization for testing has been created: testers.emi-eu.eu
- EMI Testbed documentation: an instance logbook reporting details on installed software versions and configurations.
- A GGUS Support Unit has been created for handling and tracking support requests.
- Communication channels and task-tracking tools for the coordination of SA2.6 task members' activities.

Technical details and full documentation on the infrastructural and operational resources, together with a description of the supported testing scenarios and use cases, can be found at the following public URL: https://twiki.cern.ch/twiki/bin/view/emi/testbed. A summary of the contents available there is attached in the Annex of this document.
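As an illustration of the kind of check the host-certificate and monitoring items above imply for a Product Team tester, the short Python sketch below reports how long the certificate presented by a testbed server remains valid. It is only a sketch: the hostname and port are placeholders, and it assumes the issuing CA is available in the local trust store (e.g. the grid CA directory hinted at in the comment).

import socket
import ssl
import time

# Hypothetical testbed host; substitute a real instance from the inventory in section 1.3.
HOST = "emi-testbed-ui.example.org"
PORT = 8443  # the service port is an assumption

def days_until_expiry(host: str, port: int) -> float:
    """Connect over TLS and report how many days the presented host certificate is still valid."""
    ctx = ssl.create_default_context()
    # Grid hosts are usually signed by IGTF CAs; load them if installed locally, e.g.:
    # ctx.load_verify_locations(capath="/etc/grid-security/certificates")
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    not_after = ssl.cert_time_to_seconds(cert["notAfter"])
    return (not_after - time.time()) / 86400.0

if __name__ == "__main__":
    print(f"{HOST}: certificate expires in {days_until_expiry(HOST, PORT):.1f} days")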

ANNEX: Technical Details on the Implemented Testbed Infrastructure

1.1. SUPPORTED TESTING SCENARIOS

The testbed model definition started from the EMI DoW and from the task participants' experience and resources, and took advantage of discussions within SA2 and with developer representatives (JRA1, SA1, Release and SW Area Manager). The information was collected through surveys or meetings and summarized in minutes or documents available at the SA2.6 task homepage [1]. A detailed presentation of the EMI integration testbed models is the subject of the DSA2.4 deliverable (ref. SA2.6 task homepage); here we therefore focus on the testing scenarios supported by the testbed infrastructure put in place:

1. EMI Internal Testing scenario A: integration testing within a minor release (no backward compatibility broken), so that a Release Candidate Service Version (RCSV in the following) [2] can be tested against Production Version Services (PVS in the following). This implies a distributed testbed of production services available for each middleware stack, with possibly multiple instances for central services. It can also lead to cases of RCSV vs. other RCSVs, or RCSV vs. (RCSV + PVS): imagine two interacting products in which a common bug is fixed at the same time, so that both are at RCSV and must be tested together with all other services at the production version. Key performance indicators KSA2.1 and KSA2.2 apply to this testbed.

2. EMI Internal Testing scenario B: integration testing for a major release (where new features, or broken backward compatibility, are allowed for many services). This implies a testbed of RCSVs available for each middleware stack, i.e. providing hardware with the platform installed so that Product Teams (PTs) can install the needed RCSVs and preview the RCSVs of other PTs. Key performance indicators KSA2.1 and KSA2.2 apply to this testbed.

[1] https://twiki.cern.ch/twiki/bin/view/emi/tsa26
[2] Here we assume that for each service a single Release Candidate version per release exists.

1.2. DEFAULT USE CASES SUPPORTED

Use Case A: developer John needs to test the correct interaction between service X (Release Candidate Version) and services Y (Production Version) and Z (Release Candidate Version). Solution: service X is configured to see the resources published in the chosen EMI Testbed central information system instance. Depending on the test performed, John may need some configuration effort on services Y or Z to ensure they can interact with X. John sends a support request to the EMI Testbed group (see section 1.5).

Use Case B: developer John needs to test the correct interaction between his service X (version X.X.X installed on some instance of his PT) and services Y (Production Version) and Z (Release Candidate Version). Solution: service X is configured to see the resources published in the chosen EMI Testbed central information system instance. John can also set up a new information system that merges information from the central information system and from a local information system publishing some development resources, thus building a custom testbed view. Notice that services Y and Z will not be configured to see resources outside the EMI Testbed.

Use Case C: developer John needs to test, through a User Interface, the correct interaction between his service X (Release Candidate Version) and services Y (Production Version) and Z (Release Candidate Version), not currently in the testbed (e.g. job submission from a UI involving broker, information system, storage element and compute element). Solution: John requests (see line 3 in this table) an account on one of the User Interfaces provided in the testbed, which is configured to see the resources published in the chosen EMI Testbed central information system instance. Depending on the test performed, John may need some configuration effort on services Y or Z to ensure they can interact with X. Moreover, John needs service Z to be installed in the testbed. John sends a support request to the EMI Testbed group (see section 1.5). A minimal example of checking what the central information system publishes is sketched below.
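For the use cases above, a practical first step is to confirm that the services involved are actually published by the chosen central information system instance. The sketch below illustrates this for the gLite case by querying a top-level BDII over LDAP for the computing elements it publishes; the hostname is a placeholder, the ldap3 package is an assumed client-side choice, and ARC and UNICORE expose their own discovery services (ISIS, Registry) with different interfaces.

# Requires the ldap3 package (pip install ldap3).
from ldap3 import Server, Connection, ALL

# Hypothetical endpoint: the EMI Testbed central top-level BDII chosen for the test.
BDII_HOST = "emi-top-bdii.example.org"

def published_computing_elements(host: str, port: int = 2170):
    """List the CE endpoints published by a top-level BDII (GLUE 1.3 schema)."""
    server = Server(host, port=port, get_info=ALL)
    conn = Connection(server, auto_bind=True)  # anonymous bind, as BDIIs normally allow
    conn.search(search_base="o=grid",
                search_filter="(objectClass=GlueCE)",
                attributes=["GlueCEUniqueID"])
    return [str(entry.GlueCEUniqueID) for entry in conn.entries]

if __name__ == "__main__":
    for ce in published_computing_elements(BDII_HOST):
        print(ce)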

1.3. TESTBED INFRASTRUCTURE INVENTORY

1.3.1 ARC

ARC [3] middleware currently deploys 11 products on Fedora, Debian, RedHat, Ubuntu, Windows and MacOSX platforms. The following services (with multiple instances for some services) were made available:

Product Name | Version | Platform | Partner Site
GIIS service (ARC LDAP Infosys) | Release 0.8.2 | CentOS5.5 i386 |
Nagios | Release 3.2.1 | CentOS5.5 i386 |
Instant CA | Release 0.9 | CentOS5.5 i386 |
Classic ARC Grid Monitor | Release 0.8.2 | CentOS5.5 i386 |
WS-ARC Grid Monitor | Release Candidate | CentOS5.5 i386 |
ARC ISIS service (4 instances) | Release Candidate | CentOS5.5 i386; SLC5.3/x86; Debian Lenny/x86 | NIIF
Classic ARC CE | Release 0.8.2 | CentOS5.5 i386 |
CE1 type | Release 1.1 | CentOS5.5 i386 |
A-REX | Release 1.1 | SLC5.3/x86; Debian Lenny/x86 | NIIF
Bartender service (2 instances) | Release 1.1 | Debian Lenny/x86; CentOS5.5/x86_64 | NIIF
AHash service (2 instances) | Release 1.1 | Debian Lenny/x86 |
Classic ARC clients | Release 0.8.2 | Ubuntu Hardy/i386 |
WS-ARC clients | Release 1.1 | Ubuntu Hardy/i386 |
ARC data clients | Release 0.8.2 | Ubuntu Hardy/i386 | NIIF

[3] http://www.knowarc.eu

Librarian service (3 instances) | Release 1.1 | CentOS5.5/x86_64; Debian Lenny/x86 | NIIF
Echo service | Release 1.1 | SLC5.3/x86; Debian Lenny/x86 | NIIF
Shepherd service (2 instances) | Release 1.1 | Debian Lenny/x86 | NIIF

1.3.2 gLite

gLite [4] middleware currently deploys 19 products in Release 3.1 (SL4) and 17 products in Release 3.2 (SL5) on the SL4 and SL5 platforms; when a service implementation changes without affecting the API, the services are counted as one (e.g. the glite-voms MySQL/Oracle database implementations). The following services (with multiple instances for some services) were made available:

Product Name | Version | Platform | Partner Site
glitewms (3 instances) | glite 3.1; RC | SL4 | INFN, CERN
dgas ig_hlr | Version ig48_sl4 | SL4 | INFN
glite-cream | 3.2 Version (LSF); 3.1.24 | SL5/x86_64; SLC4.8/x86 | INFN
glite UI (3 instances) | 3.1; 3.2, RC | SLC4.8; SL5/x86_64 | INFN, CERN
GliteBDII (site; top) | 3.1.23 | SLC4.8/x86 | CERN
glite-px | 3.1.29 | SLC4.8/x86 | CERN
glite-lcgce | 3.1.40 | SLC4.8/x86 | CERN
glite WN | 3.1.11; 3.1.30; 3.2.7 | SL4.8/x86, SL4.8/x86_64; SL5.5/x86_64 | CERN
glite-lfc_mysql | 3.1.29 | SL4.8/x86_64 | CERN
glite-voms | 3.1.27; 3.2 | SLC4.8/x86; SL5.5/x86_64 | CERN
glite-fts_oracle | 3.1.22 | SLC4.8/x86 | CERN
glite-vobox | 3.1.42 | SLC4.8/x86 | CERN
glite-se_dpm_mysql | 3.2.5 | SL5.5/x86_64 | CERN

[4] http://www.glite.eu/

glite-dpm_pool | 3.1.32 | SLC4.8/x86 | CERN
glite-se_dpm_mysql | 3.1.35 / disk 3.1.29 | SLC4.8/x86 | CERN
Nagios | 3.2.1-1; 3.2.0-1 | SLC5.5/x86_64; SLC4.8/x86 |
glite (9 instances) | 2.1.7-1, RC | SLC5.5/x86_64; SLC4.8/x86 | CERN, CESNET
STORM | INFN grid Release 3.1.0-0_ig50_sl4 | SL4 | INFN

1.3.3 UNICORE

UNICORE [5] middleware currently deploys 11 products without particular dependencies, making them executable on Linux, Windows and MacOSX platforms; for certification purposes they are generally deployed on openSUSE 11.2.

Product Name | Version | Platform | Partner Site
Gateway | 6.3.1 | openSUSE 11.3 | JUELICH
Registry | 6.3.1 | openSUSE 11.3 | JUELICH
X incl. XNJS | 6.3.1 | openSUSE 11.3 | JUELICH
OGSA-BES interfaces | 6.3.1 | openSUSE 11.3 | JUELICH
HiLA | 2.1 | openSUSE 11.3 | JUELICH
XUUDB | 6.3.1 | openSUSE 11.3 | JUELICH
UVOS | 6.3.1 | openSUSE 11.3 | JUELICH
Command line Client (UCC) | 6.3.1 | openSUSE 11.3 | JUELICH

1.3.4 dCache

dCache [6] certification resources were also kindly made available for integration testing purposes by the dCache EMI partners. In particular, the resources below can be accessed through the CERN gLite UI instance.

[5] http://www.unicore.eu/
[6] http://www.dcache.org/

Product Name | Platform | Partner Site
BDII | SL4 | DESY
dCache A | SL5; SL4 32bit PNFS; SL4 64bit PNFS | DESY
dCache B | SL5; SL4 32bit Chimera; SL4 64bit Chimera | DESY

1.4. MONITORING AND KPI

The key performance indicators KSA2.1 and KSA2.2, reported in the table below, require automatic monitoring solutions able to produce statistics on server availability and reliability. Each middleware currently has a monitoring solution deployed: ARC (Ganglia, GridMonitor, Nagios), gLite (Nagios), UNICORE (Nagios).

The Key Performance Indicators:

CODE | KPI | Description | Method to Measure | Estimated Targets
KSA2.1 | SA2 Services Reliability | % uptime dependent only on the SA2 services themselves (individual KPIs for test beds, repository, etc.) | Participating sites' monitoring tools | 99.00%
KSA2.2 | SA2 Services Availability | Total % uptime including the underlying suppliers (individual KPIs for test beds, repository, etc.) | Participating sites' monitoring tools | 97.00%

Availability and reliability statistics are currently provided only for testbed server instances, not for services, given that not all services have availability/reliability metrics defined and tools to measure them. Concerning the tool adopted to monitor instances and produce statistics, all middlewares plan to converge on Nagios [7], which offers two features suitable for our task purposes:
- the evolution of Nagios into a grid service monitoring solution [8], which is expected to provide metrics for service monitoring;
- a solution for geographical distribution: a second-level Nagios can implement a central instance, republishing and aggregating data coming from the local sites' Nagios instances.

Initially, the availability and reliability statistics periodically produced by the local sites' Nagios instances will be made available in the public documentation centre for SA2.6 described in section 1.5. A worked example of how such statistics translate into the two indicators is sketched below.

[7] http://www.nagios.org/
[8] https://twiki.cern.ch/twiki/bin/view/egee/oat_egee_iii
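To make the two indicators concrete, the sketch below shows one conventional way of turning per-instance uptime records (of the kind the local Nagios instances produce) into availability and reliability percentages: availability counts all downtime against the instance, while reliability discounts scheduled maintenance. The field names and formulas are illustrative assumptions, not a specification taken from the monitoring tools.

from dataclasses import dataclass

@dataclass
class InstanceReport:
    """Monthly monitoring summary for one testbed server instance (hours)."""
    name: str
    total: float             # hours in the reporting period
    down_scheduled: float    # announced maintenance windows
    down_unscheduled: float  # everything else (service faults, supplier outages, ...)

def availability(r: InstanceReport) -> float:
    """KSA2.2-style figure: fraction of the whole period the instance was up."""
    up = r.total - r.down_scheduled - r.down_unscheduled
    return up / r.total

def reliability(r: InstanceReport) -> float:
    """KSA2.1-style figure: uptime over the period, excluding scheduled downtime."""
    up = r.total - r.down_scheduled - r.down_unscheduled
    known = r.total - r.down_scheduled
    return up / known if known else 1.0

if __name__ == "__main__":
    # Hypothetical monthly report for one instance: 6 h maintenance, 2 h outage.
    report = InstanceReport("emi-testbed-ui.example.org", 720.0, 6.0, 2.0)
    print(f"availability = {availability(report):.2%}")  # target 97.00% (KSA2.2)
    print(f"reliability  = {reliability(report):.2%}")   # target 99.00% (KSA2.1)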

1.5. DOCUMENTATION

For the public documentation of the EMI Internal Testbed resources, the following web page was put in place: https://twiki.cern.ch/twiki/bin/view/emi/testbed. It reports:
- a description of the supported testing scenarios;
- the role and duties of the SA2.6 task and the PT contribution;
- the procedures for testbed update requests;
- the testbed monitoring solutions;
- the procedure to enable the testers.emi-eu.eu VO;
- the EMI Testbed coverage of EMI components;
- the Testbed Inventory, with a list of the provided instances specifying: Middleware Suite, Service Deployed, Platform, Server Hostname, Site Location, reference Product Team, Status Logbook.

The Status Logbook field in the inventory is a link to an instance-specific web page describing the hardware details of the instance, the software version installed, the configuration information and the history of updates. The maintenance of this page is in charge of the people performing installations, configurations or updates (who can be PT members).

1.6. USER SUPPORT, COORDINATION AND COMMUNICATION

Both the coordination and the installation/configuration activities concerning the testbeds require clear communication channels and a way to track the effort of the people involved. As mentioned in the DSA2.4 deliverable document, a distributed effort model for testbed setup and maintenance was adopted, with the possible involvement of Product Team members as support effort for service installation and configuration. To coordinate and track all the distributed effort, the following solutions were adopted:
- User support request handling: an emi-support-testbed support unit has been created in GGUS for the reception of testbed support requests. Representatives from all partners contributing to the testbed are members of the support unit and have agreed on a 2-working-day response time on a best-effort basis. The support unit will be part of the next GGUS release. The adoption of GGUS provides a common framework to handle both requests coming from EMI developers or users and those coming from external users (e.g. the users of a large-scale testbed involving other projects' or partners' contributions).
- Communication: an emi-sa26@eu-emi.eu mailing list was created both for task-internal communication and for the reception of testbed requests.
- Activity tracking: SA2.6 and Product Team activities on the testbed will be tracked through Savannah [9] tasks. An emi-sa2-testbed Savannah squad has been created to submit requests, and Product Team squads have been created to track PT activities on the testbeds.

1.7. TESTBED UPDATE

The resources made available for integration testing, described in section 1.2, form a first nucleus of the EMI testbed, putting together the services currently used for certification by all the middlewares converging into EMI.

[9] https://savannah.cern.ch/

The evolution of the EMI testbed is therefore strictly connected to its actual usage by the Product Team members performing integration testing: depending on the specific integration test to be performed, different coordination, installation or configuration activities can be required. We therefore expect the EMI internal testbed customers (PTs, SW Area Manager, SA1 and JRA1) to submit support requests for the following expected cases, treated as described below.

Requests for configuration support of existing services (these may include enabling VOs/users, making services talk to each other, custom BDII setups, etc.). Procedure: open a GGUS ticket assigned to the EMI-Testbeds Support Unit explaining your testing and configuration needs. The request will then be evaluated and tracked in a Savannah task on the testbed squad. If needed, the PT members of the services involved in the test will be contacted and their contribution will be tracked via Savannah.

Requests for new service setups (or setups of particular RC versions of a service). Procedure: open a GGUS ticket assigned to the EMI-Testbeds Support Unit explaining:
- your testing needs, the type and version of the services you need to be installed, and the PT producing each service;
- whether you need the service to be included in the permanent EMI testbed.
The request will then be evaluated and tracked in a Savannah task on the testbed squad. If needed, the involved PT members will then be contacted and their contribution tracked via Savannah tasks.

Requests for a specific testbed (in this category: performance tests, security tests, data management tests, etc.). Procedure: open a GGUS ticket assigned to the EMI-Testbeds Support Unit explaining:
- your testing needs and an estimate of the HW and SW requirements for your test;
- the PTs involved in the setup and suggestions on possible sites/PTs/NGIs that may help in the setup;
- the period of time for which you expect to need the testbed.
The request will then be evaluated and tracked in a Savannah task on the testbed squad. The involved PT members will then be contacted and their contribution tracked via Savannah tasks.

1.8. TESTBED ACCESSIBILITY

User Interface service instances: as the default use case, we assume that testbed users have direct access only to the User Interface instances, that is, only to the grid middleware access-point services. To request an account on an EMI Testbed User Interface instance, every user with a valid certificate from a trusted Certification Authority should send a user support request following the procedure and tools described at point 4 in this table. Notice that it is also possible to install the set of clients directly on a personal machine (e.g. the usual use case in UNICORE).

Other service instances: root access to other services can be granted on request, depending on the local site's security policy (which is generally also subject to national laws on the traceability of access to servers). If access is required for debugging or log exploration purposes, log-sharing solutions will be implemented on demand (publishing of logs on a public AFS area, GridFTP downloads, https access).

1.9. TESTBED RESOURCES DISCOVERY

Information systems configuration: each middleware has a service for resource discovery and publication (ARC, gLite BDII, UNICORE Registry). A central information system instance was configured for each middleware, publishing the resources in the testbed. Cross-middleware compatibility among the existing information system services is in the EMI plans, and the EMI Testbed will reflect that integration once it becomes technically available.

Implications for testbed usage: the set of resources visible to the end users (developers) depends on the configuration of their access point, i.e. the information system instance configured on the User Interface instance the user is logged on to. In practice, a user can build a custom testbed by selecting the needed resources from the pool of those published in the central information system, or by merging them with other resources published in another information system (e.g. a Product Team's internal development testbed), as sketched below.
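The custom testbed view mentioned above can be obtained simply by querying more than one information system and taking the union of the published resources. The sketch below illustrates the idea for two GLUE 1.3 BDII endpoints (the hostnames are placeholders and the ldap3 package an assumed choice); ARC ISIS and the UNICORE Registry would be queried through their own interfaces.

# Requires the ldap3 package (pip install ldap3).
from ldap3 import Server, Connection

# Hypothetical endpoints: the EMI Testbed central BDII plus a PT development BDII.
CENTRAL_BDII = ("emi-top-bdii.example.org", 2170)
LOCAL_BDII = ("pt-dev-bdii.example.org", 2170)

def endpoints(host: str, port: int, object_class: str, attr: str) -> set[str]:
    """Return the values of one attribute for all entries of a GLUE object class."""
    conn = Connection(Server(host, port=port), auto_bind=True)
    conn.search("o=grid", f"(objectClass={object_class})", attributes=[attr])
    return {str(e[attr]) for e in conn.entries}

if __name__ == "__main__":
    # Custom testbed view: storage elements known to either information system.
    ses = (endpoints(*CENTRAL_BDII, "GlueSE", "GlueSEUniqueID")
           | endpoints(*LOCAL_BDII, "GlueSE", "GlueSEUniqueID"))
    for se in sorted(ses):
        print(se)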