Servicing HEP experiments with a complete set of ready integrated and configured common software components


Journal of Physics: Conference Series. To cite this article: Stefan Roiser et al 2010 J. Phys.: Conf. Ser. 219 042022.

Related content: Hepsoft - an approach for up to date multiplatform deployment of HEP specific software (S Roiser); Geant4 nightly builds system (Victor Diez, Gunter Folger and Stefan Roiser); CernVM - a virtual software appliance for LHC applications (P Buncic, C Aguado Sanchez, J Blomer et al.).

Servicing HEP experiments with a complete set of ready integrated and configured common software components

Stefan Roiser 1, Ana Gaspar 1, Yves Perrin 1, Karol Kruzelecki 2
1 CERN, CH-1211 Geneva 23, PH Department, SFT Group
2 CERN, CH-1211 Geneva 23, PH Department, LBC Group
E-mail: {stefan.roiser,ana.gaspar,yves.perrin,karol.kruzelecki}@cern.ch

Abstract. The LCG Applications Area at CERN provides basic software components for the LHC experiments, such as ROOT, POOL and COOL, which are developed in house, together with a set of additionally needed external software packages (~70) such as Python, Boost, Qt, CLHEP, etc. These packages target many different areas of HEP computing such as data persistency, math, simulation, grid computing, databases and graphics. Other packages provide tools for documentation, debugging, scripting languages and compilers. All these packages are provided in a consistent manner on different compilers, architectures and operating systems. The Software Process and Infrastructure project (SPI) [1] is responsible for the continuous testing, coordination, release and deployment of these software packages. The main driving force for the actions carried out by SPI is the needs of the LHC experiments, but other HEP experiments can also profit from the set of consistent libraries provided and receive a stable and well tested foundation on which to build their experiment software frameworks. This presentation will first provide a brief description of the tools and services provided for the coordination, testing, release, deployment and presentation of LCG/AA software packages, and then focus on a second set of tools provided for experiments outside the LHC to deploy a stable set of HEP related software packages, either as a binary distribution or from source.

1. Introduction
The LCG Applications Area is part of the LHC Computing Grid effort, providing application related software packages and tools which are used in common by the LHC experiments. Examples of these applications are Geant4 [2], ROOT [3], COOL [4], CORAL [5], POOL [6] and RELAX [7]. In addition to software, tools and techniques common to the LHC are also provided. These include current research activities in the virtualization and multicore areas as well as communication tools such as the Savannah issue tracker and the HyperNews bulletin board system.

2. LHC software
From the point of view of the LCG Applications Area, the LHC software stack can be divided into several parts.

2.1. Software layers
The LHC software is divided into different layers (see Figure 1). The most basic layer consists of the LCG external software packages: software components which are downloaded from external sources and recompiled for the platforms on which the LCG Applications Area develops and deploys its software.

Figure 1. The LHC software stack: experiment software (e.g. AliRoot, Gaudi, CMSSW, Athena) on top of the LCG/AA projects (POOL, COOL, CORAL, ROOT, RELAX), which in turn sit on top of the ~70 LCG/AA external software packages (e.g. Python, Xerces, Qt, Grid middleware, Java, Boost, valgrind, GSL).

In total these are currently around 70 packages containing libraries and tools of all different kinds, such as mathematical libraries, graphics libraries, debugging and documentation tools, etc. The next layer consists of the LCG/AA projects, which are developed and maintained in house. The projects in this layer are ROOT, POOL, COOL, CORAL and RELAX, providing functionality for data persistency, conditions databases, database abstraction and other commonly used components. The top layer consists of experiment specific software such as reconstruction and analysis programs.

LCG software is produced on several different architectures and platforms. Operating systems include SLC4, SLC5, Mac OS X 10.5 and Windows XP. The whole stack is produced with several different compilers, such as gcc 3.4, gcc 4.0, gcc 4.3, VC7.1 and VC9, on 32 and 64 bit architectures. With this set of possibilities, LCG software is currently produced on 20 different platforms, a platform being a combination of operating system, architecture, compiler and optimization. Future plans include new compilers such as llvm and icc.

The first two layers of the LHC software stack (LCG externals and projects) are the constituents of the so-called LCG Configurations. These are consistent sets of all LCG externals and projects, produced on all LCG provided platforms at the request of the LHC experiments. The release schedules of such configurations are usually discussed in the Architects Forum [8], a bi-weekly meeting where the LCG Applications Area project leaders and the LHC software coordinators meet. A major LCG Configuration is usually released with several changes in the AA software stack, such as new compilers, new platforms or upgrades of the LCG/AA projects. LCG Configurations are numbered (e.g. LCG 55, LCG 56, etc.) and each major LCG Configuration series is denoted by such a new number (see Figure 2). In addition to major releases, bug fix releases are produced on demand. These are mostly source changes which touch only the internal behaviour of one or more software constituents. Bug fix releases are denoted by a small letter appended to the release series (e.g. LCG 55a, LCG 55b, etc.). Usually there are 2 to 3 major release series with up to 4-8 bug fix releases per year.
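To make the notion of an LCG Configuration concrete, the following minimal Python sketch models a configuration as a consistent mapping from packages to versions, valid for a fixed set of platforms. All package names, version numbers and platform tags in the sketch are illustrative placeholders and do not reproduce the contents of any actual release.

# Illustrative sketch only: the configuration name, versions and platform
# tags below are hypothetical examples of what an LCG Configuration bundles.
LCG_CONFIGURATIONS = {
    "LCG_55a": {
        "platforms": ["slc4_ia32_gcc34", "slc4_amd64_gcc34", "osx105_ia32_gcc401"],
        "projects":  {"ROOT": "5.22.00a", "COOL": "2.6.0", "CORAL": "2.1.0"},
        "externals": {"Python": "2.5.4", "Boost": "1.38.0", "Qt": "4.4.2"},
    },
}

def resolve(config, package, platform):
    """Return the version of `package` shipped by `config` on `platform`,
    or None if the combination is not part of that configuration."""
    cfg = LCG_CONFIGURATIONS.get(config)
    if cfg is None or platform not in cfg["platforms"]:
        return None
    return cfg["projects"].get(package) or cfg["externals"].get(package)

print(resolve("LCG_55a", "ROOT", "slc4_ia32_gcc34"))  # -> "5.22.00a" in this sketch

A bug fix release such as LCG 55a would then simply be a second such entry whose version mapping differs from LCG 55 only in the packages that received source-level fixes.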

Figure 2. LCG/AA software releases in 2008/09: the major release series LCG 54, LCG 55 and LCG 56.

2.2. Testing infrastructure
In order to have continuous testing of the LCG/AA stack, a nightly build system has been developed. This system builds and tests the LCG/AA projects every 24 hours on a subset of all LCG platforms, which are picked on request of experiments and developers. Another reason for building only a subset of the currently provided 20 platforms is the limitation in CPU resources. Usually the process is repeated in different slots. A slot denotes a combination of LCG/AA project versions which aims at a specific target, such as a release series or the repository HEAD of all projects. The results of the build and test processes are summarized on a web page (see Figure 3) and the build products (libraries, executables) are made available in a shared file system (AFS), where they can be used further on. The nightly build system can be executed on all LCG provided operating systems (Linux, Mac, Windows), architectures and compilers; usually only a subset of all possible combinations is used. Execution of builds usually starts around midnight, providing the complete set of results to the developers in the morning. In addition to providing feedback to the LCG/AA developers, the nightly builds are also used by the LHC experiments, who build their own nightly builds on top of the LCG/AA stack. This way a very tight testing loop is achieved, which provides very fast feedback about code changes in the LCG/AA software area. The use of the nightly builds has drastically decreased the time needed to release the whole software stack, from a timescale of weeks down to no more than a working day for producing the release on all platforms.

Figure 3. The AA nightly builds overview page: daily build and test results for all LCG/AA projects across the different platforms and configurations, including the test history.
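As a rough illustration of the slot concept described above, the following Python sketch drives one build-and-test cycle over a few slots and platforms. The slot definitions, project versions and shell commands are hypothetical stand-ins for the real nightly build machinery; in the actual system the outcome of such a loop is what gets published on the overview page of Figure 3, with the build products copied to AFS.

import subprocess
from datetime import date

# Hypothetical slot definitions: each slot pins the LCG/AA project versions
# (or HEAD) to build, and the subset of platforms to build them on.
SLOTS = {
    "dev":     {"projects": {"ROOT": "HEAD", "CORAL": "HEAD", "COOL": "HEAD"},
                "platforms": ["slc4_ia32_gcc34", "slc4_amd64_gcc34"]},
    "release": {"projects": {"ROOT": "5.22.00", "CORAL": "2.1.0", "COOL": "2.6.0"},
                "platforms": ["slc4_amd64_gcc34"]},
}

def run(cmd):
    """Run a shell command and report whether it succeeded."""
    return subprocess.call(cmd, shell=True) == 0

def nightly(slot):
    """Build and test every project of a slot on every platform of the slot."""
    results = {}
    for platform in slot["platforms"]:
        for project, version in slot["projects"].items():
            # Placeholder commands: the real system checks out, builds and
            # tests each project and publishes the products to AFS.
            built = run(f"echo build {project} {version} for {platform}")
            tested = built and run(f"echo test {project} {version} for {platform}")
            results[(platform, project)] = "OK" if tested else "FAILED"
    return results

if __name__ == "__main__":
    for name, slot in SLOTS.items():
        print(date.today(), name, nightly(slot))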

3. How to use LHC software in a non-LHC environment
The tools and techniques of the previous section are currently in use for servicing the LHC experiments. With very little additional effort the same processes can be re-used to deploy the same software stack outside the LHC environment. The motivation for this move came from existing uses of individual packages, ranging up to LHC experiment specific ones (e.g. the Gaudi framework used by LHCb, and generic tools for event data modelling and detector description). In addition, a web page at the SLAC National Accelerator Laboratory lists some 660 currently active High Energy Physics experiments, which do not necessarily have the same staffing as the LHC experiments and the LCG Applications Area. The idea was therefore to provide the full LCG/AA stack also to smaller HEP experiments and allow them to profit from the currently available test and build infrastructure. This section describes three different ways in which LCG/AA software can be re-deployed at remote sites. The tools and techniques used in the following subsections are already in use by the LHC experiments or the LCG Applications Area, so the additional effort for their re-use outside the LHC is minimal.

3.1. CernVM
One of the research activities currently going on in the LCG Applications Area is Work Package 8 on virtualization technologies [9]. This work package explores the possibility of running LHC software inside virtual machines which can be hosted on any operating system. For this purpose a special stripped-down Linux operating system, called CernVM OS, was defined which closely resembles the currently used Scientific Linux. This operating system also makes use of a special file system which allows files to be cached on the local machine and re-used when disconnected. The CernVM system has already been deployed and used by several LHC experiments. As the LHC experiment software stack by definition also includes the LCG/AA one, this system can equally be used for LCG/AA software deployment. The actual setup for the deployment of the software happens during the initial configuration phase of the virtual machine. In this phase the user chooses, for example, the LHC experiment he is working on, and subsequently the proper files are provided to him through the CernVM file system. If needed, a special instance containing only the LCG/AA software would also be feasible to deploy.

3.2. Binary distribution
For every LCG Configuration a subset of the release platforms is used as tar platforms. These are the platforms for which every software package is packaged as a tar file for the version it has in the released LCG Configuration. The tar files are put into a central repository from where they can subsequently be fetched by the LHC experiments and used for their software deployment. The details of the different LCG Configurations are presented on a web page, http://lcgsoft.cern.ch, which lists all available configurations and their details. This web page can be used for downloading the needed software packages and for their local installation. The only pre-requisite of this way of software deployment is the usage of one of the provided LCG tar platforms. Currently these are slc4_ia32_gcc34, slc4_amd64_gcc34, osx105_ia32_gcc401 and win32_vc71_dbg.
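For illustration, installing such a binary distribution for a single tar platform could be scripted roughly as follows. The repository URL, the tarball naming scheme and the package list are assumptions made for this sketch and do not describe the actual lcgsoft download interface.

import os
import tarfile
import urllib.request

# All of the following values are hypothetical placeholders for the sketch.
REPO_URL = "https://example.cern.ch/lcg-tarfiles"       # assumed repository location
PLATFORM = "slc4_amd64_gcc34"                           # one of the LCG tar platforms
PACKAGES = [("ROOT", "5.22.00a"), ("Boost", "1.38.0")]  # packages of one configuration
INSTALL_DIR = os.path.expanduser("~/lcg-install")

os.makedirs(INSTALL_DIR, exist_ok=True)

for name, version in PACKAGES:
    # Assumed tarball naming scheme: <package>-<version>-<platform>.tar.gz
    tarball = f"{name}-{version}-{PLATFORM}.tar.gz"
    local = os.path.join(INSTALL_DIR, tarball)
    urllib.request.urlretrieve(f"{REPO_URL}/{tarball}", local)
    with tarfile.open(local) as archive:
        archive.extractall(INSTALL_DIR)   # unpack next to the other packages
    print("installed", name, version, "for", PLATFORM)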

3.3. Recompile from source
The third option for deploying the LCG/AA software stack is to rebuild the needed packages from source. This option is available for all packages on mostly POSIX compliant operating systems (Linux, Mac OS X). Every software package in the LCG Applications Area is described with build instructions in the LCGCMT project. The instructions in this project provide an abstraction layer for the build of all packages across platforms. This system is also already in use by non-LHC experiments for rebuilding the complete AA software stack for the platforms they need.

4. Conclusion
The LCG Applications Area provides basic software constituents on top of which the LHC experiments build their specific applications. The testing of the AA software is well integrated with the experiments' integration testing, allowing early and fast feedback about changes in source code as well as quick deployment of new AA releases. This well integrated and tested infrastructure is now also made available to physics experiments outside the scope of the LHC. The software can be retrieved in three different ways, depending on the specific need.

References
[1] http://spi.cern.ch
[2] http://geant4.cern.ch
[3] http://root.cern.ch
[4] http://lcgapp.cern.ch/project/conddb
[5] http://pool.cern.ch/coral
[6] http://pool.cern.ch
[7] https://twiki.cern.ch/twiki/bin/view/lcg/relax
[8] http://lcgapp.cern.ch/project/mgmt/af.html
[9] http://cernvm.web.cern.ch/cernvm/