Servicing HEP experiments with a complete set of ready integrated and configured common software components


Journal of Physics: Conference Series

To cite this article: Stefan Roiser et al 2010 J. Phys.: Conf. Ser.

Stefan Roiser 1, Ana Gaspar 1, Yves Perrin 1, Karol Kruzelecki 2
1 CERN, CH-1211 Geneva 23, PH Department, SFT Group
2 CERN, CH-1211 Geneva 23, PH Department, LBC Group
{stefan.roiser,ana.gaspar,yves.perrin,karol.kruzelecki}@cern.ch

Abstract. The LCG Applications Area at CERN provides basic software components for the LHC experiments, such as ROOT, POOL and COOL, which are developed in house, together with a set of external software packages (~70) that are needed in addition, such as Python, Boost, Qt, CLHEP, etc. These packages target many different areas of HEP computing, such as data persistency, math, simulation, grid computing, databases and graphics; other packages provide tools for documentation, debugging, scripting languages and compilers. All these packages are provided in a consistent manner on different compilers, architectures and operating systems. The Software Process and Infrastructure (SPI) project [1] is responsible for the continuous testing, coordination, release and deployment of these software packages. The main driving force for the actions carried out by SPI are the needs of the LHC experiments, but other HEP experiments could also profit from the consistent set of libraries provided and receive a stable and well tested foundation on which to build their experiment software frameworks. This paper first gives a brief description of the tools and services provided for the coordination, testing, release, deployment and presentation of LCG/AA software packages, and then focuses on a second set of tools provided for experiments outside the LHC to deploy a stable set of HEP related software packages, either as a binary distribution or from source.

1. Introduction
The LCG Applications Area is part of the LHC Computing Grid effort, providing application related software packages and tools which are used in common by the LHC experiments. Examples of these applications are Geant4 [2], ROOT [3], COOL [4], CORAL [5], POOL [6] and RELAX [7]. In addition to software, tools and techniques common to the LHC are also provided. These include current research activities in the virtualization and multicore areas, as well as communication tools such as the Savannah tracker or the Hypernews bulletin board system.

2. LHC software
From the point of view of the LCG Applications Area, the LHC software stack can be divided into several parts.

2.1. Software layers
The LHC software is divided into different layers (see Figure 1). The most basic layer consists of the LCG external software packages: software components which are downloaded from external sources and recompiled for the platforms on which the LCG Applications Area develops and deploys its software.

Figure 1. The LHC software stack: experiment software (e.g. AliRoot, Gaudi, CMSSW, Athena) on top of the LCG/AA projects (POOL, COOL, CORAL, ROOT, RELAX), which in turn sit on the ~70 LCG/AA external software packages (Python, Xerces, Qt, Java, Boost, valgrind, GSL, grid middleware, etc.).

In total these are currently around 70 packages, containing libraries and tools of all different kinds, such as mathematical libraries, graphics libraries, debugging and documentation tools, etc. The next layer consists of the LCG/AA projects, which are developed and maintained in house. The projects in this layer are ROOT, POOL, COOL, CORAL and RELAX, providing functionality for data persistence, conditions databases, database abstraction and other commonly used functionality. The top layer consists of experiment specific software, such as reconstruction and analysis programs.

LCG software is produced on several different architectures and platforms. Operating systems include SLC4, SLC5, Mac OS X 10.5 and Windows XP. The whole stack is produced with several different compilers (gcc 3.4, gcc 4.0, gcc 4.3, VC7.1 and VC9) on 32 and 64 bit architectures. With this set of different possibilities, LCG software is currently produced on about 20 different platforms, a platform being a combination of operating system, architecture, compiler and optimization level. Future plans include new compilers such as llvm and icc.

The first two layers of the LHC software stack (LCG externals and projects) are the constituents of the so-called LCG Configurations. These are consistent sets of all LCG externals and projects, produced on all LCG provided platforms on demand of the LHC experiments. The release schedules of such configurations are usually discussed in the Architects Forum [8], a bi-weekly meeting where the LCG Applications Area project leaders and the LHC software coordinators meet. A major LCG Configuration is usually released with several changes in the AA software stack, such as new compilers, new platforms or upgrades of the LCG/AA projects. LCG Configurations are numbered (e.g. LCG 55, LCG 56, etc.) and each major LCG Configuration series is denoted by such a new number (see Figure 2). In addition to major releases, bug fix releases are also produced on demand. These mostly contain source-only changes which touch just the internal behaviour of one or more software constituents. Bug fix releases are denoted by a small letter appended to the release series (e.g. LCG 55a, LCG 55b, etc.). There are usually 2 to 3 major release series with up to 4-8 bug fix releases per year.
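To make the two naming schemes concrete, the following minimal sketch builds platform tags in the underscore-separated form quoted later in this paper (e.g. slc4_ia32_gcc34) and splits configuration names of the "LCG 55"/"LCG 55a" pattern into series and bug fix letter. The helpers are hypothetical illustrations, not the actual SPI tooling.

```python
import re

# Hypothetical helpers illustrating the naming conventions described in
# the text; the real SPI tools are not shown in this paper.

CONFIG_RE = re.compile(r"^LCG (\d+)([a-z]?)$")

def parse_config(name):
    """Split 'LCG 55a' into the major series (55) and bug fix letter ('a')."""
    match = CONFIG_RE.match(name)
    if match is None:
        raise ValueError("not an LCG Configuration name: %r" % name)
    series, letter = match.groups()
    return int(series), letter or None

def platform_tag(os_name, arch, compiler, debug=False):
    """Combine OS, architecture and compiler into a platform tag."""
    parts = [os_name, arch, compiler]
    if debug:  # debug builds append a 'dbg' field to the tag
        parts.append("dbg")
    return "_".join(parts)

print(parse_config("LCG 55"))                  # (55, None)  major release
print(parse_config("LCG 55b"))                 # (55, 'b')   bug fix release
print(platform_tag("slc4", "amd64", "gcc34"))  # slc4_amd64_gcc34
```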

Figure 2. LCG/AA software releases in 2008/2009.

2.2. Testing infrastructure
In order to test the LCG/AA stack continuously, a nightly build system has been developed. This system builds and tests the LCG/AA projects every 24 hours on a subset of all LCG platforms, picked on request of the experiments and developers. Another reason for building only a subset of the currently provided ~20 platforms is the limitation in CPU resources. The process is repeated in different slots. A slot denotes a combination of LCG/AA project versions which aims at a specific target, such as a release series or the repository HEAD of all projects. The results of the build and test processes are summarized on a web page (see Figure 3), and the build products (libraries, executables) are made available in a shared file system (AFS) where they can be used further on. The nightly build system can be executed on all LCG provided operating systems (Linux, Mac, Windows), architectures and compilers; usually only a subset of all possible combinations is used. Execution of the builds usually starts around midnight, providing the complete set of results to the developers in the morning. In addition to providing feedback to the LCG/AA developers, the nightly build system is also used by the LHC experiments, which run their own nightly builds on top of the LCG/AA provided stack. This way a very tight testing loop is achieved, giving very fast response about code changes in the LCG/AA software area. The use of the nightly builds has drastically decreased the time needed to release the whole software stack, from a timescale of weeks to no more than one working day for producing the release on all platforms.

3. How to use LHC software in a non-LHC environment
The tools and techniques of the previous section are currently used to service the LHC experiments. With very little additional effort the same processes can be re-used to deploy the same software stack outside the LHC environment. The motivation for this move came from existing usage of different packages, up to LHC experiment specific ones (e.g. the Gaudi framework used by LHCb, and generic tools for event data modelling and detector description). In addition, a web page at the SLAC National Accelerator Laboratory lists some 660 currently active High Energy Physics experiments, which do not necessarily have the same staffing as the LHC experiments and the LCG Applications Area. The idea was therefore to provide the full LCG/AA stack also to smaller HEP experiments and allow them to profit from the existing test and build infrastructure. This section describes three different ways in which LCG/AA software can be re-deployed on remote sites. The tools and techniques used in the following subsections are already in use by the LHC experiments or the LCG Applications Area, so the additional effort for their re-use outside LHC is minimal.
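The slot concept can be pictured as a small configuration structure: each slot pins a set of LCG/AA project versions to the platforms on which it is built and tested each night. The sketch below is purely illustrative; slot names, version numbers and the data layout are invented for this example and do not reflect the nightly system's actual format.

```python
# Hypothetical slot definitions: a slot = project versions + platforms.
slots = {
    "release_55": {
        "projects": {"ROOT": "5.22.00", "POOL": "2.8.0", "COOL": "2.6.0",
                     "CORAL": "2.1.0", "RELAX": "1.1.8"},
        "platforms": ["slc4_ia32_gcc34", "slc4_amd64_gcc34"],
    },
    "dev_head": {
        # repository HEAD of all projects, built on fewer platforms
        "projects": {p: "HEAD" for p in ("ROOT", "POOL", "COOL", "CORAL", "RELAX")},
        "platforms": ["slc4_amd64_gcc34"],
    },
}

def nightly_jobs(slots):
    """Expand the slots into the (slot, platform) build jobs for one night."""
    return [(slot, platform)
            for slot, config in slots.items()
            for platform in config["platforms"]]

for job in nightly_jobs(slots):
    print(job)  # each job builds and tests all projects of its slot
```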

Figure 3. The AA nightly builds overview page: build and test results for all LCG/AA projects, every day, across the different platforms and configurations, with test history.

3.1. CERNVM
One of the research activities currently going on in the LCG Applications Area is Work Package 8 on virtualization technologies [9]. This work package explores the possibilities of running LHC software inside virtual machines which can be hosted on any operating system. For this purpose a special stripped-down Linux operating system, called CernvmOS, was defined, which closely resembles the currently used Scientific Linux. This operating system also makes use of a special file system which allows files to be cached on the local machine and re-used when disconnected. The CERNVM system has already been deployed and used by several LHC experiments. As the LHC experiment software stack by definition also includes the LCG/AA one, this system can equally be used for LCG/AA software deployment. The actual setup of the software happens during the initial configuration phase of the virtual machine. In this phase the user chooses, for example, the LHC experiment he is working on, and the proper files are subsequently provided to him through the CERNVM file system. If needed, a special instance only for LCG/AA software would also be feasible.

3.2. Binary distribution
For every LCG Configuration a subset of the release platforms is used as "tar platforms". These special platforms are the ones for which every software package is packaged as a tar file, in the version it has in the released LCG Configuration. The tar files are put into a central repository from which they can subsequently be fetched by the LHC experiments and used for their software deployment. The details of the different LCG Configurations are presented on a web page which lists all available configurations and their details. This web page can be used for downloading the needed software packages and for their local installation. The only prerequisite of this way of software deployment is the use of one of the provided LCG tar platforms; currently these are slc4_ia32_gcc34, slc4_amd64_gcc34, osx105_ia32_gcc401 and win32_vc71_dbg.
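A remote site consuming the binary distribution essentially fetches tar files from the central repository and unpacks them locally. The sketch below shows that step; the repository URL and the file naming convention are invented for illustration, since the paper does not specify the actual repository layout. Only the mechanism (one tar file per package, version and platform) reflects the deployment model described above.

```python
import tarfile
import urllib.request

# Hypothetical repository location and file naming.
BASE_URL = "http://example.cern.ch/lcg/tarfiles"

def install_package(name, version, platform, dest="/opt/lcg"):
    """Download one package tar file for a given platform and unpack it."""
    filename = "%s_%s_%s.tar.gz" % (name, version, platform)
    local_file, _ = urllib.request.urlretrieve(BASE_URL + "/" + filename)
    with tarfile.open(local_file, "r:gz") as archive:
        archive.extractall(dest)  # repository is assumed to be trusted

install_package("Boost", "1.38.0", "slc4_amd64_gcc34")
```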

3.3. Recompile from source
The third option for deploying the LCG/AA software stack is to rebuild the needed packages from source. This option is available for all packages on mostly POSIX-compliant operating systems (Linux, Mac OS X). Every software package in the LCG Applications Area is described with build instructions in the LCGCMT project. The instructions in this project provide an abstraction layer for the build of all packages across platforms. This system is already in use by non-LHC experiments for rebuilding the complete AA software stack for the platforms they need.

4. Conclusion
The LCG Applications Area provides the basic software constituents on top of which the LHC experiments build their specific applications. The testing of the AA stack is well integrated with the experiments' integration testing, allowing early and fast feedback about changes in the source code and also allowing quick deployment of new AA releases. This well integrated and tested infrastructure is now also made available to physics experiments outside the scope of the LHC. The software can be retrieved in three different ways, depending on the specific need.

References
[1] [2] [3] [4] [5] [6] [7] [8] [9]
