The BaBar Computing Model *


SLAC-PUB-9964
April 1997

The BaBar Computing Model *

N. Geddes
Rutherford Appleton Laboratory, Chilton, Didcot, England OX11 0QX

Representing the BaBar Collaboration

Abstract

The BaBar experiment will begin taking data in 1999. We will record and fully reconstruct up to 10^9 decays of the upsilon(4S) resonance per year. We present our current status and plans for acquiring, processing and analysing these data.

Contributed to Computing in High Energy Physics 1997, Berlin, Germany, April 7-11, 1997

* Work supported in part by Department of Energy contract DE-AC03-76SF00515.

The BaBar Computing Model

Neil Geddes, Rutherford Appleton Laboratory, Chilton, Didcot, England, OX11 0QX

The BaBar experiment will begin data taking in 1999. We will record and fully reconstruct up to 10^9 decays of the upsilon(4S) resonance per year. We present our current status and plans for acquiring, processing and analysing these data.

Key words: BaBar, computing model, software development, OO

1 Overview

The BaBar [1] experiment, currently being constructed at SLAC, will study CP violation in the B meson system at the Stanford Linear Accelerator Center's PEP-II accelerator. Because the CP violating decays are rare, these studies require large event samples, and to acquire these the PEP-II accelerator and BaBar detector will run continuously for over half of each year ("factory" operation). We expect to record 10^9 events per year, totalling approximately 25 TB of data. A large fraction of this data will require detailed analysis to isolate the small signal samples.

The BaBar collaboration is geographically widespread and we have developed a computing model to accommodate this far-flung expertise in a workable and cost-effective way, both during the construction and running of the experiment. We already make extensive use of wide area networks for communication and software development. We also define and support a standard BaBar development environment which allows full integration of remote collaborators.

The data will be recorded at SLAC, giving SLAC a unique position. Demands for detector monitoring and maintenance, physics analysis by collaborators resident at SLAC, and simplifications in the data handling mean that the bulk data processing will be performed online at SLAC. There also exist within the collaboration several sites with current or planned computing facilities (data storage, computing capacity and support) similar to those at SLAC. These "Regional Centres" will serve as code development and analysis centres, providing convenient data access to remote collaborators. These centres should also provide a significant fraction of BaBar's Monte Carlo needs and could be used for data processing, if required. It is envisaged that data analysis will be based around the data store at SLAC or a regional centre, whichever is most convenient. Other institutions may keep local copies of small data samples.

2 Software Development

As detailed elsewhere in these proceedings [2], BaBar has adopted an object-oriented approach for its software development, with C++ as the principal programming language. Both were relatively new to widespread use in HEP, and this has produced practical difficulties in retraining and acquiring experience. To support the dual needs of software development and training, an iterative design and implementation process has been adopted. Developers are encouraged to strike a balance between "learning by doing" and "doing while learning". This has been supported by home-grown and commercial courses on C++ and object-oriented analysis and design. We now have a small but growing number (10-20) of competent principal developers. C++ experience is spreading further through the collaboration as we get closer to a functionally complete reconstruction program, enabling "physics analysis" type tasks to be undertaken. We are also developing a unified fast/detailed simulation of the BaBar detector based on GEANT4, to replace our current GEANT3 based programs; much of the detailed detector simulation is already written in C++.

To support our software development we have created a code management and release system [3] which allows integration of a distributed developer community. Source code management is provided by CVS. The main BaBar CVS repository is based at SLAC but is made accessible to collaborators worldwide via AFS. BaBar software is organised into "packages". Each package contains groups of classes/functions performing a well defined task. Each package is maintained as a separate CVS module and is managed by a package coordinator. The package coordinator is responsible for the development of the package, for testing of the code, and for tagging new self-consistent and tested versions as and when appropriate. Access to CVS modules can be controlled via AFS ACLs if required, and package coordinators are notified automatically when modifications are made to a package.

A full "release" of the BaBar software is built at least every two weeks, based on the then current versions of all packages. A release is a snapshot of a consistent set of package versions, and includes compiled libraries and binaries for each of the supported BaBar platforms (currently HP-UX10, AIX4 and OSF1.3). If the build is successful the release becomes the "test" release and subsequently, if testing proves successful, the "current" or "production" release. Releases are tagged by a sequential number, and logical links are used to reference the "latest", "test", "current" and "production" releases. Releases remain on disk for as long as they are required. Creation of a release is increasingly automated, including code checking, compilation and testing. The latter is based on specific tests provided by package coordinators which can be run automatically, producing output to be compared with a supplied reference copy. Errors from all stages of the release build are e-mailed automatically to package coordinators. Outside release builds, bug reporting and tracking is provided using the GNATS package.
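As an illustration of the automated package tests just described, the following minimal C++ sketch shows the comparison step: test output is checked line by line against the reference copy supplied by the package coordinator, and a non-zero exit status flags the failure to the release-build scripts. This is an illustration only, not BaBar code, and the file names are hypothetical.

  #include <fstream>
  #include <iostream>
  #include <string>

  // Compare freshly produced test output against the reference copy,
  // line by line. Returns true only if the two files match exactly.
  static bool matchesReference(const std::string& outputFile,
                               const std::string& referenceFile)
  {
      std::ifstream out(outputFile.c_str());
      std::ifstream ref(referenceFile.c_str());
      if (!out || !ref) return false;      // a missing file counts as failure
      std::string a, b;
      while (std::getline(out, a)) {
          if (!std::getline(ref, b) || a != b) return false;
      }
      return !std::getline(ref, b);        // reference must not be longer
  }

  int main()
  {
      // "testPackage.out" would be written by the package's test program;
      // "testPackage.ref" is the reference copy kept with the package.
      if (!matchesReference("testPackage.out", "testPackage.ref")) {
          std::cerr << "test output differs from reference" << std::endl;
          return 1;  // non-zero status flags the failure to the build scripts
      }
      return 0;
  }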

3 Data Acquisition and Reconstruction

The "physics" event rate at BaBar is around 30 Hz. However, large machine backgrounds are expected, associated with the high luminosity of the PEP-II accelerator. A first level trigger, based on information from the main tracking chambers and the calorimeter, will restrict the readout rate to less than 2 kHz, while ensuring high efficiency for events of interest (> 99% for B events and > 95% for most other channels). A software filter then uses information from other subdetectors, principally the vertex detector and tracking chambers, to further reject machine backgrounds and reduce the data rate to 100 Hz.

The online system is based around VME and UNIX. The data are read out from the front end electronics through a uniform VME interface. The full readout spans some 20 VME crates and the aggregate output rate is expected to be around 50 MB/second, corresponding to the 2 kHz trigger rate. Data are read out in parallel from the VME crates into a farm of UNIX processors which perform the event filtering. At this stage events can also be tagged to simplify subsequent processing. The readout from VME to the online farm is accomplished through a non-blocking network switch. The data transport uses standard TCP/IP protocols to isolate the software from the underlying fabric, which may be 100BT fast ethernet or ATM OC-3. Event building also occurs during this transfer stage, with the relevant event fragments from each readout crate being collected in a single farm CPU (see the sketch at the end of this section).

Events from the online farm are automatically forwarded to the prompt reconstruction system. The aim is to process all events within a few hours of their being recorded. The principal aims of the prompt reconstruction are:

- To efficiently tag events for subsequent analysis or reprocessing.
- To provide rapid and effective monitoring of the detector and accelerator performance, based on quantities relevant to the physics goals of the experiment.
- To make efficient use of manpower through effective automation of the data recording and reconstruction process.
- To provide rapid access to physics quantities for the analysis groups.

We currently expect that the reconstruction will be performed on farms of commercial UNIX processors. A prototype farm has been under development at SLAC for some time. Developments to date do not tie us to a single architecture and we continue to monitor developments in SMPs and commodity hardware (PCs). Output from the prompt reconstruction will be written to the primary data store.
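The event-building step referred to above can be pictured as follows: fragments arriving over the switch are grouped by event number until every crate has reported, at which point the complete event is handed to the filter code on that farm CPU. This is a simplified sketch under assumed types and interfaces, not the BaBar online software.

  #include <cstddef>
  #include <map>
  #include <vector>

  // One event fragment as read out from a single VME crate.
  struct Fragment {
      unsigned int eventId;                // which event this belongs to
      int crateId;                         // which readout crate produced it
      std::vector<unsigned char> payload;  // the detector data
  };

  // Accumulates fragments until all crates have reported for an event.
  class EventBuilder {
  public:
      explicit EventBuilder(std::size_t nCrates) : nCrates_(nCrates) {}

      // Add one fragment; returns the completed list of fragments when
      // the last crate reports for this event, and 0 otherwise.
      const std::vector<Fragment>* add(const Fragment& f)
      {
          std::vector<Fragment>& parts = pending_[f.eventId];
          parts.push_back(f);
          return parts.size() == nCrates_ ? &parts : 0;
      }

      // Free the buffer once the event has been shipped to the filter.
      void release(unsigned int eventId) { pending_.erase(eventId); }

  private:
      std::size_t nCrates_;
      std::map<unsigned int, std::vector<Fragment> > pending_;
  };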

4 Data Model

With the adoption of object-oriented reconstruction and analysis software, a solution must be found for managing the input/output of the produced data (objects). An Object Database Management System (ODBMS) provides a natural storage model. A full database system is well matched to the problem and also provides tools for management of large data volumes in a distributed heterogeneous environment. An ODBMS can also provide good integration with C++, while not limiting object retrieval to this language. BaBar has chosen the commercial ODBMS Objectivity/DB for our initial implementation of the data store, following the choice of the same database system for our conditions database. Following initial evaluation studies of Objectivity/DB, we currently have a prototype version of the conditions database under evaluation and a prototype event store under development.

The event store database design assumes that several types of data may exist for each event (some of the terminology is borrowed from Atlas [4]):

SIM (Simulated Truth) Underlying physics event data generated by Monte Carlo simulations (the Monte Carlo "truth").
RAW (Raw Data) The raw data output from the experiment (or from the Monte Carlo simulation). This data cannot be regenerated (approximately 25 kB per event).
REC (Reconstructed Data) Output from the reconstruction programs; can be regenerated from the raw data (approximately 100 kB per event).
ESD (Event Summary Data) A compressed form of the reconstructed data, corresponding approximately to a traditional DST (approximately 10 kB per event).
AOD (Analysis Object Data) End user physicist data, based on 4-vectors, particle types, etc. (approximately 1 kB per event).
TAG Highly compressed summary data for classifying events and allowing single selection queries to be performed (< 100 bytes per event).
HDR (Event Header) The event header containing references to the other information in the event, designed to be as compact as possible (< 64 bytes per event).
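A minimal sketch of this header-plus-components organisation is given below. In the real system these would be Objectivity/DB persistent classes holding database references; a plain identifier type stands in here to keep the sketch self-contained, and the field names are illustrative.

  // In the real system this would be an Objectivity/DB object reference;
  // a plain identifier is used here to keep the sketch self-contained.
  typedef unsigned long StoreRef;

  // HDR: the event header, kept as compact as possible (< 64 bytes),
  // holding only references to the other per-event components.
  struct EventHeader {
      unsigned int eventNumber;
      StoreRef sim;   // SIM: Monte Carlo truth, where applicable
      StoreRef raw;   // RAW: raw data (~25 kB), cannot be regenerated
      StoreRef rec;   // REC: reconstruction output (~100 kB)
      StoreRef esd;   // ESD: compressed reconstruction output (~10 kB)
      StoreRef aod;   // AOD: 4-vectors, particle types, ... (~1 kB)
      StoreRef tag;   // TAG: classification summary (< 100 bytes)
  };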

Objects in the data store are accessed through "collections" which contain references to the events within the event store. New collections, e.g. corresponding to some specific event signature, may be created as required and can be accessed via a simple name which is stored in an event "dictionary". Data present in the database may be replicated to improve access efficiency, or for export. We anticipate exporting complete or partial copies of the database to several Regional Centres supported by the collaboration. Objectivity/DB already provides tools for integrating a distributed database into a single logical entity. We will investigate these features in the future in relation to their usability with currently available networking; however, we do not assume that this will be required.

It is unlikely that we will be able to store our complete event database (approximately 100 TB) on disk. It is therefore essential that we effectively integrate the database with the SLAC mass store. This is based upon the current SLAC STK tape silos, upgraded to use StorageTek helical scan tape technologies, which will provide tape storage for up to 600 TB. A large fraction, if not all, of the most compressed and most frequently accessed data will remain on disk. Less frequently accessed, more detailed, data will be staged from the tape store as required.
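The collection and dictionary mechanism described above might be pictured with the following sketch. It is purely illustrative (the actual event store is implemented in Objectivity/DB): a collection is simply a list of event references, and the dictionary maps a simple name to a collection so that an analysis job can open a sample by name.

  #include <map>
  #include <string>
  #include <vector>

  typedef unsigned long StoreRef;   // as in the header sketch above

  // A collection is a list of references to events in the store, e.g.
  // all events passing some specific signature selection.
  typedef std::vector<StoreRef> Collection;

  // The event dictionary maps a simple name to a collection.
  class EventDictionary {
  public:
      void define(const std::string& name, const Collection& events)
      {
          collections_[name] = events;
      }

      // Returns 0 if no collection of that name has been defined.
      const Collection* find(const std::string& name) const
      {
          std::map<std::string, Collection>::const_iterator it =
              collections_.find(name);
          return it == collections_.end() ? 0 : &it->second;
      }

  private:
      std::map<std::string, Collection> collections_;
  };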

5 Analysis Interface

To use the full power of the C++ objects output from the reconstruction, a high level toolkit is being developed for physics analysis. The Beta package from BaBar [5] provides a "physicist-compatible" level of abstraction for using reconstructed information to do four-vector analysis, vertexing, particle identification, etc. Although extensible by sophisticated users, it is primarily intended to provide an accessible analysis environment for those new to C++. It is designed to be very easy to use, with simple metaphors for common tasks, and without requiring the user to master the arcana of the C++ programming language. It is intended to be complete, in the sense that it permits the analyst to examine anything available within the reconstruction processing and output. Beta provides the analyst with a common interface to fast or detailed Monte Carlo or real data, regardless of how sparsified and compressed, provided that the required information is available. To do this, Beta defines several basic concepts:

Candidate A candidate is the Beta representation of what is known or assumed about some particle that may have existed in the event. Charged tracks can be candidate pions, neutral clusters can be candidate photons. The output of vertexing two candidates is another candidate, representing the "particle" that might have decayed to those two tracks. All candidates, from different sources and with different implementations, provide the same, complete interface, so they can be used interchangeably.
Operator An operator can combine one or more candidates to form new ones. Vertexing, mass constrained fitting, and simple 4-vector addition are all operators. The analyst selects the operator needed at particular stages in the analysis, trading off accuracy and completeness for speed and flexibility.
Finder Finders search for decayed particles (K0, D0, pi0, ...).
Associator An associator is used to associate candidates with each other (e.g. Monte Carlo truth and reconstructed tracks, charged tracks and clusters, etc.).

Beta is being actively developed and is being used to perform realistic and complex analyses as part of a Mock Data Challenge currently being undertaken by BaBar.
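The candidate/operator metaphor can be outlined as below. This is a hypothetical sketch in the spirit of Beta, not its actual interface: the point is that all candidates expose one common interface, and an operator combines candidates to form a new one.

  // All candidates, whatever their source, present this one interface.
  class Candidate {
  public:
      virtual ~Candidate() {}
      virtual double px() const = 0;
      virtual double py() const = 0;
      virtual double pz() const = 0;
      virtual double energy() const = 0;
  };

  // A composite candidate, e.g. the "particle" formed from two others.
  class Composite : public Candidate {
  public:
      Composite(double px, double py, double pz, double e)
          : px_(px), py_(py), pz_(pz), e_(e) {}
      double px() const { return px_; }
      double py() const { return py_; }
      double pz() const { return pz_; }
      double energy() const { return e_; }
  private:
      double px_, py_, pz_, e_;
  };

  // An operator combines two candidates into a new one; vertexing and
  // mass-constrained fitting would present this same interface.
  class CombineOperator {
  public:
      virtual ~CombineOperator() {}
      virtual Candidate* combine(const Candidate& a,
                                 const Candidate& b) const = 0;
  };

  // The simplest operator: plain 4-vector addition. The caller owns
  // the returned candidate.
  class FourVectorAdder : public CombineOperator {
  public:
      Candidate* combine(const Candidate& a, const Candidate& b) const
      {
          return new Composite(a.px() + b.px(), a.py() + b.py(),
                               a.pz() + b.pz(), a.energy() + b.energy());
      }
  };

A vertexing or mass constrained fitting operator would present the same combine interface, letting the analyst trade accuracy for speed without changing the surrounding analysis code.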

6 Regional Centres

There are currently two sites functioning as regional centres in BaBar, at RAL and CCIN2P3. It is planned that there will be another site in Italy and possibly one in the US. The centres currently act as foci for software development, simplifying the distribution and maintenance of BaBar software and the coordination of commercial licenses. These sites also provide significant facilities for Monte Carlo generation. The three European centres currently plan to copy slightly different versions of the data, based upon anticipated local demand together with the available or planned mass storage infrastructure:

CCIN2P3 The full reconstruction output and any raw data that may be required to fully reprocess the events.
RAL The Event Summary Data. Full reprocessing will not be possible from this sample, although refitting of tracks etc. should be feasible.
Italy Analysis Object Data.

Collaborators will perform most analyses on the data sets at convenient regional centres (SLAC for US collaborators), possibly copying output Ntuples or small event samples back to their home institution. When access to more detailed data is required, the analysis must be performed at another centre, e.g. SLAC, where the more detailed data are available. ODBMSs in principle allow for full integration of such a distributed system, with the data at all centres forming a single large database. User queries would automatically be routed to where the data reside. We expect that international networking will be at best unpredictable over the next 5 years and, consequently, this full integration is not a requirement in our data model. We anticipate that, initially at least, the regional centres will function as autonomous copies of the master database at SLAC. We will investigate the automation/federation of these databases next year (1998).

7 Conclusions

We have developed a computing model which allows widely dispersed developers to collaborate on software development. We have made good progress in adapting to OO programming and are developing an analysis model, based on an ODBMS and a common analysis interface, which can take advantage of the flexibility and power of this approach. Perhaps most significantly, many of our developments to date are already being studied with interest by other collaborations.

References

[1] D. Boutigny et al., "BaBar Technical Design Report", SLAC-R.
[2] R. Jacobsen, "Applying Object-Oriented Software Engineering at the BaBar Collaboration", paper presented at this conference; S.F. Schaffner and J.M. LoSecco, "Object-oriented Tracking and Vertexing at BaBar", paper presented at this conference.
[3] R. Jacobsen, "The BaBar reconstruction system's distributed development/centralised release system", paper presented at this conference.
[4] Atlas Collaboration, "ATLAS Computing Technical Proposal", CERN/LHCC.
[5] R. Jacobsen, "Beta: a high level toolkit for BaBar physics analysis", paper presented at this conference.
