Experience with Data-flow, DQM and Analysis of TIF Data
1 Experience with Data-flow, DQM and Analysis of TIF Data
G. Bagliesi, R.J. Bainbridge, T. Boccali, A. Bocci, V. Ciulli, N. De Filippis, M. De Mattia, S. Dutta, D. Giordano, L. Mirabito, C. Noeding, F. Palla, S. Sarkar
ECAL-DPG Meeting, CERN
2 Tracker Integration & Slice Test (~25%)
- verify long-term stable operation of the system and of the data acquisition
- finalise PS, DAQ, DCS and Safety systems
- deploy Online software and the Data Quality Monitor
- collect a significant cosmic data sample, useful to develop and deploy track reconstruction, cosmic tracking and alignment algorithms
- establish data movement and analysis in the Grid computing environment
[Diagram: Tracker Analysis Center, with DAQ, DQM/IGUANA and Analysis]
- The CMS tracker has been fully integrated at the TIF
- Commissioning of 25% of the tracker with cosmic muons, on-going since Feb '07
3 TIF Data Processing Overview
1. Conversion of Raw data to EDM-compatible format
2. Copying of Raw and EDM data to Castor
3. Registration of EDM data in DBS-1/DLS (DBS-2 registration underway, for both Raw and EDM)
4. Injection into PhEDEx for the transfer
5. Reconstruction with ProdAgent
6. Data analysis via CRAB
[Diagram: DAQ + Filter Farm -> StorageManager -> local disk storage (temporary, before transfer to CASTOR) -> copying to CASTOR / registration in DBS/DLS -> shipping to Tier-1/2; DQM, local reconstruction and visualization (IGUANA) run locally; publication, injection, subscription, skimming and user analysis follow]
- Data registered in Bari/CERN/FNAL
- Publication of Reco data
- Global Monitoring: track the TIF data-flow in real time
- Migration of TIF data to the Tier-0 in progress
4 Data Quality Monitor
- DQM is an indispensable tool to
  - continuously monitor the performance of a large number of detectors
  - find problems, and find them early: saves a lot of headache downstream
  - send instant feedback to hardware and reconstruction experts
- Smooth data-taking will be ensured iff all the above are under control
- Provides summary information for the shifters and all the imaginable details for the experts
5 Different Modes of Running DQM
- Online DQM, during data taking: events from the Storage Manager (EventStreamHttpReader), 1 event out of 10 (configurable); a configuration sketch follows below
- Offline DQM (Source + Collector): source reading events from files stored on disk (local/Castor)
- Quasi-Online DQM (Source + Client): standalone client together with the source modules in a single process, to achieve full statistics and bookkeeping
- In all cases, full reconstruction of the runs together with the DQM source
- For all modes the output is a ROOT file with histograms arranged in folders (Collation, Accumulation, Summary)
- Statistical tool: OK / Warning / Failed
- Web-based visualization
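To make the online mode concrete, below is a minimal sketch of a DQM process fed by the Storage Manager. It is written in the Python configuration syntax of later CMSSW releases (the TIF setup used the older .cfg format); the Storage Manager URL, the consumer parameters and the choice of monitor module are illustrative assumptions, not the actual TIF configuration.

```python
# Minimal sketch of an online DQM process reading events from the Storage Manager.
# Python config syntax of later CMSSW releases; all values are placeholders.
import FWCore.ParameterSet.Config as cms

process = cms.Process("DQM")

# Events are served over HTTP by the Storage Manager; only a fraction of the
# events (roughly 1 out of 10, configurable) reaches the DQM consumer.
process.source = cms.Source("EventStreamHttpReader",
    sourceURL = cms.string("http://sm-host:22100/urn:xdaq-application:lid=29"),  # placeholder URL
    consumerName = cms.untracked.string("Tracker DQM"),
    maxEventRequestRate = cms.untracked.double(2.0)   # assumed parameter name
)

# A DQM source module filling the monitor elements (illustrative choice,
# shown without its full parameter set).
process.SiStripMonitorCluster = cms.EDAnalyzer("SiStripMonitorCluster")

process.p = cms.Path(process.SiStripMonitorCluster)
```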
6 DQM Shifter View
- Global tracks with reference
- Pre-defined layout
- Start a slide show
- Check the lite or detailed text summary of QTest results
7 DQM Expert View
- Select the part of the detector showing a problem
- Navigate down the folder tree
- Pinpoint the culprit(s)
8 Tracker Map
- 2D representation of the tracker, painted with the generated alarms [M. Mennea, G. Zito]
- Track down the culprit modules, click and see the details
9 Local Resources: CPU and Storage
Two dedicated PCs at the Tracker Analysis Center (TAC):
- cmstkstorage-giga (storage and processing)
  - 2 data volumes, each ~1 TB, used alternately during data-taking
  - temporary storage; a robust clean-up mechanism in place (a sketch follows below)
  - data volumes are exported to each TAC machine via NFS
  - copies Raw and EDM files to Castor
  - performs DBS/DLS registration
- cmstac11
  - converts Raw data to EDM format
  - loads pedestal and noise values from the online to the offline DB (o2o), crucial for the offline processing of the data
  - hosts the global monitoring of TIF data
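For illustration, a clean-up pass of this kind could look like the sketch below: remove local files that are older than some threshold and whose Castor copy has the same size. The mount point, the age threshold and the use of nsls for the check are assumptions, not the actual TAC scripts.

```python
#!/usr/bin/env python
# Sketch of a clean-up pass for the temporary data volumes: remove local files
# that are old enough AND already safely archived in Castor with the same size.
# Paths, the age threshold and the Castor check are illustrative assumptions.
import os
import subprocess
import time

LOCAL_VOLUME = "/data1"                              # hypothetical mount point
CASTOR_DIR = "/castor/cern.ch/cms/testbeam/tac"      # Raw file archive area
MAX_AGE_DAYS = 7

def archived_in_castor(filename):
    """True if the file exists in Castor with the same size as the local copy."""
    castor_path = os.path.join(CASTOR_DIR, filename)
    proc = subprocess.run(["nsls", "-l", castor_path],
                          capture_output=True, text=True)
    if proc.returncode != 0:
        return False
    castor_size = int(proc.stdout.split()[4])        # ls-like output assumed
    local_size = os.path.getsize(os.path.join(LOCAL_VOLUME, filename))
    return castor_size == local_size

def cleanup():
    now = time.time()
    for filename in os.listdir(LOCAL_VOLUME):
        path = os.path.join(LOCAL_VOLUME, filename)
        age_days = (now - os.path.getmtime(path)) / 86400.0
        if age_days > MAX_AGE_DAYS and archived_in_castor(filename):
            print("removing", path)
            os.remove(path)

if __name__ == "__main__":
    cleanup()
```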
10 Conversion, Copying to Castor
- Fully automated with cron jobs and daemons
- All types of runs are converted (physics, pedestal, etc.)
- Raw files archived in Castor under /castor/cern.ch/cms/testbeam/tac
- EDM files archived in Castor under /castor/cern.ch/cms/store/tac
- Once the EDM files for a run are copied to Castor, a catalog is prepared for the DBS/DLS registration in the next step of the chain
- Experience
  - flat-file based book-keeping; only one conversion process runs at a time (a sketch of the loop follows below)
  - Castor has its own well-known problems
  - code developed in the production environment; initially we had difficult moments
  - NFS slows down the processing when a large number of clients access the data volumes
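The conversion/copy chain described above could be sketched roughly as follows. The converter command, the directory layout and the book-keeping file are placeholders; rfcp is the standard Castor copy command.

```python
#!/usr/bin/env python
# Sketch of the conversion/copy loop: one process at a time (lock file),
# flat-file book-keeping, rfcp to Castor. The Raw-to-EDM converter invocation
# in particular is a placeholder, not the real tool.
import os
import subprocess
import sys

RAW_DIR = "/data1/raw"                               # hypothetical
EDM_DIR = "/data1/edm"                               # hypothetical
CASTOR_RAW = "/castor/cern.ch/cms/testbeam/tac"
CASTOR_EDM = "/castor/cern.ch/cms/store/tac"
BOOKKEEPING = "/data1/processed_runs.txt"            # flat-file book-keeping
LOCK_FILE = "/tmp/tif_conversion.lock"

def already_processed(run_file):
    if not os.path.exists(BOOKKEEPING):
        return False
    with open(BOOKKEEPING) as f:
        return run_file in {line.strip() for line in f}

def mark_processed(run_file):
    with open(BOOKKEEPING, "a") as f:
        f.write(run_file + "\n")

def process_run(run_file):
    raw_path = os.path.join(RAW_DIR, run_file)
    edm_path = os.path.join(EDM_DIR, run_file.replace(".dat", ".root"))
    # Raw -> EDM conversion: placeholder command, not the real converter.
    subprocess.run(["tif_raw2edm", raw_path, edm_path], check=True)
    # Copy both files to Castor with rfcp, then record the run.
    subprocess.run(["rfcp", raw_path, CASTOR_RAW], check=True)
    subprocess.run(["rfcp", edm_path, CASTOR_EDM], check=True)
    mark_processed(run_file)

def main():
    if os.path.exists(LOCK_FILE):                    # only one instance runs
        sys.exit(0)
    open(LOCK_FILE, "w").close()
    try:
        for run_file in sorted(os.listdir(RAW_DIR)):
            if not already_processed(run_file):
                process_run(run_file)
    finally:
        os.remove(LOCK_FILE)

if __name__ == "__main__":
    main()
```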
11 Registration in DBS-1/DLS
- A number of daemon processes continuously look for new runs, i.e. DBS catalog files created by the previous steps (a sketch of one pass follows below)
- Technicalities
  - a Grid certificate with production role is required to connect to DLS: voms-proxy-init -voms cms:/cms/role=production
  - registration scripts based on the DBS/DLS API
  - one DBS and one DLS instance: MCLocal_4/Writer for DBS, prod-lfc-cms-central.cern.ch/grid/cms/dls/mclocal_4 for DLS
  - for EDM files, provide the file size, the number of events in the file and the checksum
- Experience
  - no hiccups at all; fast and robust
  - registration repeated for a few runs due to problems in the previous steps
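A single pass of such a registration daemon might look like the sketch below. The catalog drop-box location is an assumption, and register_in_dbs / register_in_dls stand in for the real DBS/DLS API-based scripts, which also pass the file size, number of events and checksum.

```python
#!/usr/bin/env python
# Sketch of a registration daemon pass: check for a valid production proxy,
# pick up new DBS catalog files left by the conversion step and register them.
# register_in_dbs / register_in_dls are placeholders for the real API calls.
import glob
import os
import subprocess

CATALOG_DIR = "/data1/catalogs"        # hypothetical drop-box for new catalogs
DONE_SUFFIX = ".registered"

def have_production_proxy():
    """A production-role VOMS proxy is needed to write into DLS."""
    return subprocess.run(["voms-proxy-info", "--exists"]).returncode == 0

def register_in_dbs(catalog):
    """Placeholder for the DBS API-based registration script."""
    print("DBS registration for", catalog)

def register_in_dls(catalog):
    """Placeholder for the DLS API-based registration script."""
    print("DLS registration for", catalog)

def main():
    if not have_production_proxy():
        raise SystemExit("run 'voms-proxy-init -voms cms:/cms/role=production' first")
    for catalog in sorted(glob.glob(os.path.join(CATALOG_DIR, "*.xml"))):
        if os.path.exists(catalog + DONE_SUFFIX):
            continue                                  # already registered
        register_in_dbs(catalog)
        register_in_dls(catalog)
        open(catalog + DONE_SUFFIX, "w").close()      # flat-file book-keeping

if __name__ == "__main__":
    main()
```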
12 Raw Tracker Data in DBS/DLS
[Figure: DBS/DLS listing showing the Tracker data and MTCC data entries]
13 Injection into PhEDEx for Transfer
- Data published in DBS/DLS are injected into the official CMS data movement tool, PhEDEx
- Data injection is the procedure that writes into the PhEDEx database; it can be run at a remote Tier site
- Presently the injection is performed from Bari, using an official PhEDEx agent and a component of ProdAgent modified to close blocks at the end of the transfer, so that the automatic publication to DLS works
- Several daemons continuously look for new tracker data being published in DBS/DLS
- Once datasets are injected into PhEDEx, any Tier-n site can subscribe to them and PhEDEx will eventually deliver them
- Tracker data available at: CERN, FNAL, Bari, Pisa
14 Tracker Data in PhEDEx
[Figure: tracker datasets as shown in PhEDEx]
15 PhEDEx Experience
- If Castor fails to deliver files, PhEDEx may wait indefinitely
  - PhEDEx is not supposed to identify and work around mass-storage problems; for efficiency, PhEDEx assigns a group of files at a time for transfer
- File size mismatch between Castor and the PhEDEx TMDB
  - some EDM files were overwritten after injection into PhEDEx: multiple Raw-to-EDM conversion processes created a problem with the file-based book-keeping
  - reverted to a single process a couple of months ago; the PhEDEx TMDB was updated; less than 0.1% of the files were affected (a consistency-check sketch follows below)
- CERN to FNAL Raw data transfer affected by other transfers with higher priority (MC production)
  - eventually the importance of the tracker data was recognised and the transfer was streamlined
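A simple consistency check of this kind, comparing the file size recorded at injection time with the size currently reported by the Castor name server, might look like the following sketch; the book-keeping file format and the ls-like parsing of the nsls -l output are assumptions.

```python
#!/usr/bin/env python
# Sketch of a consistency check between the file size recorded at injection
# time and the size currently in Castor, to spot files overwritten after
# injection. The "path size" book-keeping format is an assumption.
import subprocess

INJECTED_FILES = "injected_files.txt"    # hypothetical record written at injection

def castor_size(path):
    out = subprocess.run(["nsls", "-l", path], capture_output=True, text=True)
    if out.returncode != 0:
        return None                      # file missing in Castor
    return int(out.stdout.split()[4])    # ls-like output assumed

def main():
    with open(INJECTED_FILES) as f:
        for line in f:
            path, recorded_size = line.split()
            size_now = castor_size(path)
            if size_now is None:
                print("MISSING :", path)
            elif size_now != int(recorded_size):
                print("MISMATCH:", path, recorded_size, "->", size_now)

if __name__ == "__main__":
    main()
```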
16 Standard Reconstruction
- Run the reconstruction of Raw data in a standard and official way, i.e. using a CMSSW release or pre-release without user patches
- ProdAgent can be used in the same way as for MC production and can be run at any remote Tier-1/2
- Running with ProdAgent ensures that the reconstructed data are automatically registered in DBS/DLS, ready to be shipped via PhEDEx to other Tiers and analysed with the standard distributed computing tools
- The offline DB (pedestal, noise, cabling information) is accessed at the remote site via a Frontier/Squid cache
- Currently, the reconstruction of new runs is triggered automatically by a ProdAgent instance in Bari
- Jobs run at CERN, FNAL, Bari and Pisa, where the Raw data are available
- Reconstructed data are registered in DBS/DLS from the sites where they are produced
17 Reconstruction in a Development Environment
- Performed at FNAL using ProdAgent, with releases or pre-releases patched with the latest developments/bug fixes
- Provides immediate feedback to the tracking developers: incorporates the corrected geometry and the latest algorithm changes, exercised with physics runs on track reconstruction algorithms and alignment
- Not fully compatible with the official naming convention for the reconstructed data
- Reconstructed data transferred to the other sites in the usual way
- Details at
18 Reco Tracker Data in DBS/DLS
[Figure: reconstructed tracker datasets as listed in DBS/DLS]
19 Data Analysis with CRAB
- Both Raw and Reco data published in DBS/DLS can be analysed with CRAB at the different Tiers
- Edit crab.cfg and insert the dataset path of the run to be analysed; CRAB automatically gets the file list (a minimal example follows below)
- Follow the usual steps
  - set up the CMS environment
  - compile your code
  - provide the usual CMSSW cfg to be used by cmsRun
- Offline DB accessed via Frontier at the Tier-1/2
- Analysis steps automated via CRAB (next slide)
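As an illustration, the per-run crab.cfg could be generated with a small script like the one below. The section and key names follow the usual crab.cfg layout of that CRAB generation as we recall it; the dataset path and the parameter-set file name are placeholders.

```python
#!/usr/bin/env python
# Sketch: generate a minimal crab.cfg for one dataset. Section and key names
# are illustrative (typical CRAB layout of the time); values are placeholders.
CRAB_CFG_TEMPLATE = """\
[CRAB]
jobtype   = cmssw
scheduler = glite

[CMSSW]
datasetpath            = {dataset}
pset                   = tif_analysis.cfg
total_number_of_events = -1
number_of_jobs         = 10

[USER]
return_data = 1
"""

def write_crab_cfg(dataset, filename="crab.cfg"):
    with open(filename, "w") as f:
        f.write(CRAB_CFG_TEMPLATE.format(dataset=dataset))

if __name__ == "__main__":
    # Dataset path as published in DBS/DLS (placeholder value).
    write_crab_cfg("/ExampleTIFDataset/RAW/placeholder")
```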
20 Automated Analysis with CRAB
The automated process repeats the following steps for all the interesting physics runs (a sketch of the submission/retrieval cycle follows below):
- Run discovery: combine information from the Run Summary page, DBS (and eventually a custom list of runs)
- Change of the CMSSW analysis release: evolution of the code, new reconstruction features in newer releases
- Analysis Flag assignment when something changes: an analysis condition, a parameter, ...
- Creation/submission of all the CMSSW jobs with CRAB
- Monitoring / output retrieval of the jobs on the Grid
  - the CRAB user is supposed to retrieve the job output from the WN; there is no automatic retrieval on job completion
  - eventual output merging if CRAB split the job into multiple sub-jobs
- Run the analysis ROOT macros on the output
- Present the results in a way useful to identify trends, problems, etc.
  - publish on the web, allowing easy navigation through the several Flags, runs and results, and open access for everyone interested in contributing to the analysis
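The submission and retrieval steps around the CRAB command line could be automated along the lines of the sketch below. Run discovery and crab.cfg generation are reduced to stubs, the working-directory naming is an assumption, and the crab -create/-submit/-status/-getoutput options are the usual CRAB command-line options of that era as we recall them.

```python
#!/usr/bin/env python
# Sketch of the automated create/submit/status/getoutput cycle around the
# CRAB command line. Run discovery and crab.cfg generation are stubs; the
# working-directory naming is an assumption.
import subprocess

def discover_runs():
    """Stub for the real run discovery (Run Summary page + DBS + custom list)."""
    return [("12345", "/ExampleTIFDataset/RAW/placeholder")]

def write_crab_cfg(dataset):
    """Stub: generate crab.cfg for this dataset (see the earlier sketch)."""
    pass

def submit(run, dataset, flag="FlagA"):
    workdir = "crab_%s_%s" % (flag, run)
    write_crab_cfg(dataset)
    # Create and submit all jobs for this run in one go.
    subprocess.run(["crab", "-create", "-submit", "-cfg", "crab.cfg"], check=True)
    return workdir

def retrieve(workdir):
    # No automatic retrieval on completion: poll the status, then fetch the
    # output from the working directory once the jobs are done.
    subprocess.run(["crab", "-status", "-c", workdir], check=True)
    subprocess.run(["crab", "-getoutput", "-c", workdir], check=True)

if __name__ == "__main__":
    for run, dataset in discover_runs():
        retrieve(submit(run, dataset))
```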
21 Automated Analysis (continued)
[Figure: example web pages produced by the automated analysis]
22 TIF Data Processing Statistics
[Plot of the collected statistics, annotated "technical problem"]
- Approaching 5M physics events
- Several factors affect the reconstruction phase (finding Grid resources, availability of the proper CMSSW release, etc.)
23 Global Monitoring
- Follow the TIF data movement in real time, from the local storage to the Tier sites
- We have our own monitor at
  - essential to track problems early in the long chain
- Detailed information for a run in each phase
  - useful as an easy reference for the TIF data
- New ideas still flowing in
[Screenshot of the monitoring page, with filters]
24 Summary
- DQM, a crucial component to find problems early; optimised for both end-users and experts
- Fully automated data movement, reconstruction and analysis of TIF data
  - somewhat ad-hoc yet robust design; still evolving with time
  - no problems encountered in the last couple of months
- Data movement/processing limited by
  - the conversion efficiency
  - the Castor copy efficiency and Castor delivering files to PhEDEx
  - the lack of a better, DB-based book-keeping that would allow multiple conversion processes to run in parallel and speed up the whole chain
- Documentation: CMS IN-2007/014
- It was a challenging exercise for the community, and it is very satisfying that we made it in the best possible way