RDMS CMS Computing Activities before the LHC start
1 RDMS CMS Computing Activities before the LHC start
V.Gavrilov (1), I.Golutvin (2), V.Ilyin (3), O.Kodolova (3), V.Korenkov (2), E.Tikhonenko (2), S.Shmatov (2)
(1) Institute of Theoretical and Experimental Physics, Moscow, Russia
(2) Joint Institute for Nuclear Research, Dubna, Russia
(3) Skobeltsyn Institute of Nuclear Physics, Moscow, Russia
Third International Conference "Distributed Computing and GRID Technologies in Science and Education", Dubna, Russia, June 30 - July 4, 2008
[Title slide also shows the RDMS CMS computing model diagram: CERN as Tier 1, the RDMS CMS Tier2 centers (JINR, SINP MSU, IHEP, ITEP, PNPI RAS, LPI RAS, INR RAS) and collaborative centers (RCC MSU, RRC KI, Kharkov, Minsk, Erevan, Tbilisi).]
2 Outline
- RDMS CMS Collaboration
- RDMS CMS Computing Model
- RDMS CMS Computing Activities:
  - creation of a common distributed CMS grid-infrastructure at the RDMS CMS institutes
  - CMS Databases
  - participation in the ARDA project
  - participation in CMS Service, Data and Software Challenges
  - RDMS CMS institutes resources and assignment to physics analysis groups
- Summary
3 Composition of the RDMS CMS Collaboration
RDMS - Russia and Dubna Member States CMS Collaboration
Russia (Russian Federation):
- Institute for High Energy Physics, Protvino
- Institute for Theoretical and Experimental Physics, Moscow
- Institute for Nuclear Research, RAS, Moscow
- Moscow State University, Institute for Nuclear Physics, Moscow
- Petersburg Nuclear Physics Institute, RAS, St.Petersburg
- P.N.Lebedev Physical Institute, Moscow
Associated members:
- High Temperature Technology Center of Research & Development Institute of Power Engineering, Moscow
- Russian Federal Nuclear Centre - Scientific Research Institute for Technical Physics, Snezhinsk
- Myasishchev Design Bureau, Zhukovsky
- Electron, National Research Institute, St. Petersburg
JINR:
- Joint Institute for Nuclear Research, Dubna
Dubna Member States:
- Armenia: Yerevan Physics Institute, Yerevan
- Belarus: Byelorussian State University, Minsk; Research Institute for Nuclear Problems, Minsk; National Centre for Particle and High Energy Physics, Minsk; Research Institute for Applied Physical Problems, Minsk
- Georgia: High Energy Physics Institute, Tbilisi State University, Tbilisi; Institute of Physics, Academy of Science, Tbilisi
- Ukraine: Institute of Single Crystals of National Academy of Science, Kharkov; National Scientific Center "Kharkov Institute of Physics and Technology", Kharkov; Kharkov State University, Kharkov
- Bulgaria: Institute for Nuclear Research and Nuclear Energy, BAS, Sofia; University of Sofia, Sofia
- Uzbekistan: Institute for Nuclear Physics, UAS, Tashkent
The RDMS CMS Collaboration was founded in Dubna in September 1994.
4 RDMS Participation in CMS Construction
[Detector map showing the areas of RDMS full responsibility (ME1/1, HE) and RDMS participation (ME, SE, EE, HF, FS).]
5 RDMS Participation in CMS Project
Full responsibility, including management, design, construction, installation, commissioning, maintenance and operation, for:
- Endcap Hadron Calorimeter, HE
- 1st Forward Muon Station, ME1/1
Participation in:
- Forward Hadron Calorimeter, HF
- Endcap ECAL, EE
- Endcap Preshower, SE
- Endcap Muon System, ME
- Forward Shielding, FS
6 RDMS activities in CMS
- Design, production and installation
- Calibration and alignment
- Reconstruction algorithms
- Data processing and analysis
- Monte Carlo simulation
[Event display: H (150 GeV) -> Z0 Z0 -> 4 mu.]
7 LHC Computing Model
Tier-0 (CERN): filter raw data; reconstruction -> summary data (ESD); record raw data and ESD; distribute raw data and ESD to Tier-1.
Tier-1: permanent storage and management of raw, ESD, calibration data, meta-data, analysis data and databases; grid-enabled data service; data-heavy analysis; re-processing raw -> ESD; ESD -> AOD selection; national, regional support.
Tier-2: simulation, digitization, calibration of simulated data; end-user analysis.
[Diagram also shows example centers - MSU, IC, JINR, ITEP, Cambridge, Budapest, Prague, Santiago, Weizmann, IHEP, TRIUMF, Legnaro, RAL, IN2P3, FNAL, CNAF, FZK, CSCS, Rome, PIC, Kharkov, BNL, ICEPP, Minsk, PNPI, NIKHEF - feeding small centres, desktops and portables.]
8 RuTier2 Cluster
RuTier2 conception: a cluster of institutional computing centers with Tier2 functionality operating for all four experiments - ALICE, ATLAS, CMS and LHCb.
Participating institutes:
- Moscow: ITEP, SINP MSU, RRC KI, LPI, MEPhI
- Moscow region: JINR, IHEP, INR RAS
- St.Petersburg: PNPI, SPbSU
- Novosibirsk: BINP
The RDMS CMS Advanced Tier2 Cluster is a part of RuTier2.
Basic functions: analysis; simulations; users data support; calibration; reconstruction algorithm development.
The CMS data needed: ~5-10% of RAW data (see the rough estimate sketched below); ESD/DST for AOD selection design and checking; AOD for analysis - the volume of data depending on the concrete task.
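A minimal back-of-the-envelope sketch of what the "~5-10% of RAW data" share could amount to. The yearly RAW volume used here is an assumed nominal figure for illustration only; the slides do not quote one.

```python
# Rough estimate of the RAW-data share to be hosted at the RDMS CMS Tier2 cluster
# ("~5-10% of RAW DATA").  The yearly RAW volume below is an assumed nominal value,
# not a number taken from the slides.

ASSUMED_RAW_VOLUME_TB_PER_YEAR = 2000.0   # assumption: ~2 PB of RAW data per nominal year
share_low, share_high = 0.05, 0.10        # "~5-10%" from the slide

low_tb = ASSUMED_RAW_VOLUME_TB_PER_YEAR * share_low
high_tb = ASSUMED_RAW_VOLUME_TB_PER_YEAR * share_high
print(f"RAW share to host at the RDMS CMS Tier2: {low_tb:.0f}-{high_tb:.0f} TB per year")
# With these assumptions the share is 100-200 TB/year, i.e. of the same order as the
# 150 TB of SE disk quoted later in the talk.
```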
9 RDMS CMS computing model: RDMS CMS Advanced Tier2 Cluster as a part of RuTier2
RuTier2 Grid access to CERN: 2004: 155 Mbps; 2005: 310 Mbps; 2007: 1-2 Gbps.
The data processing & analysis scenario is being developed in the context of resource estimation on the basis of the selected physics channels in which RDMS CMS plans to participate.
RDMS CMS computing centers:
- Institutes in Russia: IHEP, ITEP, JINR, SINP MSU (as 4 basic centers), LPI RAS, INR RAS, PNPI RAS
- Collaborative RDMS institutes: ErPhI (Armenia), HEPI (Georgia), KIPT (Ukraine), NCPHEP (Belarus)
[Diagram: CERN as Tier 1 for the RDMS CMS Tier2 centers and the collaborative centers (RCC MSU, RRC KI, Kharkov, Minsk, Erevan, Tbilisi).]
10 CMS Computing model
11 RDMS CMS Tier2 Cluster Features
- Facilities (CPU and data storage resources) for:
  - CMS Monte-Carlo production (30% of RDMS resources): MC simulation of the standard CMS MC samples, including the detector simulation and the first-pass reconstruction; the MC data will be moved to CERN or (and) stored locally;
  - a number of analysis tasks, detector calibration, HLT and offline reconstruction algorithms, and analysis tools development.
- Connectivity between the RDMS institutes: >1 Gbit/s Ethernet.
- Replication of the (conditions) databases used in reconstruction.
- Processing of a part of the raw data in addition to the ordinary Tier-2 functions (such as maintaining and analysis of AODs, data simulation and user support). This is needed both for the calibration/alignment of the CMS detector systems for which RDMS is responsible and for the creation and testing of reconstruction software applied to particular physics channels. Some part of the RAW and ESD/DST data and the proper set of AOD data will be transferred to and kept at the RDMS CMS Tier2 cluster in a mass storage system (MSS) located at one or two institutes.
12 Creation of a common distributed CMS grid-infrastructure at the RDMS CMS institutes
- IHEP, ITEP, JINR, SINP MSU (as 4 basic centers) and INR RAS, LPI RAS, PNPI RAS, KIPT (Ukraine) have their computing infrastructure integrated into the global EGEE/WLCG infrastructure; queues for the CMS VO are enabled and SEs for CMS data storage are provided
- CMSSW (the specialized CMS package for data processing and simulation) is installed at all the RDMS grid sites
- CMS VOBOX services are provided
- PhEDEx servers (the data placement and file transfer system of the CMS experiment) are installed
- Squid servers are installed to provide worldwide access to the "conditions data"; the sketch below illustrates the caching idea
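A minimal conceptual sketch of why a site-local cache for conditions data reduces wide-area load: many worker nodes repeat the same conditions queries, so only the first request has to reach the central database. This is an illustration of the idea only, not the actual Frontier/Squid stack; the class and query names are invented for the example.

```python
# Conceptual read-through cache for "conditions data" at a site: repeated worker-node
# queries are answered locally instead of going to the central database at CERN.

class ConditionsCache:
    def __init__(self, fetch_from_central):
        self._fetch = fetch_from_central   # callable that queries the central DB
        self._store = {}                   # query -> payload, kept at the site
        self.hits = 0
        self.misses = 0

    def get(self, query):
        if query in self._store:
            self.hits += 1                 # served locally, no WAN traffic
            return self._store[query]
        self.misses += 1                   # only the first request leaves the site
        payload = self._fetch(query)
        self._store[query] = payload
        return payload

# Hypothetical usage: 100 worker nodes asking for the same calibration record
cache = ConditionsCache(fetch_from_central=lambda q: f"payload({q})")
for _ in range(100):
    cache.get("HCAL pedestals, run 42")
print(cache.hits, cache.misses)   # -> 99 1: one central-DB access instead of 100
```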
13 Participation in CMS Databases design and support
HE Calibration and Raw Data Flow - CERN-JINR data management for MTCC, common schema of realization:
- Conditions databases at CERN: OMDS (Online Master Data Storage), ORCON (Offline Reconstruction Conditions DB, ONline subset), ORCOF (Offline Reconstruction Conditions DB, OFfline subset); conditions for the HLT and offline reconstruction.
- Calibration constants (pedestals, leveling coefficients, timing; GeV/ADC in progress) are exchanged as XML files in the agreed format via the API to the HE DB (an illustrative example is sketched below).
- Raw data are transferred by GRID protocols from the CERN data storage (Castor, disk pool with AFS) via a data buffer file storage to the JINR data storage (dCache); the data management information system handles the management information and metadata.
HE Calibration DB status: the system is online; full calibration cycle support; integrated into the CMS computing environment; ~30000 records, ~500 MB; ~600 GB of raw data transferred to JINR.
It is necessary to provide facilities for DB replication at the RDMS CMS T2 Cluster.
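A small sketch of shipping HE calibration constants as an XML file. The slide only says that pedestals, leveling coefficients and timing are exchanged as "XML files in the agreed format" without specifying that format, so the tag names, attributes and channel numbering here are invented for the example.

```python
# Illustrative packaging of HE pedestal constants into XML (hypothetical schema).
import xml.etree.ElementTree as ET

def pedestals_to_xml(pedestals, run_number):
    """pedestals: {channel_id: pedestal_value_in_ADC_counts}"""
    root = ET.Element("HECalibration", run=str(run_number), kind="pedestals")
    for channel, value in sorted(pedestals.items()):
        ET.SubElement(root, "channel", id=str(channel), pedestal=f"{value:.2f}")
    return ET.tostring(root, encoding="unicode")

# Hypothetical usage: two channels from a test run
print(pedestals_to_xml({1001: 3.12, 1002: 2.98}, run_number=42))
```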
14 Participation in the ARDA Project
ARDA - A Realisation of Distributed Analysis for LHC (http://lcg.web.cern.ch/lcg/activities/arda/arda.html)
For the CMS Dashboard (the monitoring system for the CMS Virtual Organization distributed computing system):
- monitoring of data transfers between T1 and T2 CMS centers
- monitoring of production jobs with MonALISA:
  a) monitoring of errors (a unique system for LCG and local farms)
  b) monitoring of the WN, SE, network, installed software
  c) participation in the design of the monitoring tables in the Dashboard
- portability of the monitoring system
- participation in the CRAB - Task Manager integration (CRAB - CMS Remote Analysis Builder)
- development of the methods for organizing and accessing data for the CMS Dashboard user interface
- development of a monitoring package to keep track of Grid jobs running via the Condor workload management system; the system is now being tested before its deployment on CMS sites
15 CMS Job Monitoring in the CMS Dashboard
[Architecture diagram: submission tools and jobs on RB/CE/WN publish job information through R-GMA and MonALISA; Dashboard collectors (R-GMA client API and MonALISA) constantly retrieve job information into the Dashboard database (PostgreSQL/SQLite), which serves snapshots, statistics and plots via the PHP Web UI and a Web Service interface.]
RDMS CMS staff participates in the ARDA monitoring activities for CMS:
- monitoring of errors due to the grid environment and automatic job restart in these cases;
- monitoring of the number of processed events;
- further expansion to private simulation.
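A minimal sketch of the kind of aggregation a Dashboard job collector performs: job status records arriving from the information sources are folded into per-site counters from which success rates and error summaries can be plotted. The record fields and status values are illustrative, not the real Dashboard schema.

```python
# Fold job status records into per-site success statistics (illustrative schema).
from collections import defaultdict

def summarize(job_records):
    """job_records: iterable of dicts like {'site': 'T2_RU_JINR', 'status': 'Done' or 'Failed'}"""
    per_site = defaultdict(lambda: {"done": 0, "failed": 0})
    for rec in job_records:
        key = "done" if rec["status"] == "Done" else "failed"
        per_site[rec["site"]][key] += 1
    for site, c in per_site.items():
        total = c["done"] + c["failed"]
        print(f"{site}: {c['done']}/{total} jobs succeeded ({100.0 * c['done'] / total:.1f}%)")

# Hypothetical usage with a few records
summarize([
    {"site": "T2_RU_JINR", "status": "Done"},
    {"site": "T2_RU_JINR", "status": "Done"},
    {"site": "T2_RU_ITEP", "status": "Failed"},
])
```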
16 CMS jobs running at the JINR WLCG site in 2008 (information obtained from the CMS Dashboard)
17 RDMS CMS Participation in CMS Computing Activities
Within the CMS Computing, Software, Data and Analysis challenges (DC04, CSA06, CSA07) and during the Common Computing Readiness Challenge of 2008 (CCRC08):
- LCG installation of CMSSW; PhEDEx and Squid servers installation;
- participation in PhEDEx data transfers (load test transfers and heart-beat transfers; this spring the links to the RDMS CMS sites have been certified at the rate of 20 MB/s);
- participation in CMS event simulation: for example, in autumn 2006 two RDMS CMS sites (JINR and SINP) produced about 1% of the total amount of simulated events (~50 million events);
- Job Robot (user analysis jobs) testing at ITEP, JINR, SINP, IHEP and RRC KI.
The Common Computing Readiness Challenge of 2008 (CCRC08) aimed to test the full scope of the data handling and analysis activities needed for LHC data-taking operations in 2008.
18 CERN-PROD as T1 center for RDMS (since February 2007)
[Plot: transfer rate in MB/s.] March 19, 2007: 99% of transfers were successful and 990 GB were transferred.
For comparison, in October-November 2006 (CNAF as the FTS T1 for JINR): transfer rates below 2 MB/s, ratio of successful transfers 14-42%.
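Rough arithmetic on what the quoted rates mean in practice. The actual CERN-PROD rate achieved on March 19, 2007 is not preserved in this transcription, so the 20 MB/s link-certification rate quoted on the previous slide is used here only as a reference point alongside the CNAF-era 2 MB/s.

```python
# Time needed to move the quoted 990 GB at the two reference rates.
volume_gb = 990.0                      # data moved in the March 19, 2007 exercise
for rate_mb_s in (2.0, 20.0):          # CNAF-era rate vs certified-link reference
    hours = volume_gb * 1024.0 / rate_mb_s / 3600.0
    print(f"990 GB at {rate_mb_s:>4.0f} MB/s -> {hours:5.1f} hours")
# -> roughly 140 hours at 2 MB/s versus roughly 14 hours at 20 MB/s,
#    i.e. an order-of-magnitude difference for the same dataset.
```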
19 Example of Current CERN-JINR Transfer Rates ( )
20 CMS Job Robot testing
The JobRobot is a program, currently operated from a machine at CERN, that creates CRAB jobs, submits them to specific sites, and collects them while keeping track of the corresponding information. Its main objective is to test how sites respond to job processing in order to detect possible problems and correct them as soon as possible.
JINR, IHEP and ITEP started their participation in CMS Job Robot site testing. During October 2007 - March 2008, the jobs submitted to the RDMS CMS sites (JINR, ITEP and IHEP) ended successfully in 91% of cases (compare: 79.7% of the jobs submitted to all the CMS sites ended successfully). SINP and RRC KI have successfully joined these activities since May.
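A conceptual sketch of what a "Job Robot" style site test does: periodically send a standard test job to every site under test and record whether it completed, so that per-site success rates like the 91% quoted above can be derived. This is an illustration of the idea only, not the actual CERN JobRobot implementation; the success probability and round count are invented for the example.

```python
# Periodic test-job submission with per-site success-rate bookkeeping (illustrative).
import random
import time

SITES = ["JINR", "ITEP", "IHEP"]            # sites under test, as in the slide

def submit_test_job(site):
    """Stand-in for 'create a CRAB job and submit it to the site'."""
    time.sleep(0.01)                        # pretend to wait for the job
    return random.random() < 0.9            # pretend ~90% of test jobs succeed

def run_round(results):
    for site in SITES:
        results.setdefault(site, []).append(submit_test_job(site))

results = {}
for _ in range(20):                         # 20 test rounds
    run_round(results)
for site, outcomes in results.items():
    print(f"{site}: {100.0 * sum(outcomes) / len(outcomes):.0f}% of test jobs succeeded")
```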
21 RDMS CMS T2 Cluster Resources and CMS Analysis Tasks
By June 2008 we provide 725 KSI2K of CPU resources and 150 TB of SE disk space for CMS (see the rough capacity estimate below). A significant increase of resources is expected by autumn of this year.
At the moment the Heavy Ions analysis task is assigned to the RDMS CMS T2 cluster; after the storage capacities are expanded, two more tasks (Exotica and Muons) are expected to be assigned.
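A rough capacity estimate for the 150 TB of SE disk quoted above. The AOD event size used here is an assumed typical value for illustration; the slides do not give one.

```python
# How many AOD events the quoted SE disk could hold, under an assumed event size.
se_disk_tb = 150.0                        # from the slide
ASSUMED_AOD_EVENT_KB = 50.0               # assumption: ~50 kB per AOD event

events = se_disk_tb * 1e9 / ASSUMED_AOD_EVENT_KB   # 1 TB = 1e9 kB (decimal units)
print(f"{events:.1e} AOD events fit in {se_disk_tb:.0f} TB")   # ~3e9 events
```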
22 User Training and Induction
Courses for CMS users on submitting jobs to the LCG infrastructure are regularly conducted.
23 SUMMARY
- The RDMS CMS Computing Model has been defined as a Tier2 cluster of institutional computing centers with partial T1 functionality.
- The data processing & analysis scenario has been developed in the context of resource estimation on the basis of the selected physics channels in which RDMS CMS plans to participate.
- The decision that CERN will serve as the T1 center for RDMS is a strong foundation for meeting the RDMS CMS computing model requirements; CERN-T1 has served as the FTS server for the Russian sites since February 2007.
- The proper RDMS CMS Grid infrastructure has been constructed at RuTier2 and has been successfully tested during the CMS computing challenges of the past years.
- The RDMS RuTier2 CPU and storage resources are sufficient for the analysis of the first data after the LHC start and for simulation in 2008.
24 BACKUP SLIDES
25 CMS Job Robot lessons
When we started participating in CMS Job Robot testing at JINR in October 2007, we encountered JINR local network overloading. At that time all 80 computing nodes (4 cores each, split over 3 racks) were connected to the main LAN router via 3 x 1GbE links, and all 12 SE pool nodes shared a single 1GbE link. An attempt to improve the situation, keeping the computing nodes on the 1 Gb/s links and connecting each of the 12 SE pool nodes by its own 1GbE link, did not give the desired result: this time we ended up with completely overloaded 1GbE links to the 3 WN racks.
The jobs submitted by the Job Robot were typical CMS jobs in which a small sample of events is extracted from a large .root file and a new (smaller) file is created for further physics analysis. Such jobs require only several minutes of CPU time, but reading .root files of >2 GB size causes roughly 3 times more data transfer over the local network. As a result, the local network overloading caused trouble with the TCP/IP, SNMP and SSH network protocols and a low efficiency of job execution (CMS jobs with 3 minutes of CPU time spent more than 1.5 hours of wall-clock time in the batch system). A rough bandwidth estimate is sketched below.
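A back-of-the-envelope check of why the 3 x 1GbE rack uplinks saturated. The job concurrency equals the slot count from the slide; the per-job input size matches the ">2 GB" files mentioned, while the read window is an assumption for illustration.

```python
# Aggregate read demand of the farm vs available rack uplink capacity.
slots = 80 * 4                     # 80 worker nodes x 4 cores (from the slide)
input_gb_per_job = 2.0             # ">2 GB" .root file read by each job
ASSUMED_READ_WINDOW_MIN = 10.0     # assumption: the file is read over ~10 minutes

demand_gbit_s = slots * input_gb_per_job * 8.0 / (ASSUMED_READ_WINDOW_MIN * 60.0)
uplink_gbit_s = 3 * 1.0            # 3 x 1GbE links to the WN racks
print(f"aggregate read demand ~{demand_gbit_s:.1f} Gbit/s vs {uplink_gbit_s:.0f} Gbit/s of uplink")
# With these assumptions the farm asks for ~8.5 Gbit/s while only 3 Gbit/s is available,
# consistent with 3-minute CPU jobs stretching to >1.5 hours of wall-clock time.
```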
26 CMS Job Robot lessons (cont.)
Thus we decided to reconfigure the JINR computing center local network by building a separate dedicated subnetwork for the disk RAIDs, the computing farm and a number of the NFS servers. That reconfiguration required installing a new ProCurve 3500yl-48G as the main LAN router and several ProCurve G/48G switches. At present all racks with WNs and SE pools are connected with 4-8 1GbE links to the main LAN router (802.3ad link aggregation/trunking). As a result, these reconfiguration actions and the attendant upgrade provided the proper environment for LHC jobs and, in particular, for CMS jobs.