Spanish Tier-2. Francisco Matorras (IFCA), Nicanor Colino (CIEMAT). Spain CMS T2, 6 March 2008
1 Spanish Tier-2 Francisco Matorras (IFCA) Nicanor Colino (CIEMAT)
2 Introduction
We report here the status of the federated Tier-2 for CMS, basically corresponding to the project budget, concentrating on last year's results.
Goals (as stated in the proposal):
- Main objective: be prepared early in 2008 for the first physics run
- Provide resources equivalent to 5% of the total CMS needs for Tier-2s, as agreed in the WLCG MoU
- Provide the corresponding services with good quality
- Provide a computing environment that lets the Spanish CMS HEP community fully exploit the LHC physics potential
- Contribute to CMS with the required resources for common computing tasks
- Train specialized manpower for operation, maintenance and future upgrades of the centre
Additional important role for our Tier-2:
- User support to physicists for software and computing issues
- Encourage and help the integration of users into grid analysis tools
3 Spanish T2 Activities
Remark: very good integration of physics, detector and software needs! We are not only participating in challenges as a computing exercise to commission the system, but already providing services and resources.
- Very active and successful participation from the start of CSA06: good quality of data transfers, several exercises run (alignment and analysis), successful integration of a significant number of physicists
- Usage maintained after CSA06: more or less continuous, flat usage even outside challenges
- Contributed to CSA07 within the limited tasks expected for a Tier-2
- Continuous contribution to physics analysis
- Providing resources and services for real (cosmic) data analysis: detector and software commissioning, muon reconstruction, alignment and calibration tasks; MTCC data was (and is) being analyzed at our Tier-2, as is GR (Global Run) data
- Significant contribution to MC production during these years (above the 5% share)
4 Cosmic Data
MTCC, the Magnet Test and Cosmic Challenge, took place during 2006: the CMS magnet was tested and data was taken for a fraction of the detector.
- The Spanish Tier-2 was one of the few sites where the data was transferred: a total of 20 TB transferred and published for physics (still available)
- Widely used by the Spanish community and still in use: several CMS notes based only on real data, ongoing analyses (cosmic muon charge ratio), muon reconstruction, alignment and calibration tasks
- Important for detector and software commissioning
- Similar work is going on with the Global Run of end 2007, and more is coming soon
5 Data transfers
The Spanish Tier-2 has participated in the different data transfer tests during this two-year period; data was also transferred for analysis at the request of Spanish physicists.
- Over 1 PB was imported and over 400 TB were exported
- Connected to all CMS Tier-1 sites
[Plots: transfer volume to the Spanish Tier-2 and from the Spanish Tier-2]
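As an illustration of how per-link transfer records behind plots like these can be rolled up into the import/export totals quoted above, here is a minimal sketch; the record format, site names in the sample data and the volumes are assumptions for illustration, not the actual PhEDEx schema or monitoring numbers.

```python
from collections import defaultdict

# Hypothetical per-link transfer records: (source, destination, terabytes moved).
# The real numbers come from PhEDEx monitoring; these values are only placeholders.
transfers = [
    ("T1_ES_PIC",    "T2_ES_CIEMAT", 310.0),
    ("T1_DE_FZK",    "T2_ES_IFCA",   250.0),
    ("T1_US_FNAL",   "T2_ES_CIEMAT", 480.0),
    ("T2_ES_IFCA",   "T1_ES_PIC",    260.0),
    ("T2_ES_CIEMAT", "T1_ES_PIC",    170.0),
]

SPANISH_T2 = {"T2_ES_CIEMAT", "T2_ES_IFCA"}

imported = defaultdict(float)   # TB received per Spanish T2 site
exported = defaultdict(float)   # TB sent per Spanish T2 site
for src, dst, tb in transfers:
    if dst in SPANISH_T2:
        imported[dst] += tb
    if src in SPANISH_T2:
        exported[src] += tb

print("imported:", dict(imported), "total", sum(imported.values()), "TB")
print("exported:", dict(exported), "total", sum(exported.values()), "TB")
```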
6 Transfer quality
Good transfer quality during CSA07: obviously there were some bad periods during debugging, but in most cases the problem was on the Tier-1 side.
7 Transfer quality
[Plots: two examples of average transfer quality over a year, compared to other T2 sites, for transfers from CNAF and from PIC]
8 Commissioned links
CMS requires sites to maintain a reasonable efficiency and activity on each link to keep it active; if this is not fulfilled, the link is deactivated for production transfers.
- Link commissioning is driven by challenges and physics requirements: a site needs an active link to the Tier-1 where the data of interest is sitting
- According to the computing model: download from all Tier-1s (not the Tier-0) and upload to one Tier-1 (PIC)
- In practice: a link from the Tier-0 is allowed and needed for some tasks; links from all Tier-1s are not (yet) necessary; upload to at least two Tier-1s (as backup)
- The Spanish sites have commissioned most of the links (all the needed ones); the non-active links were simply never attempted because they were unnecessary (not requested by CMS and no interesting data stored at those sites)
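A minimal sketch of the kind of activity/quality rule described above, deciding whether a link stays in production; the thresholds and the per-day sample format are assumptions chosen for illustration, not the actual CMS link-commissioning criteria.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class DaySample:
    attempted: int    # transfer attempts on the link that day
    successful: int   # transfers that completed

def link_stays_active(history: List[DaySample],
                      min_active_days: int = 3,
                      min_quality: float = 0.5) -> bool:
    """Keep a link in production only if it shows recent activity and a
    reasonable success rate; thresholds here are illustrative, not CMS policy."""
    active = [d for d in history if d.attempted > 0]
    if len(active) < min_active_days:
        return False  # too little activity to keep the link commissioned
    quality = sum(d.successful for d in active) / sum(d.attempted for d in active)
    return quality >= min_quality

# One week of samples on a hypothetical T1 -> T2_ES_IFCA link.
week = [DaySample(20, 18), DaySample(0, 0), DaySample(15, 12),
        DaySample(30, 29), DaySample(0, 0), DaySample(10, 9), DaySample(5, 4)]
print(link_stays_active(week))  # True
```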
9 Current number of active links

Site Name                  To T1   From T1
T2_AT_Vienna                 2        2
T2_BE_IIHE                   3        5
T2_BE_UCL                    3        4
T2_BR_SPRACE                 0        0
T2_BR_UERJ                   1        1
T2_CH_CSCS                   4        6
T2_CN_Beijing                2        3
T2_DE_DESY                   5        8
T2_DE_RWTH                   4        6
T2_EE_Estonia                5        3
T2_ES_CIEMAT                 2        5
T2_ES_IFCA                   3        8
T2_FI_HIP                    1        3
T2_FR_GRIF_DAPNIA            0        0
T2_FR_GRIF_LAL               2        1
T2_FR_GRIF_LLR               2        0
T2_FR_GRIF_LPNHE             2        0
T2_HU_Budapest               1        1
T2_IN_TIFR                   0        1
T2_IT_Bari                   4        6
T2_IT_Legnaro                2        5
T2_IT_Pisa                   4        7
T2_IT_Rome                   2        4
T2_PL_Warsaw                 3        3
T2_PT_LIP_Coimbra            2        2
T2_PT_LIP_Lisbon             1        2
T2_RU_IHEP                   0        0
T2_RU_ITEP                   1        1
T2_RU_JINR                   1        1
T2_RU_PNPI                   0        0
T2_RU_RRC_KI                 0        0
T2_RU_SINP                   0        0
T2_TR_METU                   0        1
T2_TR_ULAKBIM                0        0
T2_TW_Taiwan                 3        6
T2_UK_London_Brunel          2        3
T2_UK_London_Imperial        2        3
T2_UK_London_QMUL            0        0
T2_UK_London_RHUL            0        1
T2_UK_SouthGrid_Bristol      2        1
T2_UK_SouthGrid_RALPPD       1        3
T2_US_Caltech                1        5
T2_US_Florida                1        5
T2_US_MIT                    1        5
T2_US_Nebraska               4        8
T2_US_Purdue                 1        5
T2_US_UCSD                   2        3
T2_US_Wisconsin              1        7
10 MC production
- Continuous and efficient production at CIEMAT; at IFCA, production was interrupted due to a CMS incompatibility with DPM storage
- 24 Mevt produced in 2007 (~7% of all production, 10% if Tier-1s are excluded)
- Contribution to production integration and operation (LCG1 team, Spanish and Portuguese sites)
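The quoted shares imply the approximate size of the overall production those 24 Mevt were part of; a back-of-the-envelope sketch (the totals below are inferred from the percentages on the slide, not independently quoted numbers):

```python
produced_mevt = 24.0     # events produced by the Spanish T2 in 2007 (Mevt)
share_all_sites = 0.07   # ~7% of all CMS production
share_no_t1 = 0.10       # ~10% if Tier-1 production is excluded

total_production = produced_mevt / share_all_sites  # ~343 Mevt CMS-wide (implied)
t2_production = produced_mevt / share_no_t1         # ~240 Mevt outside Tier-1s (implied)

print(round(total_production), "Mevt produced CMS-wide (implied)")
print(round(t2_production), "Mevt produced outside Tier-1 sites (implied)")
```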
11 Site availability
[Plot: site availability over time, with scheduled downtime periods marked]
12 Job Load
Continuous activity (production, tests, analysis): nearly 7000 jobs/week in 2008.
[Plots: jobs per 6-day period at IFCA and at CIEMAT, with scheduled downtime marked]
13 Job load
Averaging over the last year, the Spanish Tier-2 supported 6.2% of the total Tier-2 load and 7.4% of the analysis jobs.
[Plots: analysis jobs at CIEMAT; jobs of any type at IFCA]
14 Analysis activity
Looking at user analysis jobs (not central production) in 2008, both Spanish sites are among the most active of the more than 40 Tier-2 sites, driven mainly by the local physics community.
15 Services
Both sites support different EGEE and CMS dedicated services. Each service implies some resources (often duplicated for tests) and a significant amount of manpower for installation and maintenance.
CMS services:
- PhEDEx for data transfer
- Squid server for the FroNTier (conditions DB) cache
- Software repository
EGEE grid services:
- CE + UI + RB
- Torque + Maui scheduler
- SE; this in particular was a very demanding issue (still not totally solved): CASTOR/DPM/dCache/StoRM
16 Budget and resources
The project started with a significant reduction of the budget.
- Very tight on personnel: the full budget (and more) was spent, plus support from the institutes; there are many very demanding tasks that need dedicated expert (technical) manpower!
- More requirements on general infrastructure and resources for services
However:
- LHC did not run in 2007; the institutes provided resources for running in 2006 and 2007, so purchases were delayed to the end of 2007, giving significant savings
- Other sources of funding are contributing
Resources for 2008 have been purchased, installed and are operational or being tested; they will be fully operational by June (effective storage, i.e. after formatting, RAID, spares...).
[Table: CPU (kSI2k) and disk storage (TB) at CIEMAT, IFCA and the T2-ES total, compared to 5% of the MoU; numbers not recovered in this transcription]
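The phrase "effective storage, including formatting, RAID, spares" refers to the usable capacity left after those overheads. A minimal sketch of that conversion, with purely assumed overhead factors (RAID-5 parity, hot spares, filesystem formatting) and an illustrative disk count, not the actual procurement figures:

```python
def effective_storage_tb(n_disks: int, disk_tb: float,
                         raid_group: int = 8,      # disks per RAID-5 group (assumed)
                         spares: int = 2,          # hot spares kept aside (assumed)
                         fs_overhead: float = 0.05) -> float:
    """Usable TB after subtracting spares, RAID-5 parity and filesystem formatting."""
    data_disks = n_disks - spares
    raid_groups = data_disks // raid_group
    usable_disks = data_disks - raid_groups   # one parity disk per RAID-5 group
    return usable_disks * disk_tb * (1.0 - fs_overhead)

# Example: 72 disks of 0.75 TB each (numbers chosen for illustration only).
print(round(effective_storage_tb(72, 0.75), 1), "TB usable")
```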
17 Computer resources at sites
A description of the current resources at the sites follows. Note that part of the resources are provided by the institutes or by other projects, not by the FPA budget: they cannot be formally committed to CMS, but they can often be used for CMS (as was done for most of 2006/2007).
18 Computing at CIEMAT
Network: 1 Gbps connection to RedIris/GEANT; 1 Gbps internal network.
Storage Element:
- dCache 1.7, SRM v1
- 50 TB of disk + 30 TB to add (Castor)
- Disk servers (Dell PowerEdge 1850, Supermicro X7DB8) in RAID 5; no tape
Computing Element:
- lcg-ce, PBS
- 336 cores running SL4: 116 Xeon 5160 (3.2 GHz), 64 Opteron 270 (2 GHz), 156 Xeon (3.0 GHz)
- 680 kSpecInt in total
Other services: 4 User Interfaces, SW repository, FroNTier/Squid.
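A quick consistency check of the numbers on this slide (core count and average capacity per core); the per-core figure is simply the quoted total divided by the quoted core count, not a benchmark result.

```python
# CPU inventory of the CIEMAT Computing Element as quoted on the slide.
cores_by_type = {
    "Xeon 5160, 3.2 GHz": 116,
    "Opteron 270, 2 GHz": 64,
    "Xeon, 3.0 GHz": 156,
}

total_cores = sum(cores_by_type.values())
total_kspecint = 680.0  # quoted total capacity

print(total_cores, "cores")                                          # 336
print(round(total_kspecint / total_cores, 2), "kSpecInt per core")   # ~2.02
```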
19 Storage at CIEMAT
dCache namespace (PNFS, /pnfs/ciemat.es/data/...), served through srm.ciemat.es and the dCache admin node:

Path                 Group        Type        Size
/cms/prod            cms prod     permanent   35 TB
/cms/prod/scratch    cms prod     volatile    2.31 TB
/cms/                cms (rest)   permanent   2.03 TB
/cms/scratch         cms (rest)   volatile    0.46 TB
/ops                 ops          volatile    0.46 TB
/dteam               dteam        volatile    0.45 TB

Currently 24 TB of CMS data are stored (9 TB of CSA07 skims).
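For orientation, the quotas in this table can be summed and compared with the CMS data currently on disk; a trivial bookkeeping sketch (the quota values are taken from the table above):

```python
# dCache space quotas (TB) as listed in the table above.
quotas_tb = {
    "/cms/prod": 35.0,
    "/cms/prod/scratch": 2.31,
    "/cms/": 2.03,
    "/cms/scratch": 0.46,
    "/ops": 0.46,
    "/dteam": 0.45,
}

cms_quota = sum(tb for path, tb in quotas_tb.items() if path.startswith("/cms"))
used_cms_tb = 24.0  # CMS data currently stored, as quoted on the slide

print(round(cms_quota, 2), "TB allocated to CMS")                       # 39.8
print(f"{used_cms_tb / cms_quota:.0%} of the CMS allocation in use")    # ~60%
```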
20 Processing power: WNs and services at IFCA (not CMS dedicated)
- Cluster eifca: 90 IBM x336 (2 Xeon 3.2 GHz, 2 GB RAM, 2 GE each), 180 cores
- Cluster INGRID: 16 IBM x3550 (2 dual-core Intel Xeon 5160, 3.0 GHz, 4 GB RAM, 2 GE + Infiniband) and 4 IBM x3550 (2 quad-core Intel Xeon 5345, 2.33 GHz, 4 GB RAM, 2 GE + Infiniband), 96 cores
- EGEE grid services: CE + UI + RB per project, 1 common batch system (Torque + Maui)
- Nodes for interactive access and development: 20 IBM 206, Pentium IV 3.2 GHz
21 Resources: processing power (CMS specific)
A new cluster was recently installed (purchased with the FPA budget):
- 2 IBM BladeCenter E chassis: 14 servers and 4 power supplies per BladeCenter, 2 x 10 Gbit copper + Gbit fibre per BladeCenter
- 28 IBM Blade HS21 XM servers: 2 quad-core Intel 2.33 GHz CPUs, 16 GB RAM, 1 SAS HD 73 GB/10k, 2 GE per blade
- 224 cores, 515 kSI2k in total
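The core count follows directly from the chassis layout; a small sketch checking it and the implied average capacity per core (again, the per-core value is just the quoted total divided by the core count):

```python
# New CMS-dedicated cluster at IFCA, as described on the slide.
blade_centers = 2
blades_per_center = 14
sockets_per_blade = 2
cores_per_socket = 4          # quad-core Intel 2.33 GHz

blades = blade_centers * blades_per_center              # 28 blades
cores = blades * sockets_per_blade * cores_per_socket   # 224 cores
total_ksi2k = 515.0                                      # quoted capacity

print(blades, "blades,", cores, "cores")
print(round(total_ksi2k / cores, 2), "kSI2k per core on average")  # ~2.30
```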
22 Resources: storage at IFCA
DPM (Disk Pool Manager):
- 1 server: MySQL DB
- 3 disk servers: HP ProLiant 370 (8 SATA disks each)
- 3 disk servers: IBM 3500 (8 SATA disks each)
- 3 disk servers: HP ProLiant 320s1 (12 disks each)
- 44 TB in RAID5, 72 hard disks, 1+8 servers
GPFS-based SE (installation in progress):
- 2 servers: GPFS at IFCA
- 1 server: SRM
- 2 fibre switches, SAN (Storage Area Network)
- 7 IBM disk arrays (1 ds exp) with 750 GB hard disks = 84 TB
- 10 IBM disk arrays (2 ds exp) with 500 GB hard disks = 75 TB (30%)
- More than 100 TB for CMS
23 Additional benefits
The activity has produced several publications:
- CMS-note-2004/030: DC04 at the Spanish sites
- CMS-note-2006/087: SC3 at the Spanish sites
- CMS-note-2007/022: CSA06 at the Spanish sites
- CMS-note-2005/019: Implementation of MC production in LCG
- CMS-note-2007/016: Integration and operational experience in MC production in LCG
- CHEP 2007 contributions: "Exercising CMS dataflows and workflows in computing challenges at the Spanish Tier-1 and Tier-2 sites", "A software and computing prototype for CMS Muon System alignment", "MC production in the WLCG computing Grid"
The good performance of the federated Tier-2 has also given it better visibility in CMS, which led to some of its members being proposed for the coordination of different groups:
- Integration and commissioning coordinator
- Site commissioning (part of the PADA subgroup)
- Responsibility for running production in Spain and Portugal
As physicists, the main outcome is that we have already provided the resources and user support needed for developing physics: TDR, CMS notes...
24 Conclusions
During these last two years the Spanish Tier-2 has clearly fulfilled all of its goals:
- The sites are prepared for the first physics run in the coming months
- We have provided over 5% of the total CMS Tier-2 needs over the last couple of years and have reached the 5% of resources agreed in the WLCG MoU
- Services are established and working with good quality
- Spanish users have made good use of the resources and are in a better position to perform their analyses
- We have contributed above the 5% share to simulation
- We have started to train specialized manpower for operation, maintenance and future upgrades of the centre; this manpower needs to be maintained and increased!
- The closeness of computing and physicists has proved to work very well: user support was provided and feedback received, with large gains in optimization and efficiency in defining and obtaining data samples
Resources and services are ready for 2008; we are waiting for the first LHC collisions!