CMS LHC-Computing. Paolo Capiluppi Dept. of Physics and INFN Bologna. P. Capiluppi - CSN1 Catania 18/09/2002


1 CMS LHC-Computing Paolo Capiluppi Dept. of Physics and INFN Bologna P. Capiluppi - CSN1 Catania 18/09/2002

2 Outline
- Milestones and CMS-Italy responsibilities: CCS (Core Computing and Software) milestones; responsibilities (CMS Italy)
- Productions (Spring 2002): goals and main issues; available resources; work done
- Data Challenge 04: goals and plans; CMS Italy participation and plans (preliminary); LCG role; Tier1 and Tier2s (and Tier3s)
- LCG and Grid: what's LCG; Grid real results and strategies
- Conclusion

3 Milestones (CCS and externals)
- DAQ TDR: November 2002
- End EU-DataGrid / EU-DataTAG Projects: December 2003
- End US-GriPhyN Project: December 2003
- Data Challenge 04 (5%): February 2004
- LCG-1 phase 1: June 2003
- LCG-1 phase 2: November 2003
- End US-PPDG Project: December 2004
- CCS TDR: November 2004
- Physics TDR: December 2005
- End LCG Phase 1: December 2005
- LCG-3: December 2004
- Data Challenge 05: April 2005
- Data Challenge 06: April 2006

4 CCS Level 2 milestones (DC04)
CCS Level 2 milestones: 15 in total. Most details are defined at Level 1 (the Level 2 milestones are straightforward). See next slide.

5 CCS Organigram (June 2002)
- CCS PM: David Stickland
- Technical Coordinator: Lucas Taylor
- Resource Manager: Ian Willers
- Regional Center Coordination: Lothar Bauerdick
- Production Processing & Data Mgmt: Tony Wildish
- Architecture, Frameworks & Toolkits: Vincenzo Innocente
- Computing & Software Infrastructure: Nick Sinanis
- GRID Integration: Claudio Grandi
- CMS Librarian: Shaun Ashby

6 Boards of CCS
- CMS Collaboration Board: acts as Institution Board for CCS (4 meetings per year)
- CCS-TB, Technical Board (open meeting, 6/yr): advises the PM; L1/L2 managers + cross-project managers + T1/T2 representatives
- CCS-SC, Steering Committee (closed meeting, weekly): L1/L2 managers + cross-project managers + co-opted experts
- CCS-FB, Finance Board (closed meeting, 6/yr): CCS management + Funding Agency representatives

7 CMS-Italy official Responsibilities
- CCS SC (Core Computing and Software Steering Committee): Grid Integration Level 2 manager (Claudio Grandi); INFN contact (Paolo Capiluppi)
- CCS FB (CCS Financial Board): INFN contact (Paolo Capiluppi)
- PRS (Physics Reconstruction and Software), being recruited/refocused for the Physics TDR: Muons (Ugo Gasparini); Tracker/b-tau (Lucia Silvestris)
- LCG (LHC Computing Grid Project): SC2 (Software and Computing Steering Committee) (Paolo Capiluppi, alternate of David Stickland); Detector Geometry & Material Description RTAG (Requirements Technical Assessment Group) chairperson (Lucia Silvestris); HEPCAL (HEP Common Application Layer) RTAG (Claudio Grandi)
- CCS Production Team: INFN contact (Giovanni Organtini)

8 Spring 2002 Production (and Summer extension)
Goal of the Spring 2002 Production: DAQ TDR simulations and studies.
- ~6 million events simulated, then digitized at different luminosities: no pile-up (2.9M), 2×10^33 (4.4M), 10^34 (3.8M)
- CMSIM started in February with CMS125; digitization with ORCA-6 started in April
- First analysis completed (just!) in time for the June CMS week
Extension of activities: Summer 2002 Production, ongoing.
- Ntuple-only productions: high-pT jets for the e-γ group (10 M); non-recycled pile-up for the JetMet group (300 K)
Over 20 TB of data produced CMS-wide.
- Most available at CERN, lots at FNAL and INFN; FNAL, INFN and the UK also hosting analysis
- Some samples analyzed at various T2s (Padova/Legnaro, Bologna, ...)
Production tools obligatory: IMPALA, BOSS, DAR, RefDB.
- BOSS is an official CMS production tool: INFN developed (A. Renzi and C. Grandi) and maintained (C. Grandi)!

9 What are those acronyms?
- IMPALA: uses RefDB assignments to create batch jobs locally in the RCs, and uses BOSS to submit them
- BOSS: run-time tracking of job progress and interface to the local scheduler
- DAR: Distribution After Release, installs CMS software on farms
- RefDB: interface for PRS groups to request datasets, for production centres to update the status of their assignments, and for the production coordinator to monitor the overall progress of production
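
To make the chain above concrete, here is a minimal, purely illustrative Python sketch of how a RefDB-like request turns into IMPALA-style job splitting and BOSS-style submission and tracking. All class, function and dataset names (RefDBStub, impala_split, BossStub, "jets_highpt") are hypothetical stand-ins, not the real CMS tool APIs.

```python
# Illustrative sketch of the Spring 2002 production chain (RefDB -> IMPALA -> BOSS).
# All names are hypothetical stand-ins for the real CMS tools, not their APIs.
from dataclasses import dataclass


@dataclass
class Assignment:
    """A production assignment as a RefDB-like database would hand it out."""
    dataset: str
    n_events: int
    events_per_job: int


class RefDBStub:
    """Stand-in for RefDB: stores dataset requests and their processing status."""
    def __init__(self):
        self.assignments = []
        self.status = {}

    def request_dataset(self, dataset, n_events, events_per_job=500):
        self.assignments.append(Assignment(dataset, n_events, events_per_job))

    def update_status(self, dataset, done_events):
        self.status[dataset] = done_events


def impala_split(assignment):
    """IMPALA-like step: turn one assignment into a list of batch job descriptions."""
    jobs, first = [], 0
    while first < assignment.n_events:
        n = min(assignment.events_per_job, assignment.n_events - first)
        jobs.append({"dataset": assignment.dataset, "first_event": first, "n_events": n})
        first += n
    return jobs


class BossStub:
    """BOSS-like step: submit jobs to a local scheduler and track their progress."""
    def __init__(self, refdb):
        self.refdb = refdb
        self.journal = []

    def submit(self, job):
        # A real tool would hand the job to the local scheduler and log it in its own DB.
        self.journal.append({**job, "state": "done"})
        done = sum(j["n_events"] for j in self.journal
                   if j["dataset"] == job["dataset"] and j["state"] == "done")
        self.refdb.update_status(job["dataset"], done)


if __name__ == "__main__":
    refdb = RefDBStub()
    refdb.request_dataset("jets_highpt", n_events=2000)
    boss = BossStub(refdb)
    for job in impala_split(refdb.assignments[0]):
        boss.submit(job)
    print(refdb.status)  # {'jets_highpt': 2000}
```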

10 Spring02: CPU Resources
11 RCs (~20 sites), about 1000 CPUs and 30 people CMS-wide. Some new sites and people, but lots of experience too.
Share of CPU resources by site: Wisconsin 18%, INFN 18%, CERN 15%, Moscow 10%, IN2P3 10%, FNAL 8%, RAL 6%, IC 6%, UFL 5%, Caltech 4%, UCSD 3%, Bristol 3%, HIP 1%.

11 2002 CMS Computing: CMS-Italy available resources as of August 2002
Table of per-site 2002 resource status (columns: # CPUs, # boxes, average CPU in MHz, total SI2000, TB on disk servers, TB on disk nodes, farm % of use) for Bari, Bologna, Catania, Firenze, Legnaro, Milano, Napoli, Padova, Pavia, Perugia, Pisa, Roma, Torino, the overall total, and the Tier1 at CNAF.

12 Production in the RCs
Table of production per Regional Centre (columns: CMSIM events in K, events digitized at 2×10^33 in K, events digitized at 10^34 in K, Objectivity data size in TB) for CERN, Bristol/RAL, Caltech, Fermilab, INFN (9 sites), IN2P3 (200), Moscow (4 sites), UCSD, UFL, Wisconsin and Imperial College.
Thanks to: Giovanni Organtini (Rm), Luciano Barone (Rm), Alessandra Fanfani (Bo), Daniele Bonacorsi (Bo), Stefano Lacaprara (Pd), Massimo Biasotto (LNL), Simone Gennai (Pi), Nicola Amapane (To), et al.

13 CMS-Italy 2003 Milestones (INFN)
- One half of the sites Grid-enabled for production: April
- LCG production prototypes ready (Tier1+Tier2): June
- New CMS analysis environment installed and working (selected sites): June
- One half of the sites working with the new persistency: October
- 5% Data Challenge participation of the Tier1 and half of the Tier2s: December
Also shown: number of events (M) produced per RC (hits, no pile-up, 2×10^33 pile-up, 10^34 pile-up) for Bologna, Legnaro, Pisa and Roma.

14 CMSIM production: 6 million events, ~1.2 seconds per event, over 4 months (Feb. 8th to June 6th).

15 Digitization at 2×10^33: 4 million events in 2 months (April 12th to June 6th).

16 3.5 million events in 2 months (April 10th to June 6th).

17 DC04, the 5% Data Challenge
Definition: 5% of LHC running, or 25% of the 2×10^33 (startup) rate; one month of data taking at CERN, 50 M events.
- It represents a factor 4 over Spring 2002, consistent with the goal of doubling the complexity each year to reach a full-scale (for LHC startup) test by Spring 2006.
- Called DC04 (and the following ones DC05, DC06) to get over the percentage confusion.
More importantly:
- Previous challenges have mostly been about doing the digitization.
- This one will concentrate on reconstruction, data distribution and the early analysis phase.
- It moves the issue of the Analysis Model out of the classroom and into the spotlight.

18 Setting the Goals of DC04
As defined to the LHCC, the milestone consists of:
- CS milestone, April 2004: 5% data challenge complete (now called DC04).
- "The purpose of this milestone is to demonstrate the validity of the software baseline to be used for the Physics TDR and in the preparation of the Computing TDR. The challenge comprises the completion of a 5% data challenge, which successfully copes with a sustained data-taking rate equivalent to 25 Hz at a luminosity of 0.2 × 10^34 cm^-2 s^-1 for a period of 1 month (approximately 5 × 10^7 events). The emphasis of the challenge is on the validation of the deployed grid model on a sufficient number of Tier-0, Tier-1 and Tier-2 sites."
- We assume that 2-3 of the Tier-1 centers and 5-10 of the Tier-2 centers intending to supply computing to CMS in the 2007 first LHC run would participate in this challenge.

19 DC04: Two Phases
Pre-Challenge (must be successful):
- Large-scale simulation and digitization
- Will prepare the samples for the challenge, and the samples for the Physics TDR work to get fully underway
- Progressive shakedown of tools and centers; all centers taking part in the challenge should participate in the pre-challenge
- The Physics TDR and the challenge depend on its successful completion
- Ensure a solid baseline is available; worry less about being on the cutting edge
Challenge (must be able to fail):
- Reconstruction at the T0 (CERN)
- Distribution to T1s, with subsequent distribution to T2s
- Assign streams and analyses to people at T1 and T2 centers; some will be able to work entirely within one center, others will require analysis of data at multiple centers
- GRID tools tested for data movement and job migration

20 DC04: Setting the Scale
Aim is 1 month of running at 25 Hz, 20 hours per day: 50 million reconstructed events (passing the L1 trigger and mostly passing the HLT, but some background samples are also required).
Pre-Challenge:
- Simulation (GEANT4!): 100 TB, 300 kSI95·months. A 1 GHz P3 is 50 SI95, and the working assumption is that most farms will be at 50 SI95/CPU in late 2003; this corresponds to six months of running for 1000 CPUs worldwide (actually aim for more CPUs to get the production time down).
- Digitization: 75 TB, 15 kSI95·months; 175 MB/s of pile-up bandwidth if two months are allowed for digitization.
Challenge:
- Reconstruction at the T0 (CERN): 25 TB, 23 kSI95 for 1 month (460 CPUs at 50 SI95/CPU).
- Analysis at the T1s and T2s: design a set of tasks such that the offsite requirement during the challenge is about twice that of the T0.
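
As a cross-check, the numbers quoted above are mutually consistent. The short Python sketch below (illustrative only, using just the assumptions stated on the slide: 25 Hz, 20 hours/day, 30 days, 50 SI95 per CPU) reproduces the ~5 × 10^7 events, the 1000 simulation CPUs and the 460 reconstruction CPUs.

```python
# Cross-check of the DC04 scale numbers quoted on this slide (illustrative only).

RATE_HZ = 25          # sustained data-taking rate
HOURS_PER_DAY = 20    # effective running hours per day
DAYS = 30             # one month of running
SI95_PER_CPU = 50     # working assumption: ~1 GHz P3 in late 2003

# Total events: 25 Hz * 20 h/day * 30 days ~= 54 M, i.e. ~5e7 as quoted
events = RATE_HZ * HOURS_PER_DAY * 3600 * DAYS
print(f"events: {events / 1e6:.0f} M")

# Simulation: 300 kSI95.months spread over 6 months -> CPUs needed
sim_cpus = 300 * 1000 / 6 / SI95_PER_CPU
print(f"simulation CPUs: {sim_cpus:.0f}")       # 1000 CPUs worldwide

# Reconstruction at the T0: 23 kSI95 for one month -> CPUs needed
reco_cpus = 23 * 1000 / SI95_PER_CPU
print(f"reconstruction CPUs: {reco_cpus:.0f}")  # 460 CPUs at 50 SI95 each
```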

21 Building a Real Plan for DC04
Organization:
- The sites that will participate in each stage of the challenge must be identified and their contributions quantified.
- The roles of the prototype T0, T1 and T2 centers in the challenge should be clarified.
Goals:
- Establish the Physics TDR goals of the production.
- Establish the Analysis Model goals of the challenge.
Software:
- Establish the baseline for persistency of event data and meta-data for this challenge.
- Create the SPROM work-plan to meet the simulation requirements.
- Create the RPROM work-plan to meet the reconstruction requirements.
LCG:
- Coordinate the dates of the challenge with LCG.
- Establish any additional requirements on LCG-1 functionality.
Production:
- Establish a baseline production environment capable of managing the pre-challenge.
- Establish with the GRID projects and LCG the extent of GRID products available for the pre-challenge production; establish and monitor milestones to track this.
- Establish a baseline production environment capable of managing DC04.
- Establish the monitoring systems required to measure the performance of DC04 and to identify bottlenecks, both during the challenge and in its subsequent assessment; establish a deployment model for the monitoring.

22 Overview of Resource Requirements
Estimates for the CPU and storage requirements of CMS Data Challenge DC04, per quarter (03Q3, 03Q4, 04Q1, 04Q2):
- Computing power (kSI95·months): total requirement for simulation, digitization (15), reconstruction (25) and analysis; total; previewed CERN/LCG capacity (Eck); CERN T0; CERN T1 (challenge-related only); offsite T1+T2 (challenge only).
- Storage (TeraBytes): data generated at CERN; data generated offsite; data transferred to CERN; sum of data stored at CERN; active data at CERN; assumed number of active offsite T1s; sum of data stored offsite.

23 CMS-Italy and DC04
Participation in the challenge: ~20% contribution, using the (common) Tier1 and 3-4 Tier2s.
- All Italian sites will possibly participate in the pre-challenge phase.
- Use all available and validated (CMS-certified) Grid tools for the pre-challenge phase.
- Coordinate resources within LCG for both the pre-challenge and challenge phases, where possible (the Tier1/INFN must be fully functional: ~70 CPU boxes, ~20 TB).
- Use the CMS Grid-integrated environment for the challenge (February 2004).
Participate in the preparation:
- Build the necessary resources and define the Italian commitments.
- Define the data flow model.
- Validation of Grid tools.
- Integration of Grid and production tools (review and re-design).

24 CMS-Italy DC04 Preparation
- Use the tail of the Summer Production to test and validate resources and tools (grid and non-grid): November/December 2002.
- Participate in the production-tools review, now (Claudio Grandi, Massimo Biasotto), and hopefully contribute to the new tools development (early 2003).
- Make the new software available at all the sites (T1, T2s, T3s).
- Use some of the resources to test and validate Grid integration: already in progress at the Tier1 (CMS resources) and at Padova.
- Commit and validate (for CMS) the resources for DC04 (see following slide).
- Define the participation in the LCG-1 system (see following slide).

25 CMS-Italy DC04 preliminary plans
- All the current and coming resources of CMS-Italy will be available for DC04, possibly within the LCG Project.
- Small amount of resources requested for 2003: smoothly integrate the resources into LCG-1; continue to use dedicated resources for tests of Grid and production-tools integration.
- Need for the funding of the other 3-4 Tier2s.
- Request for common CMS-Italy resources, sub judice in 2003 (present a detailed plan and a clear Italian commitment to CMS): 60 CPUs and 6 TB of disk + switches; this will complete already existing farms. We are particularly low in disk storage availability, which is essential for physics analysis.

26 CMS-Italy DC04 LCG preliminary plans
- Name and location of the Regional Centre: INFN - Laboratori Nazionali di Legnaro (LNL); experiments served by the resources noted below: CMS.
- Preliminary commitment of possibly available resources per year (table: processor farm, number of processors planned/installed; disk storage, estimated total capacity in TB).
- Tier1 plans are common to all experiments: see F. Ruggieri's presentation.
- LNL is partially funded in 2002 (24 CPUs, 3 TB) for LCG participation; the remaining resources are CMS directly funded.

27 DC04 Summary
With the DAQ TDR about to be completed, the focus moves to the next round of preparations:
- The Data Challenge series, to reach full-scale tests in 2006
- The baseline for the Physics TDR
- The prototypes required for CMS to write a CCS TDR in 2004
- Start to address the analysis model
- Start to test the data and task distribution models
- Perform realistic tests of the LCG GRID implementations
- Build the distributed expertise required for LHC Computing
DC04 will occupy us for most of the next 18 months.

28 LCG
LCG = LHC Computing Grid project (PM: Les Robertson)
- CERN-based coordination effort (hardware, personnel, software, middleware) for LHC computing; worldwide (Tier0, Tier1s and Tier2s).
- Funded by the participating agencies (INFN too).
Two phases:
- Preparation and setting-up (including tests, R&D and support for the experiments' activities)
- Commissioning of the LHC Computing System
Five (indeed four!) areas of activity for Phase 1:
- Applications (common software and tools) (Torre Wenaus)
- Fabrics (hardware, farm tools and architecture) (Bernd Panzer)
- Grid Technologies (middleware development) (Fabrizio Gagliardi)
- Grid Deployment (resource management and operation) (Ian Bird)
- Grid Deployment Board (agreements and plans) (Mirco Mazzucato)
Many boards: POB (funding), PEB (executive), SC2 (advisory), ...

29 The LHC Computing Grid Project structure
Project Overview Board; Common Computing RRB (resource matters); Project Manager and Project Execution Board (implementation teams); Software and Computing Committee (SC2) (requirements, monitoring) with its RTAGs; reports and reviews to the LHCC.

30 LCG Funding - Materials
Changes to this table are again both positive and negative:
- INFN now provides 150k CHF per contract year for 4 fellows, with plans to add 6 more fellows at the beginning of 2003.
- Belgium has cut down its contribution to 400k.
- Finland has pulled back its offer.
Summary of funding - materials at CERN (Chris Eck, 3rd Sep 2002); table in kCHF by source: Belgium, Germany, Greece, Italy-INFN, Spain, UK-PPARC, Enterasys, Intel (2), CERN; sum.

31 Grid projects (CMS-Italy leading roles)
- Integration of Grid tools and production tools is almost done (Italy, UK and France main contributions; thanks to CNAF people and DataTAG personnel).
- We can submit (production) jobs to the DataGrid testbed via the CMS production tools (modified IMPALA/BOSS/RefDB): prototypes working correctly on the DataTAG test layout; will test at large scale on the DataGrid/LCG production testbed; will measure performance to compare with the summer-production "classic" jobs (November 2002).
- Integration of EU/US Grid and production tools: already in progress in the GLUE activity; most of the design (not only for CMS) is ready and implementation is in progress; target for a first delivery by the end of 2002.

32 Logical components diagram
Components: Experiment Software; Dataset Definition; Data Materializer (job definition and creation); Data Management System (dataset input specification, dataset algorithm specification, dataset catalogue); Workload Management System (job catalogue); Software Release Manager (software repository); Resource Monitoring System (resource directory); Job Monitoring System (job type definition, job book-keeping); Storage Service; Computing Service.
Interactions shown: new dataset request and production on demand; software release; job submission; update of dataset metadata; production monitoring; input data location; retrieval of resource status; data management operations; job assignment to resources; job monitoring; copy/write/read/publish data; job output filtering; SW download & installation; push of data or info versus pull of info.
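
The sketch below is a minimal, hypothetical Python rendering of a few of these logical components and their interactions (dataset catalogue, resource monitoring, workload management). Only the component names and rough responsibilities come from the diagram; the interfaces, method names and the example dataset/site names are invented for illustration.

```python
# Minimal, illustrative sketch of some logical production components named on this
# slide. Interfaces and method names are hypothetical; only the component names and
# their rough responsibilities come from the diagram.
from dataclasses import dataclass, field


@dataclass
class DatasetCatalogue:
    """Part of the Data Management System: dataset specifications and metadata."""
    datasets: dict = field(default_factory=dict)

    def new_dataset_request(self, name, algorithm, input_spec):
        self.datasets[name] = {"algorithm": algorithm, "input": input_spec, "events": 0}

    def update_metadata(self, name, events_done):
        self.datasets[name]["events"] = events_done


@dataclass
class ResourceMonitoringSystem:
    """Publishes resource status; the workload manager pulls it for job assignment."""
    resources: dict = field(default_factory=dict)  # site -> free CPUs

    def publish(self, site, free_cpus):
        self.resources[site] = free_cpus


@dataclass
class WorkloadManagementSystem:
    """Assigns jobs to resources based on the resource directory."""
    monitor: ResourceMonitoringSystem
    job_catalogue: list = field(default_factory=list)

    def submit(self, job):
        # Pick the site with the most free CPUs (real brokering is far more involved).
        site = max(self.monitor.resources, key=self.monitor.resources.get)
        self.job_catalogue.append({**job, "site": site})
        return site


if __name__ == "__main__":
    catalogue = DatasetCatalogue()
    catalogue.new_dataset_request("example_jets", algorithm="digitization",
                                  input_spec="simulated hits")
    rms = ResourceMonitoringSystem()
    rms.publish("T1-CNAF", free_cpus=60)
    rms.publish("T2-LNL", free_cpus=24)
    wms = WorkloadManagementSystem(rms)
    print(wms.submit({"dataset": "example_jets", "n_events": 500}))  # -> T1-CNAF
```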

33 Spring 2002 diagram
The same logical components, as instantiated for Spring 2002:
- Experiment software: CMKIN/CMSIM and ORCA
- Data materializer: IMPALA (job creation via IMPALA scripts, schema and filter files)
- Data management system / dataset catalogue: RefDB, the production web portal (fetch request parameters, write dataset summaries, production monitoring, dataset input and algorithm specification)
- Workload management system: local batch system (or Grid scheduler), with its scheduler and job catalog
- Software release manager: SCRAM/DAR, with DAR files and the CVS repository
- Resource directory: a web page with links to the RC home pages
- Job monitoring system: BOSS with its BOSS DB (job submission, job type definition, book-keeping)
- Storage service: AMS, POSIX, GDMP (copy/write/read/publish data)
- Computing service: farm node (or GRAM), with job output filtering and SW download & installation

34 Proposal for a DC04 diagram
The same logical components, with candidate Grid implementations for DC04:
- Data materializer: VDT Planner and IMPALA/MOP (jobs as DAG/JDL plus scripts)
- Job submission: EDG UI / VDT Client
- Dataset catalogue and data management: REPTOR/Giggle (+ Chimera?), REPTOR/Giggle? (update dataset metadata, input data location, data management operations)
- Software release manager: PACMAN?
- Workload management system: EDG Workload Management System, with EDG L&B (retrieve resource status, job book-keeping)
- Resource directory: MDS (LDAP)
- Job monitoring system: BOSS & R-GMA, with the BOSS-DB
- Storage service: EDG SE / VDT Server (copy/write/read/publish data)
- Computing service: EDG CE / VDT server, with job output filtering and SW download & installation

35 Conclusion
CMS Italy is a leader in CMS Computing. We believe we have demonstrated this, and we want to continue. We ask for the support of CSN1 to carry out Data Challenge 04, and the challenges that will follow.
