The Service Challenges and the implementation of the Tier architecture in WLCG for computing in the LHC era


1 The Service Challenges and the implementation of the Tier architecture in WLCG for computing in the LHC era. T. Ferrari (INFN-CNAF Tier-1), D. Bonacorsi (INFN-CNAF Tier-1 and CMS experiment). IFAE 2006, Incontri di Fisica delle Alte Energie, Apr 2006, Pavia (Italy)

2 Outline WLCG and MoU Tiers' roles and responsibility in a WLCG-enabled world Technical participation of a Tier in WLCG Service Challenges (SC) Roles of Tiers in SCs Roles of LHC experiments in SCs SC achievements, open issues, and milestones in 2006 Conclusions 2

3 WLCG Worldwide LHC Computing Grid (WLCG) Purpose: to provide the computing resources needed to process and analyze the data gathered by the LHC experiments The LCG project is assembling at multiple computing centres the main offline data-storage and computing resources needed by LHC exps, and operating these resources in a shared grid-like manner Main goal: provide common tools and implement uniform means of accessing resources MoU [*] for collaboration in the Deployment and Exploitation of the Worldwide LHC Computing Grid between the Parties [#] [*] CERN-C-RRB /Rev. 21 March 2006 [#] see next slide After LCG Phase 1 (technology development and tests leading to a production prototype), the WLCG MoU governs the execution of the LCG Phase-2 (deployment and exploitation of LCG as a Service) Defines the program of work, distribution of duties and responsibilities to the Parties as well as the Computing Resources Levels they will offer to the LHC exps (also organizational, managerial and financial guidelines). 3

4 The WLCG MoU parties a) CERN: Tier-0: receives raw data from the exps' online computing farms and records them on permanent MSS + 1st-pass reco + distributes them to Tier-1s; CERN Analysis Facility: functionality of a combined T1-T2 centre, except that it does not offer permanent storage of back-up copies of raw data b) all Institutions participating in the provision of the WLCG with a T1 and/or T2 computing centre (federations included): Tier-1s: provide a distributed permanent back-up of the raw data, storage and mgmt of data needed during the analysis process + offer a grid-enabled data service; data-intensive analysis and reprocessing; may undertake national or regional support tasks, and contribute to Grid Operations Services. Tier-2s provide well-managed, grid-enabled disk storage and concentrate on tasks such as simulation, end-user analysis and high-performance parallel analysis 4

5 Tiers architecture (it applies to all LHC exps, though) [Diagram of the tiered computing model, courtesy of Harvey Newman, Caltech: the Online System (~PB/s) feeds the Tier 0 Offline Processor Farm at the CERN Computer Centre (~100 MB/s in, ~20 TIPS); Tier 0 ships data at ~622 Mb/s (or air freight, deprecated) to Tier 1 Regional Centres (France, Germany, Italy, FermiLab ~4 TIPS); Tier 1s feed Tier 2 centres at ~622 Mb/s (Caltech ~1 TIPS and other Tier2 Centres, ~1 TIPS each); institutes (~0.25 TIPS) hold physics data caches and serve physicist workstations at ~1 MB/s (Tier 4). There is a bunch crossing every 25 ns and there are ~100 triggers per second; each triggered event is ~1 MB in size. Physicists work on analysis channels: each institute will have ~10 physicists working on one or more channels, and data for these channels should be cached by the institute server. 1 TIPS is approximately 25,000 SpecInt95 equivalents.] 5

6 Minimal computing capacities for Tier-0 (i.e. minimal Computing Resources and Service Levels to qualify for WLCG membership) In support of the offline computing systems of the LHC exps according to their Computing Models, CERN shall mainly supply the following services: operation of the Tier-0 facility, providing: high-bandwidth network connectivity from the exp. area to the offline computing facility; recording / permanent storage (MSS) of one copy of raw data throughout the exp lifetime; distribution of an agreed share of raw data to each Tier-1 centre; 1st-pass calibration and alignment processing, including sufficient buffer storage of associated calibration samples for up to 24 hrs + event reco according to policies agreed with the exps; storage of the reco data on disk and in MSS; distribution of an agreed share of the reco data to each Tier-1 centre; operation of the CERN Analysis Facility, providing: data-intensive analysis, high-performance access to current versions of the exps' real/simulated datasets, end-user analysis (i.e. all functionalities of a combined T1/T2, except for permanent storage of back-up copies of raw data); provision of base services for Grid Coordination and Operation: overall management and coordination of the LHC Grid; integration, certification, distribution, support for software required for Grid operations; support, at several levels: network issues, databases, tools, libraries, infrastructures, VO management, etc. 6

7 Minimal computing capacities for Tier-1s (i.e. minimal Computing Resources and Service Levels to qualify for WLCG membership) Tier-1 centres form an integral part of the data handling service of the LHC exps. They undertake to provide services: on a long-term basis (initially at least 5 yrs); upgrading also, to keep pace with the expected growth of LHC data volumes and analysis activities; with a high level of reliability/availability + rapid responsiveness to problems. A wide plethora of services is provided by each Tier-1 to the LHC exps they serve: i. acceptance of an agreed share of raw data from Tier-0, keeping up with DAQ; ii. acceptance of an agreed share of 1st-pass reco data from Tier-0; iii. acceptance of processed and simulated data from other WLCG centres; iv. recording + archival storage of the accepted share of raw data (distributed back-up); v. recording + maintenance of processed/simulated data on MSS; vi. provision of managed storage space for permanent/tmp storage of files and dbs; vii. provision of access to the stored data by other WLCG centres; viii. operation of a data-intensive analysis facility; ix. provision of other services according to agreed exps' requirements; x. ensure high-capacity network bandwidth and services for data exchange with Tier-0 (as part of an overall plan agreed amongst exps, Tier-0 and Tier-1s); xi. ensure network bandwidth and services for data exchange with Tier-1s and Tier-2s (as part of an overall plan agreed amongst the exps, Tier-1s and Tier-2s); xii. administration of databases (and more) required by exps at Tier-1s. All storage/computational services shall be grid-enabled according to standards agreed between the LHC exps and the regional centres 7

8 Minimal computing capacities for Tier-2s (i.e. minimal Computing Resources and Service Levels to qualify for WLCG membership) Tier-2 services shall be provided by centres (or federations of centres): as a guideline, Tier-2s are expected to be capable of fulfilling at least a few % of the resource requirements of the LHC exps that they serve. Services provided by each Tier-2 are: i. provision of managed storage space for permanent/tmp storage of files and dbs; ii. provision of access to the stored data by other WLCG centres; iii. operation of an end-user analysis facility; iv. provision of other services, e.g. simulation, according to agreed exp requirements; v. ensure network bandwidth and services for data exchange with Tier-1s (as part of an overall plan agreed amongst exps, Tier-1s and Tier-2s). As for T1s: grid-enabling of all storage/computational services 8

9 Technical participation in WLCG The technical participation of the Parties is defined separately in terms of: Computing Resources levels that they pledge to provide to the one/more LHC exps that they serve; Computing Services levels that they pledge to the WLCG collaboration. In both cases having secured the necessary funding for iron objects and warm bodies. It is crucial to define (and apply) metrics: associate with each element a set of key qualitative measures: e.g. the Computing Resource Level associated with Networking shall include I/O throughput and average availability in terms of [time running]/[scheduled up-time]; e.g. Computing Service levels imply reliability, availability, responsiveness to problems, etc. Let's dig into both resources and services 9
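As an illustration of the availability measure just mentioned (a minimal Python sketch, not MoU text; the function name and the example numbers are invented for illustration):

def availability(time_running_hours: float, scheduled_uptime_hours: float) -> float:
    """Availability as defined above: [time running] / [scheduled up-time]."""
    return time_running_hours / scheduled_uptime_hours

# Example: a service scheduled up for a 30-day month that lost 7 hours
scheduled = 30 * 24           # 720 hours of scheduled up-time
running = scheduled - 7       # 713 hours actually running
print(f"availability = {availability(running, scheduled):.3%}")  # ~99.028%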

10 Resources: the pledged Computing Capacities 10

11 Services: required components at Tiers
T0 services (class; VO restriction where noted): SRM 2.1 - C; LFC (LHCb) - C; LFC (ALICE, ATLAS) - H; FTS - C; CE - C; RB - C; Global BDII - C; Site BDII - H; Myproxy - C; VOMS - C; R-GMA - H; databases - C/H
T1 services: SRM 2.1 - H/M; LFC (ALICE, ATLAS) - H/M; FTS - H/M; CE - H/M; Site BDII - H/M; R-GMA - H/M
T2 services: SRM 2.1 - M/L; LFC (ATLAS, ALICE) - M/L; CE - M/L; Site BDII - M/L; R-GMA - M/L
SERVICE LEVELS definition (Class / Description / Downtime / Reduced / Degraded / Availability):
C / Critical / 1 hour / 1 hour / 4 hours / 99%
H / High / 4 hours / 6 hours / 6 hours / 99%
M / Medium / 6 hours / 6 hours / 12 hours / 99%
L / Low / 12 hours / 24 hours / 48 hours / 98%
U / Unmanaged / none / none / none / none
Downtime defines the time between the start of a problem and the restoration of the service at minimum capacity (i.e. basic function but capacity < 50%). Reduced defines the time between the start of the problem and the restoration of a reduced-capacity service (i.e. > 50%). Degraded defines the time between the start of the problem and the restoration of a degraded-capacity service (i.e. > 80%). Availability defines the fraction of the total time during the calendar period for which the service is up; site-wide failures are not considered as part of the availability calculations. 99% means a service can be down up to 3.6 days a year in total; 98% means up to a week in total. Unmanaged means the service is running unattended 11
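The last two figures can be checked with one line of arithmetic; the sketch below (a hypothetical helper, not part of the MoU) turns an availability target into the maximum yearly downtime it allows.

HOURS_PER_YEAR = 365 * 24

def max_downtime_days(availability_target: float) -> float:
    """Maximum total downtime per year consistent with an availability target."""
    return (1.0 - availability_target) * HOURS_PER_YEAR / 24

for target in (0.99, 0.98):
    print(f"{target:.0%} availability -> up to {max_downtime_days(target):.1f} days down per year")
# ~3.6-3.7 days and ~7.3 days: the "3.6 days" and "about a week" quoted above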

12 Services: the MoU availability targets (to be reviewed by the operational boards of the WLCG Collaboration) [Tables of MoU availability targets for T0, T1s and T2s.] The response times in these tables refer only to the maximum delay before action is taken to repair the problem. The mean time to repair is also a very important factor that is only covered in the tables indirectly, through the availability targets. All of these parameters will require an adequate level of staffing of the services, including on-call coverage outside of prime shift. 12

13 What's out of DAQ to MSS at CERN? CERN must be capable of accepting data from the exps' DAQ systems and recording it on magnetic tapes at a long-term sustained average rate of 1.6 GB/s during the p-p run and 1.85 GB/s during the heavy-ion run. The target DAQ-MSS rate achievable at Dec 05 is 750 MB/s, rising to the full nominal data rate of 1.6 GB/s by Aug 06.
Experiment / SIM / SIMESD / RAW / Trigger / RECO / AOD / TAG
ALICE: 400 KB / 40 KB / 1 MB / 100 Hz / 200 KB / 50 KB / 10 KB
ATLAS: 2 MB / 500 KB / 1.6 MB / 200 Hz / 500 KB / 100 KB / 1 KB
CMS: 2 MB / 400 KB / 1.5 MB / 150 Hz / 250 KB / 50 KB / 10 KB
LHCb: 400 KB / - / 25 KB / 2 kHz / 75 KB / 25 KB / 1 KB
13
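A back-of-the-envelope cross-check of the raw component of that rate, using only the RAW sizes and trigger rates from the table above (a sketch; the 1.6 GB/s MoU figure also covers other streams and the heavy-ion programme, so only the p-p raw-data component is estimated here):

# RAW event size (MB) and trigger rate (Hz) from the table above
experiments = {
    "ALICE": (1.0, 100),    # p-p running; heavy-ion rates are much higher
    "ATLAS": (1.6, 200),
    "CMS":   (1.5, 150),
    "LHCb":  (0.025, 2000),
}

total = 0.0
for name, (raw_mb, trigger_hz) in experiments.items():
    rate = raw_mb * trigger_hz          # MB/s of raw data towards mass storage
    total += rate
    print(f"{name:5s}: {rate:6.1f} MB/s")
print(f"Sum of p-p RAW streams: {total:.0f} MB/s")  # ~0.7 GB/s of the 1.6 GB/s target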

14 Nominal MoU data rates at each T1 [Note: throughput is measured network-tape at each T1, and disk-network at CERN] The VO-view is different from the WLCG-view: acceptance of an agreed share of raw data from T0 (keeping up with DAQ) + 1st-pass reco data. ALICE mostly relies on GridKa; ATLAS on BNL, IN2P3, NL; CMS on FNAL, CNAF; LHCb on IN2P3, NL 14

15 Service Challenges A mechanism by which the readiness of the overall LHC computing infrastructure to meet the exps' requirements is measured and if(/where) necessary corrected: understand what it takes to run a real and wide Grid service; a joint effort with the Tiers community to trigger resources deployment, drive activity planning and encourage distributed know-how based on realistic use patterns; ramp up essential grid services to target levels of reliability, availability, scalability, end-to-end performance. A long and hard path, done in several steps (see next slide). Service Challenge 1 and 2: building up the necessary data mgmt infrastructure to perform reliable T0-T1 transfers and permanent production services with the appropriate throughput: basically build up T0-T1 infrastructure and services to handle production transfers and data flows; no T2 involved, no exp-specific sw and no offline use-cases included. SC2 met its throughput goal (100 MB/s/site, 500 MB/s sustained out of CERN), but not its service goals. Service Challenge 3 and 4: bringing T2s into the loop and fully addressing exps' use-cases towards full production services: add site/exps goals, e.g. focus on DM, batch prod and real data access; addressing problems beyond set-up (throughput) goals: add a service phase 15

16 WLCG deployment schedule: Apr05 SC2 complete; Jun05 Technical Design Report; Jul05 SC3 throughput test; Sep05 SC3 service phase; Dec05 Tier-1 network operational; Apr06 SC4 throughput test; May06 SC4 service phase starts; Sep06 initial LHC service in stable operation; Apr07 LHC service commissioned. [Timeline plot: SC2; SC3 (preparation, setup, service); SC4; LHC service operation; cosmics, first beams, first physics, full physics run. You (IFAE 2006) are here.] 16

17 Service Challenge 3 The first SC with exps-oriented objectives. When: Jul 05 - Dec 05 (+ Jan 06). Who: T0, all T1s, a small number of T2s. Set-up (throughput) phase (Jul 05): throughput targets for each T1 are 150 MB/s network-disk and 60 MB/s network-tape, with CERN capable of supporting 1 GB/s for the transfers to disk and 400 MB/s for those to tape at the T1s. Some T1s also support T2s, which may upload simulated data and download analysis data. Service phase (Sep-Dec 05): stable operation during which the exps are committed to carry out tests of their sw chains and computing models. Includes additional sw components, including a grid WMS, a grid catalogue, mass storage mgmt services and a file transfer service. Re-run of the throughput phase (Jan 06) 17
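For scale, the set-up phase targets translate into daily volumes as in the short sketch below (illustrative only, using just the rates quoted above):

SECONDS_PER_DAY = 86_400

def tb_per_day(rate_mb_per_s: float) -> float:
    """Volume in TB moved in one day at a sustained rate in MB/s."""
    return rate_mb_per_s * SECONDS_PER_DAY / 1e6

print(f"150 MB/s to disk at a T1 : {tb_per_day(150):.1f} TB/day")   # ~13 TB/day
print(f" 60 MB/s to tape at a T1 : {tb_per_day(60):.1f} TB/day")    # ~5 TB/day
print(f"  1 GB/s out of CERN     : {tb_per_day(1000):.1f} TB/day")  # ~86 TB/day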

18 SC3 throughput phase (Jul 05) The Jul 05 throughput tests did not meet their targets: rates were ~50% higher than in SC2, but on average only ~1/2 of the target, with scarce stability. Network-disk target: 150 MB/s/T1, 1 GB/s (CERN). Network-tape target: 60 MB/s/T1 (not all sites, though). An important step to gain experience with the services before SC4. Since then: SRM on each site, dCache, gLite FTS 1.4, CASTOR fixes, a lot of debugging in transfers, network upgrades: all sw upgrades released and deployed 18

19 SC3 T0-T1 disk-disk re-run (Jan 06) Substantial improvement in the recent SC3 throughput re-run: some sites exceeded the targets, and all sites clearly have a much better handle on the systems, but there is still a lot of work to do: the daily average rate is still far away; demonstrate sustainability of met/exceeded targets, test recovery from problems, and get rid of heroic efforts! Limited by hw config at CERN to ~1 GB/s. Example: CNAF inbound 19

20 + + + = Day-view Week-view 20

21 Current T0-T1 disk-disk transfer rates Achieved (nominal) p-p data rates into T1 disk-disk (SRM) [MB/s]:
ASGC, Taipei: 80 (100) (have hit 140)
CNAF, Italy: 200
PIC, Spain: >30 (100) (network constraints)
IN2P3, Lyon: 200
GridKA, Germany: 200
RAL, UK: 200 (150)
BNL, USA: 150 (200)
FNAL, USA: >200 (200)
TRIUMF, Canada: 140 (50)
NIKHEF/SARA, NL: 250 (150)
Nordic Data Grid Facility: 150 (50)
(The original table also marks which of ALICE, ATLAS, CMS and LHCb each centre serves, and colour-codes sites meeting or exceeding the nominal disk-disk rate and sites meeting the target rate for the SC3 disk-disk re-run.)
These rates must be sustained to tape 24 hours a day, ~100 days/yr. So: it is not appropriate to focus on ~% discrepancies. Still missing are: stability + the extra capacity required to cater for outages / backlogs (namely running for 24-48 h periods at ~2 x nominal). This is currently WLCG's biggest data management challenge. 21
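The remark about running at ~2 x nominal for 24-48 h periods can be made concrete with a small sketch (example numbers only, not an official WLCG calculation):

def backlog_recovery_hours(nominal_mb_s: float, outage_hours: float, headroom_factor: float = 2.0) -> float:
    """Hours needed to clear the backlog accumulated during an outage,
    while also keeping up with the ongoing nominal data flow."""
    backlog_mb = nominal_mb_s * outage_hours * 3600
    spare_mb_s = (headroom_factor - 1.0) * nominal_mb_s   # capacity beyond nominal
    return backlog_mb / spare_mb_s / 3600

# Example: a T1 with a 200 MB/s nominal rate that was down for 24 hours
print(f"{backlog_recovery_hours(200, 24):.0f} h at 2 x nominal to catch up")  # 24 h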

22 T1-T1 transfers Take the ATLAS average T1 data flow as an example (courtesy: D. Barberis, ATLAS). Per-stream flows into and out of a T1 (tape, disk buffer, CPU farm, disk storage), per the Computing Model:
RAW: 1.6 GB/file, 0.02 Hz, 1.7K files/day, 32 MB/s, 2.7 TB/day (from Tier-0, to tape)
ESD1: 0.5 GB/file, 0.02 Hz, 1.7K files/day, 10 MB/s, 0.8 TB/day
ESD2: 0.5 GB/file, 0.02 Hz, 1.7K files/day, 10 MB/s, 0.8 TB/day (exchanged with other Tier-1s and sent to Tier-2s)
AODm1: 500 MB/file, 0.04 Hz, 3.4K files/day, 20 MB/s, 1.6 TB/day
AODm2: 500 MB/file, 3.1K files/day, 18 MB/s, 1.44 TB/day
AOD2: 10 MB/file, 0.2 Hz, 17K files/day, 2 MB/s, 0.16 TB/day
AODm2 to a Tier-2: 500 MB/file, 0.34K files/day, 2 MB/s, 0.16 TB/day
RAW + ESD2 + AODm into a T1: 3.74K files/day, 44 MB/s, 3.66 TB/day
The highest inter-T1 rates are due to the multiple ESD copies. Reprocessing takes place ~1 month after data taking (better calibs) + at the end of the yr with better calibs+algos. Is T1-T1 traffic a constant network load or only peaks? It is very exp-specific and Computing-Model dependent. ATLAS T1s at work 22
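The per-stream figures above follow from file size and file rate alone; a quick cross-check (values taken from the ATLAS example above; small differences with respect to the quoted TB/day numbers presumably reflect rounding in the original):

SECONDS_PER_DAY = 86_400

def stream_figures(file_size_gb: float, file_rate_hz: float):
    files_per_day = file_rate_hz * SECONDS_PER_DAY
    mb_per_s = file_size_gb * 1000 * file_rate_hz
    tb_per_day = mb_per_s * SECONDS_PER_DAY / 1e6
    return files_per_day, mb_per_s, tb_per_day

for name, size_gb, rate_hz in [("RAW", 1.6, 0.02), ("ESD2", 0.5, 0.02), ("AODm1", 0.5, 0.04)]:
    f, mb, tb = stream_figures(size_gb, rate_hz)
    print(f"{name:6s}: {f/1000:.1f}K files/day, {mb:.0f} MB/s, {tb:.2f} TB/day")
# RAW   : 1.7K files/day, 32 MB/s, 2.76 TB/day
# ESD2  : 1.7K files/day, 10 MB/s, 0.86 TB/day
# AODm1 : 3.5K files/day, 20 MB/s, 1.73 TB/day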

23 T1-T2 transfers Take the CMS data flow as an example (courtesy: J. Hernandez, CMS). CMS T1-to-T2 transfers are likely to be bursty and driven by analysis demands: ~10/100 MB/s for the worst/best connected T2s (2006). CMS T2-to-T1 transfers are almost entirely fairly continuous simulation transfers, ~10 MB/s (NOTE: the aggregate input rate into the T1s is comparable to the rate from the T0!). [Dataflow diagram, averaged sustained throughputs: 225 MB/s (RAW) and 280 MB/s (RAW, RECO, AOD) from Tier-0 towards Tier-1 tape; 40 MB/s (RAW, RECO, AOD) and AOD exchange between Tier-1s; 240 MB/s and 60 MB/s (skimmed AOD, some RAW+RECO) from Tier-1 to the Tier-2s; 48 MB/s and 12 MB/s (MC) from Tier-2s into the Tier-1; ~800 MB/s (AOD skimming, data reprocessing) and up to 1 GB/s (AOD analysis, calibration) between storage and worker nodes within a site; site boxes show CPU (M-SI2K), disk (0.2-1.0 PB), tape (2.4-4.9 PB) and WAN (1 Gbps to ~10 Gbps) capacities.] 23

24 INFN candidate Tier-2 sites at SC3 times (Oct 05) Torino (ALICE): FTS, LFC (LCG 2.6.0); storage space: 2 TB on dCache. Milano (ATLAS): FTS, LFC; storage space: 5.29 TB on DPM. Pisa (ATLAS/CMS): FTS (+PhEDEx), LFC, POOL f.c., PubDB; storage space: 5(+5) TB on DPM. Legnaro (CMS): FTS (+PhEDEx), POOL f.c., PubDB; storage space: 4 TB on DPM. Bari (ALICE/CMS): FTS (+PhEDEx), POOL f.c., PubDB, LFC, dCache, DPM; storage space: 1.4(+4) TB mostly on dCache. Catania (ALICE): storage space: 1.8 TB on DPM and classic SE. LHCb: CNAF 24

25 ALICE: job submissions (Jan-Feb 06) [Plot of ALICE job submissions: SUM and CNAF curves] 25

26 LHCb, ATLAS: FTS testing (Oct-Nov 05) LHCb: configuration of an entire CNAF-to-T1s channel matrix for the replication of ~1 TB of stripped DST data. ATLAS: production phase started Nov 05; e.g. INFN T1: 5932 files copied and registered at CNAF; 89 "Failed replication" events, 14 "No replicas found" 26

27 CMS: data transfer (Jul 05 - Sep-Nov 05) Intense data transfer via PhEDEx. INFN: CNAF T1 (>10 TB), Bari proto-T2 (>7 TB), LNL T2 (>4 TB). CNAF: SRM/Castor-1, Bari: SRM/dCache, LNL: SRM/DPM. Ex.: automatic daily report on the SC mailing list. Job submission via the CMS SC3 job robot, using CRAB. The job destination pattern shows a first (very rough) example of load balancing depending on data availability (i.e. publishing) at the Tiers. Top 20 CEs (not only SC jobs here). >70k SC3 jobs submitted using CRAB (out of >350k submitted by users in Q3-Q4/2005). [Plot legend: ORCA exit status = 0; ORCA exit status != 0; aborted due to Grid problems] 27

28 Take CMS as an example: T2s in the game [Plots of CMS SC3 traffic, LNL inbound and Bari inbound, with transfer periods labelled A-E] 28

29 Service Challenge 4 Aims to demonstrate that all of the offline data processing requirements expressed in the exps' Computing Models, from raw data taking through to data access, can be handled within the Grid at the full nominal data rate of the LHC. When: Apr 06 - Sep 06. Who: T0, all T1s, the majority of T2s. It will become the initial production service for LHC and be made available to the exps for final testing, commissioning and processing of cosmic ray data. Set-up (throughput) phase (Apr 06): throughput demonstration, sustaining the target data rates at each site for 3 weeks. The target is stable, reliable data transfer to the T1s at target rates to any supported SRM implementation (dCache, CASTOR, ...) + a factor 2 for backlogs/peaks. Service phase (May-Sep 06): get the basic sw components required for the initial LHC data processing service into the loop. The target is to show the capability to support the full Computing Model of each LHC exp, from simulation to end-user batch analysis at the Tier-2s 29

30 Example: SC4-phase1 disk-tape rates Target data rate [MB/s]:
Canada, TRIUMF: 50
France, CC-IN2P3: 75
Germany, GridKA: 75
Italy, CNAF: 75
Netherlands, NIKHEF/SARA: 75
Nordic Data Grid Facility: 50
Spain, PIC Barcelona: 75
Taipei, ASGC: 75
UK, RAL: 75
USA, BNL: 75
USA, FNAL: 75
(The original table also marks which of ALICE, ATLAS, CMS and LHCb each centre serves.) Apr 06: still using SRM 1.1 & current tape technology 30
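Summing the per-site targets gives the aggregate rate the T0 export has to sustain to T1 tape in this phase (a quick aggregate, assuming all listed sites take data concurrently):

# Target disk-to-tape rates (MB/s) from the table above
targets = {
    "TRIUMF": 50, "CC-IN2P3": 75, "GridKA": 75, "CNAF": 75,
    "NIKHEF/SARA": 75, "NDGF": 50, "PIC": 75, "ASGC": 75,
    "RAL": 75, "BNL": 75, "FNAL": 75,
}
print(f"Aggregate T0 export to T1 tape: {sum(targets.values())} MB/s")  # 775 MB/s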

31 SC milestones 2006 (by WLCG), Jan through Dec:
- SC3 disk-disk transfers repeat (nominal rates capped at 150 MB/s): full T1 involvement, ~1 GB/s out of CERN achieved, good rate to sites
- CHEP Workshop; T1-T1 Use Cases; SC3 disk-tape transfers repeat (~50 MB/s): all required sw baseline services deployed (SRM, LFC, FTS, CE, RB, BDII, ...); the target of 50 MB/s/site sustained with ~5 current-technology drives was met (not all sites, though)
- Detailed plan for the SC4 service agreed (mw + DM service enhancements); crucial period for Castor-2 at CERN; gLite 3.0 release: beta testing; plenty of testing of crucial components, also at T0: very important
- SC4 disk-disk (nominal rates) and disk-tape (reduced rates) tests [last was Feb 06]; gLite 3.0 release: available for distribution; disk-disk: all T1s; BNL, FNAL, CNAF, FZK, IN2P3 go up to 200 MB/s; disk-tape: all sites, MB/s
- Installation, configuration and testing of the gLite 3.0 release at sites, mostly in preparation for the Jun 06 start of SC4 production tests by the experiments, focus on T1 Use Cases
- CERN T2 Workshop: identification of key T2 Use Cases and Milestones; the workshop will define T2 targets: a lot of work ahead for exps and Tiers in Q3-Q4/2006
- T0-T1 disk-tape (full nominal rates) [last was Apr 06]: all T1s; rates MB/s depending on VOs supported and resources provided
- Start of T1-T1, T1-T2, T2-T1; T2 Milestones (+ debugging of tape results if needed): all T1s; T2s; all VOs; all offline use cases
- LHCC review (+ rerun of disk-tape tests if required)
- WLCG Service officially opened; computing capacities continue to build up; 1st WLCG conference; all sites shall have network and tape hw ~ready for production
- Final service / mw review leading to early 2007 (+ upgrades for LHC data taking?)
IFAE 2006 is here. 31

32 Summary WLCG: more and more ready to provide infrastructure/components to the LHC exps. Tiers: more and more ready to operate WLCG-enabled services to realize the Computing Models of the LHC exps. WLCG is approaching the regime through several SCs involving the Tiers: we need to practise repeatedly, and expand the scope of testing SC after SC (full DAQ-T0-T1-T2 chain + tape backend + drive with exp software); the services delivered by WLCG must match the MoU targets and the exps' requirements. Focus: prepare infrastructures for robust service delivery; define metrics for success; not good enough? re-implement, and re-measure until it works. First data will arrive next year. It is NOT an option to have things fully working later. 32

33 Acknowledgements This talk reflects the hard work of a large number of people across many sites and within the LHC exps' Collaborations, so extended thanks go to: all people in the WLCG Collaboration; all people in network, storage, farming at WLCG sites; all people (from exps and sites) specifically working on SCs. INFN SC coordination: Tiziana Ferrari (CNAF), Michele Michelotto (PD) 33
