
1 PIC Tier-1 Report 1st Board for the follow-up of GRID Spain activities Madrid 5/03/2008 G. Merino and M. Delfino, PIC

2 New Experiment Requirements. New experiment requirements for the Tier-1s presented on 18-Sep-07 at the Fall C-RRB. [Tables: CPU (ksi2k), Disk (TB) and Tape (TB) required per experiment (ALICE, ATLAS, CMS, LHCb) and in total; the numeric values were shown on the slide.]

3 Experiment requirements change. We can compare how the requirements have changed from Oct-2006 to Sep-2007: ATLAS disk and tape have increased slightly, while CMS shows very big changes (less CPU and tape, more disk). Relative change, (New-Old)/Old:
CPU (ksi2k): ATLAS 0% 0% 0%, CMS -23% -4% -6%, LHCb 0% 2% 1%
Disk (TB): ATLAS 8% 6% 2%, CMS 29% 14% 12%, LHCb 0% 0% 0%
Tape (TB): ATLAS 5% 6% 4%, CMS -25% -36% -37%, LHCb 0% 0% 0%
(Three values per experiment, as on the original slide.)
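The percentage table above is just the relative difference between the Sep-2007 and Oct-2006 requirement tables. A minimal sketch of that computation in Python, with hypothetical absolute values since the underlying requirement tables were only shown graphically on the slides:

```python
# Minimal sketch of the (New - Old) / Old comparison shown above.
# The absolute requirement values below are placeholders (the real
# figures were in the slide tables and are not reproduced here).

old_req = {"CMS CPU (kSI2k)": 1000.0}   # hypothetical Oct-2006 value
new_req = {"CMS CPU (kSI2k)": 770.0}    # hypothetical Sep-2007 value

def relative_change(old: float, new: float) -> float:
    """Return the fractional change (new - old) / old."""
    return (new - old) / old

for key in old_req:
    change = relative_change(old_req[key], new_req[key])
    print(f"{key}: {change:+.0%}")   # e.g. "CMS CPU (kSI2k): -23%"
```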

4 PIC Pledges updated. The PIC pledges are recomputed by applying the algorithm, agreed with each experiment and used in the MEC project proposal, to the new experiment requirements. [Tables: resulting PIC pledges for CPU (ksi2k), Disk (TB) and Tape (TB) per experiment (ATLAS, CMS, LHCb) and in total; the numeric values were shown on the slide.]

5 PIC Pledges: new vs old. Comparing the PIC pledges resulting from the new requirements with those resulting from the old requirements, we find that PIC will have to provide less CPU and tape and more disk:
CPU (ksi2k): diff -9% -2% -5%
Disk (TB): diff 14% 8% 5%
Tape (TB): diff -17% -24% -24%
(The new and old absolute values were shown on the slide.) Applying the same cost estimations as those in the project proposal, the impact of changing to the new pledges on the cost is minimal (about 170 k€ cheaper, out of 5.8 M€).

6 PIC capacity growth plan towards the 2008 MoU pledges. We will look at each category in detail in the next slides. CPU MoU-08: 1509 ksi2k. Disk MoU-08: 967 TB. Tape MoU-08: 953 TB.

7 CPU power calibration. According to vendor CPU power specifications, by the end of 2006 the PIC CPU capacity was greater than the MoU-2007 pledge (501 ksi2k). The system is based on HP servers with Intel Xeon Dual Core CPUs. Before summer 2007 the LCG MB agreed that all sites should adopt a common way to run the SpecInt2000 benchmark: one benchmark run per core, low-optimisation gcc flags, and a +50% factor (see the sketch below). PIC bought the spec2k benchmark and calibrated the CPUs in Sep-07. The result was a drop of about 30% in measured vs. vendor power. The new CPU calibration was applied from 1st Oct 2007 onwards. The total CPU power for the PIC Tier-1 with the new calibration is ~480 ksi2k, about 5% below the MoU-2007 pledge.
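The agreed normalisation recipe above can be summarised in a short sketch. The node count and the per-core SI2k scores below are hypothetical; only the +50% factor and the roughly 30% measured-vs-vendor gap come from the slide:

```python
# Hedged sketch of the WLCG SpecInt2000 normalisation described above:
# run the benchmark once per core with low-optimisation gcc flags,
# then apply the agreed +50% factor.

AGREED_FACTOR = 1.5  # +50% factor agreed by the LCG MB

def node_ksi2k(cores: int, measured_si2k_per_core: float) -> float:
    """kSI2k of one worker node under the agreed recipe."""
    return cores * measured_si2k_per_core * AGREED_FACTOR / 1000.0

# Hypothetical farm: 100 dual-CPU dual-core worker nodes.
nodes = 100
cores_per_node = 4
measured_per_core = 1050.0  # hypothetical SI2k measured with low-opt gcc flags
vendor_per_core = 1500.0    # hypothetical vendor-quoted SI2k per core

measured_total = nodes * node_ksi2k(cores_per_node, measured_per_core)
vendor_total = nodes * node_ksi2k(cores_per_node, vendor_per_core)

print(f"measured capacity: {measured_total:.0f} kSI2k")
print(f"drop vs vendor:    {1 - measured_total / vendor_total:.0%}")  # ~30%
```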

8 CPU Capacity ramp-up. A CPU purchase was initiated in Q to ramp up towards the MoU pledge and compensate for the small deficit in 2007, based on HP blades with Intel Xeon quad-core CPUs. Problem: a mismatch in the electrical hookup prevented immediate deployment. Fix: the blades will be put into racks during the yearly electrical shutdown on 18/03/08. An extension of the HP blade-centre system is being launched that should ramp PIC capacity over the MoU-2008 pledge by May.

9 CPU Accounting. [Chart: PIC CPU accounting from Jan-06 until Feb-08 from the WLCG reports, in ksi2k*day (walltime) per experiment (ATLAS, CMS, LHCb), compared with the installed capacity for the Tier-1 and the MoU pledge. Annotations: WN OS migration SL3 to SL4, SpecInt2000 normalisation change in Oct-07, planned HP blades deployment in May-08.]

10 CPU Efficiency. [Chart: CPU efficiency (cputime/walltime) per experiment (ATLAS, CMS, LHCb), Jan-06 to Jan-08.] ATLAS: low efficiency in Sep-07 due to a problem with the ATLAS jobs using the WN temporary local scratch area. CMS: low efficiency in Sep-07 due to a problem with CMS software affecting mostly the CSA07 merging jobs. ALL: the low efficiencies in Oct-Dec 2007 are an overall problem: a FEW users sending LOTS of jobs with efficiency close to ZERO. PIC has been publishing user-level accounting in APEL for months; this should help experiment managers to spot/ban inefficient users (see the sketch below).
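As an illustration of the kind of check that user-level accounting enables (this is not PIC's actual tooling), a sketch that computes the cputime/walltime efficiency per user and flags the near-zero ones; the record format and the 10% threshold are assumptions:

```python
# Illustrative sketch: per-user CPU efficiency from accounting records.
from collections import defaultdict

# (user, cpu_time_s, wall_time_s) -- hypothetical accounting records
records = [
    ("user_a", 3500.0, 3600.0),
    ("user_a", 3400.0, 3600.0),
    ("user_b", 60.0, 3600.0),
    ("user_b", 30.0, 3600.0),
]

cpu = defaultdict(float)
wall = defaultdict(float)
for user, cpu_t, wall_t in records:
    cpu[user] += cpu_t
    wall[user] += wall_t

THRESHOLD = 0.10  # assumed cut-off for "inefficient" users
for user in sorted(cpu):
    eff = cpu[user] / wall[user]
    flag = "  <-- candidate for follow-up" if eff < THRESHOLD else ""
    print(f"{user}: efficiency {eff:.1%}{flag}")
```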

11 Disk Storage Service. Since Jun-06 PIC has been running an SRM-disk service based on dCache. The service ran stably from Oct-06 to Jun-07, providing 80-90 TB. Jun-07: service upgraded to dCache 1.7, with improved stability; the main issue (gridftp doors hanging) disappeared. Purchases of new disk in the 1st half of 2007: Sun X4500 servers. Apr-07 to Oct-07: deployment of 14 new Sun X4500 servers; allocated capacity ramped up to 280 TB. New capacity (further Sun X4500 servers) was purchased during Q and deployed in Feb-08; it is being used today for CCRC08. We expect to ramp capacity up to near 600 TB.

12 Disk Accounting. [Chart: TB of disk per experiment (ATLAS, CMS, LHCb) from Jan-06 to May-08, compared with the allocated capacity for the Tier-1 and the MoU pledge (with 70% efficiency). Annotations: start of the SRM-disk service with dCache 1.6, upgrade to dCache 1.7, upgrade to dCache 1.8 and configuration of SRMv2.2 ready for CCRC08, +14 Sun X4500, +13 Sun X4500, planned new Dell PowerVault SAS.]

13 Sun X4500 disk servers. Disk servers with 48 HDs each (500 or 1000 GB). We run them with the Solaris 10 OS and the ZFS file system. Good integration with dCache (Java). Good measured performance in the dCache production service: one of the X4500 servers was receiving data and sending it to tape last week at rates of MB/s sustained both ways.

14 Storage LAN: distributed resilient network architecture. In order to fully exploit the throughput capacity of each server, every 18 disk servers are connected using 3 bridged switches: 24 Gbps of available bandwidth for pool-to-pool replication and 20 Gbps of available bandwidth to the outside (see the back-of-envelope sketch below). Originally designed as a resilient Tier-0 data receptor, this setup will now be expanded to all dCache disk servers.
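A back-of-envelope sketch of what these aggregate figures mean per disk server, assuming the bandwidth is shared evenly across the 18 servers of a group:

```python
# Per-server share of the aggregate bandwidths quoted above
# (3 bridged switches per group of 18 disk servers).

SERVERS_PER_GROUP = 18
POOL2POOL_GBPS = 24.0   # pool-to-pool replication bandwidth per group
EXTERNAL_GBPS = 20.0    # bandwidth to the outside per group

def per_server_mb_s(aggregate_gbps: float, servers: int) -> float:
    """Fair share per server, converted from Gbps to MB/s (1 B = 8 b)."""
    return aggregate_gbps / servers * 1000.0 / 8.0

print(f"pool-to-pool share: {per_server_mb_s(POOL2POOL_GBPS, SERVERS_PER_GROUP):.0f} MB/s per server")
print(f"external share:     {per_server_mb_s(EXTERNAL_GBPS, SERVERS_PER_GROUP):.0f} MB/s per server")
```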

15 Disk Capacity Planning. [Chart shown on the slide.]

16 Tape Storage Service. In early 2007, seeing that the deployment of Castor2 was more difficult than originally foreseen, PIC decided to explore an alternative solution for the MSS: dCache + Enstore. Apr-07: decided to go for the dCache + Enstore solution for the MSS and drop Castor. Excellent support from FNAL for deploying the new Enstore service, and a PIC-FNAL collaboration agreement for future developments. Sep-07: the SRM-tape service based on dCache + Enstore went into production, ready for the start of CMS CSA07.

17 Castor Decommissioning. CMS: migration done (the easy one). In Q CMS confirmed that all of the data on tape at that time could be deleted; only 500 GB were staged and kept on disk until Jan-08 for a few users. LHCb: migration of about 20 TB finished in Feb-2008, with very good coordination with LHCb and a smooth migration; Castor is ready to be decommissioned for LHCb. ATLAS: everything is ready to migrate ~16 TB; waiting for the end of the CCRC08 February tests to release some manpower on both sides. By the end of March we should have completed the Castor decommissioning for the 3 LHC experiments.

18 MSS roadmap, mid-2006 to mid-2007. ~200 TB of 9940B media allocated; it will be unavailable during the STK robot upgrade. Currently ~400 TB of LTO3 are ready to be allocated.

19 Tape Service Metrics. Enstore performance was tested during CCRC08 (with 4 LTO3 drives): writing up to near 11 TB/day and reading up to near 9 TB/day. [Plots: read and write rates.]
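For reference, these daily volumes translate into the following average sustained rates; whether the slide's TB are decimal or binary is not stated, so both conventions are shown:

```python
# Quick conversion sketch for the tape rates quoted above.

def tb_per_day_to_mb_s(tb_per_day: float, binary: bool = False) -> float:
    """Convert a daily volume to an average rate in MB/s (MB = 10^6 B)."""
    bytes_per_tb = 2**40 if binary else 10**12
    return tb_per_day * bytes_per_tb / 86400 / 1e6

for rate, label in [(11, "write"), (9, "read")]:
    print(f"{label}: {tb_per_day_to_mb_s(rate):.0f} MB/s (decimal TB), "
          f"{tb_per_day_to_mb_s(rate, binary=True):.0f} MB/s (binary TB)")
```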

20 Tape Service Metrics. In January 2008 the MB agreed that Tier-1s should report some MSS metrics in a format proposed by CERN. PIC was one of the first Tier-1s to report MSS metrics and is now collecting this data for post-CCRC08 analysis.

21 Tape Usage Accounting. [Chart: TB of tape per experiment (ATLAS, CMS, LHCb) from Apr-06 to Feb-08, compared with the installed capacity and the MoU pledge. Annotations: Castor freeze, Enstore in production.]

22 Costs tracking: CPU. In December 2007 B. Panzer released an update of the CERN cost trends document. We can compare this new cost estimation (PASTA-07) with the one used in Oct-06 in the project proposal (PASTA-05, agreed by all the projects) and with the actual cost of the CPU purchases executed. [Chart: Euros/ksi2k vs time, Dec-04 to Dec-12, for PASTA-05 (project proposal Oct-06), PASTA-07 and PIC purchases.]

23 Costs tracking: Disk. Comparison of the cost estimations and the actual cost of the disk purchases executed. [Chart: Euros/TB vs time, Dec-04 to Dec-12, for PASTA-05 (project proposal Oct-06), PASTA-07 and PIC purchases.]

24 Site Reliability. Site reliability has been measured for the T0 and T1s since May-06 and is a very useful tool in pushing the sites towards production services. PIC reliability has been mostly above the average, and PIC is one of the only 3 sites reaching the target since July. [Chart: reliability (60%-100%) from May-06 to Jan-08 for PIC, the T0/T1 average and the target.]
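A sketch of how such a monthly reliability figure is typically derived from the SAM test results, assuming the usual WLCG-style definitions (availability = time OK / total time, reliability = time OK / (total time - scheduled downtime)); the hours used are hypothetical:

```python
# Hedged sketch of the availability/reliability figures discussed above.

def availability(time_ok: float, total: float) -> float:
    return time_ok / total

def reliability(time_ok: float, total: float, scheduled_down: float) -> float:
    return time_ok / (total - scheduled_down)

total_hours = 30 * 24        # one month
ok_hours = 690.0             # hypothetical hours with all SAM tests passing
scheduled_down_hours = 12.0  # hypothetical announced maintenance

print(f"availability: {availability(ok_hours, total_hours):.1%}")
print(f"reliability:  {reliability(ok_hours, total_hours, scheduled_down_hours):.1%}")
```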

25 Wide Area Network, LHC-OPN. A shared 10 GE lambda carries both the LHC-OPN and the NREN connection, with a unique physical path for the fibre's last mile. Two VLANs run over the 10 GE link: a 2 Gbps (best effort) VLAN for PIC-Tier-2 traffic and an LHC-OPN VLAN for Tier-0-PIC and PIC-Tier-1 traffic. [Diagram: VLAN layout and bandwidths.]

26 Network's future at PIC. The 10 Gbps LHC-OPN connection: in the near future the last-mile unique-path problem will be solved, and during 2009 PIC will have a 10 Gbps backup link. The 2 Gbps non-OPN link will be upgraded to 10 Gbps by the Anella Científica. Deployment of LAN redundancy with Spanning Tree.

27 LHC-OPN deployment. Oct-06: 10 Gbps lambda to the RedIRIS POP at Barcelona. May-07: last mile to PIC deployed. Jun-07: 10 Gbps certification tests. To be able to make use of the 10 Gbps lambda to CERN we needed to move to a new IP address range (new AS). Sep-07: GridFTP doors moved to the new IP range; first LHC incoming data traffic flowing through the new OPN link. Nov-07: all of the storage servers moved to the new IP range; first LHC outgoing data traffic flowing through the OPN link.

28 WAN performance: import. The network traffic entering PIC has been sustained at the level of 2 Gbps over the last few days, showing good stability. On Feb 25 it reached a peak of about 4 Gbps.

29 WAN performance: import. CMS writing data to PIC at MB/s during several days.

30 WAN performance: export. CMS data export from PIC to more than 20 sites at MB/s during several days.

31 LCG Spain Network Working Group. With the LHC start approaching, the load on the LCG Tiers is ramping up, and this has an impact on the academic networks. Up to now the performance of the Spanish network for the LCG sites has been very good. To ensure that this situation is maintained, it has been agreed with the Director of the Spanish NREN RedIRIS to create a Network Design and Operation Working Group. The initial membership could be a representative of each Spanish LCG Tier and a representative of RedIRIS, leaving the door open for the future inclusion of other relevant actors: regional academic networks, Portuguese sites and networks.

32 WLCG High Level Milestones: 24x7. PIC does not plan to have staff on site 24x7. Instead, critical services are deployed in a resilient way and on robust h/w, and critical failures trigger a notification to a person on call who can intervene remotely or, if really needed, go to PIC. The Manager on Duty (weekly shift) has two main duties: first-line incident solving (react to critical alarms and follow recovery procedures) and acting as the single contact point for support (regularly poll the support mail and redirect issues to the relevant experts). Calendar: 2005: Phase-0 started, shifts only during working hours. Oct-07: the MoD procedures document to operate the Tier-1 services was released. Dec-07: Phase-1 of the MoD on-call started; the MoD works with service experts to ensure that the relevant alarms are deployed and the associated procedures documented. Phase-2 of the MoD on-call (alarms and procedures for production services stable): start date yet to be set.

33 WLCG HLMs: VO-box SLAs. Sites and experiments agreed that a new host was needed at the T1s to deploy experiment-specific agents providing various services (mainly data management): the VO-boxes. There was initial concern at the sites about such services, since the responsibility boundary between experiment and site was fuzzy. The MB agreed that T1s should sign SLAs with the experiments defining the responsibilities and the procedures to follow in the event of problems. Status at PIC: LHCb: proposal presented in Oct-07, SLA signed in Jan-08. CMS: a preliminary draft was proposed by mid-07, but there has been no progress since then; we now propose to CMS an SLA similar to the one signed with LHCb. ATLAS: the ATLAS VO-boxes are still being run at CERN, not at the (at least European) Tier-1s; until the schedule for deploying the VO-box in production at PIC is clarified by ATLAS, the SLA is on hold.

34 LCG-3D service. Oracle databases synchronized among the T0 and T1s (Oracle Streams). ATLAS will run the CondDB (and possibly the TAGs) on it; LHCb will run the CondDB and the LFC on it. Up and running at PIC since April. Successfully tested: the CondDB by both experiments, Oracle Streams, monitoring, and backup/recovery procedures. Last year we called a tender for a specific (Oracle-certified) h/w platform on which to deploy these database clusters. The plan is to deploy all of the Oracle database backends for any other service on the same RAC platform as LCG-3D: FTS and LFC-ATLAS for the moment. The final h/w for the production instances is being deployed now: 28 TB of NetApp FibreChannel disk, 9 Fujitsu Siemens blade servers (dual-dual Xeon) and 2 Brocade FibreChannel switches.

35 File Transfer Service. Currently running on two servers: one head node with the agents and the web service, and one Oracle DB backend node. Robustization plan: split the head node into five hosts (web service on 2 load-balanced nodes, agent daemons spread across 3 hosts) and move the Oracle DB backend into a RAC.

36 LFC File Catalogue. Currently we only provide this service for ATLAS, running on two servers: one head node with the front-end daemon and one MySQL DB backend node. Robustization plan: split the head node into two load-balanced hosts and migrate the DB backend to Oracle RAC. A duplicate of this solution will be deployed for the LHCb LFC.

37 WLCG activities involving all Tier-1s. In the last year the PIC Tier-1 has been participating in the WLCG testing activities driven by the LHC experiments. Main goal: ramp up the load on the infrastructure towards nominal values and test the stability and performance of the services. In Sep-07 the project decided to schedule two test periods in the 1st half of 2008, seen as the last opportunity to try and test the infrastructure under realistic conditions before data taking: CCRC08, the Combined Computing Readiness Challenge, with phases in Feb-08 and May-08. In the following slides we briefly present the most relevant results obtained by PIC, emphasizing those of the recently run CCRC08.

38 ATLAS T0 Throughput test, Oct 2007. From 16 until 22 October 2007 ATLAS ran a throughput test exporting data from CERN to the 10 Tier-1s at the same time, coinciding with the CMS CSA07 transfers. ATLAS managed to exceed 1 GB/s out of CERN. CERN to PIC reached a 130 MB/s daily average for ATLAS, with a very low error rate.

39 ATLAS Cosmic Runs. ATLAS is regularly taking cosmic muon data to test the whole detector and data acquisition chain. The M4 run (23/08/ /09/2007) was the first test of exporting real data to the Tier-1s: about 13 TB of data were successfully transferred to PIC and then to the Tier-2s. By the end of August the Enstore tape system was still not in production at PIC, so the M4 data was first stored on disk and migrated to tape a few weeks later. In the M5 run (22/10/ /11/2007) nominal shares for the Tier-1s were imposed (~3 TB to PIC); the Enstore service was already in production, so this time the data was sent directly to tape.

40 ATLAS CCRC Feb08. The first two weeks were spent setting up SRM tokens and solving problems at the T0. Real activities started in week 3 (17-23 Feb 2008): data produced with the load generator was pushed out of CERN, starting at 25% of the peak rate and then increasing in steps to 100%, and distributed to all 10 T1s according to the MoU shares. PIC results for week 3: sustained 700 MB/s for 2 days, with many peaks above 1.1 GB/s for several hours. 6 out of the 10 Tier-1s ran smoothly, and PIC was one of them. [Plots: throughput and errors.]

41 ATLAS CCRC08: PIC to T2s distribution. On 26 Feb the data distribution from PIC to the regional T2s was tested with successful results: the data arrived at most of the T2s with very good efficiency.

42 CMS Activity: continuous massive data transfers. Last year CMS started a programme of continuous data transfers among all its sites. Goal: test the service stability under load conditions for long periods. This has been an extremely useful tool for deploying the Data Transfer Services and understanding how to operate them. In the last year PIC imported 1520 TB from other CMS sites (~10% of the total imports to all T1s) and exported 1400 TB to other CMS sites (~6% of the total exports from all T1s): a total of almost 3 PB of import + export from PIC in the last year, i.e. ~250 TB/month or ~8 TB/day.

43 CMS CCRC Feb08 targets (see the aggregation sketch below).
T0 to T1 targets: to disk, 40% of the nominal rate (25 MB/s for PIC) sustained for 3 days; to tape, migrate the above data within 1 week with a stable pattern.
T1 to T1 targets: 50% of the overall 2008 nominal outbound rate to T1s (4 MB/s to each T1 for PIC); exchange data with at least 3 T1s, of which at least 1 must not be in the same continent.
T1 to T2 targets: demonstrate the aggregate T1 to T2 target to regional T2s only; for PIC, 19.4 MB/s to T2-Spain and 3.1 MB/s to T2-Portugal.
T2 to T1 targets: traffic only from regional T2s according to the Computing Model; for PIC, 1.4 MB/s from T2-Spain and 0.5 MB/s from T2-Portugal.
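A small sketch that aggregates the PIC targets listed above into total inbound and outbound rates; the only assumption beyond the slide numbers is taking exactly 3 partner T1s for the T1-to-T1 exchange:

```python
# CCRC08 (Feb-08) CMS transfer targets for PIC, as quoted on the slide.
targets_mb_s = {
    "T0 -> PIC (to disk)":   25.0,   # 40% of nominal, sustained 3 days
    "PIC -> each other T1":   4.0,   # 50% of the 2008 nominal outbound rate
    "PIC -> T2-Spain":       19.4,
    "PIC -> T2-Portugal":     3.1,
    "T2-Spain -> PIC":        1.4,
    "T2-Portugal -> PIC":     0.5,
}

inbound = (targets_mb_s["T0 -> PIC (to disk)"]
           + targets_mb_s["T2-Spain -> PIC"]
           + targets_mb_s["T2-Portugal -> PIC"])
# Assumption: PIC exchanges data with exactly 3 other T1s at 4 MB/s each.
outbound = (3 * targets_mb_s["PIC -> each other T1"]
            + targets_mb_s["PIC -> T2-Spain"]
            + targets_mb_s["PIC -> T2-Portugal"])

print(f"inbound target:  {inbound:.1f} MB/s")
print(f"outbound target: {outbound:.1f} MB/s")
```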

44 CMS CCRC08: T0 to PIC test. As presented by D. Bonacorsi at the CMS computing meeting on 14-Feb.

45 CMS CCRC08: PIC to T1s test. As presented by D. Bonacorsi at the CMS computing meeting on 14-Feb.

46 CMS CCRC08: PIC to T2s test. As presented by D. Bonacorsi at the CMS computing meeting on 14-Feb.

47 CMS CCRC08: T2s to PIC test. As presented by D. Bonacorsi at the CMS computing meeting on 14-Feb.

48 LHCb CCRC08. Raw data distribution to the Tier-1s, with shares according to the resource pledges from the sites: test performed during the 3rd week of Feb-08 (OK for 6 out of 7 T1s). Data reconstruction at the Tier-0 and Tier-1s: production of rDST stored locally, with data access using SRMv2.2. Report from yesterday: OK at PIC, with two issues: a wrong mapping of the production role to the local batch accounts, and occasional "no available space" errors (a dCache bug affecting several sites). Both were promptly addressed and solved by PIC experts.

49 CCRC08 Combined transfers. The 4 experiments managed to export data from CERN to all the Tier-1s simultaneously for several days in February. The CERN total export rate exceeded 2 GB/s for several hours on Feb 22nd.

50 CCRC08 Combined transfers. Looking at the breakdown by site, one can see that (a) PIC is visible and (b) it is reaching the nominal transfer rates from CERN.

51 PIC support to LHCb central services. DIRAC is the software infrastructure that manages the use of the LHCb computing resources. The DIRAC Monitoring and Accounting systems have historically been running at PIC due to the important involvement of the Spanish groups (UB and USC) in the development of DIRAC. Since the start of the Spanish collaboration in the DIRAC project, PIC has fully supported these activities by providing the necessary servers to run the DIRAC services. In Jan-08 PIC signed a formal agreement with LHCb to provide support for these LHCb central services, hosting the following nodes: database server, DIRAC server, web server and development server.

52 Experiment liaisons at PIC. The successful operation of a multi-experiment Tier-1 requires one person on site per experiment acting as liaison. Two liaisons are currently at PIC: ATLAS (X. Espinal) and CMS (J. Flix). The LHCb liaison was half-funded by the LHCb-T2 project; funding is available at the Tier-1 from the start of the current project and the recruiting process is ongoing. The experiment liaisons are doing a very good job and service to the collaboration. ATLAS, X. Espinal: co-coordinator of the operations shifts, member of the central MCprod and Data Processing team, member of the DDM regional team for Spain and Portugal. CMS, J. Flix: co-coordinator of the CCRC08 Tier-1 reprocessing tests, coordinator of the CCRC08 FTS-related issues, deputy member of the CMS CRB. LHCb, A. Casajús: very useful coordination work for the VO-box SLA and the Castor-Enstore migration. The PIC Tier-1 ATLAS and CMS liaisons organize regular meetings with the Spanish and Portuguese sites to coordinate T1-T2 operations.

53 Some issues of concern. The delay of the CPU ramp-up due to the electrical problem with the blade centres, and the impact it has on our ability to stress the storage service; we are confident that we will catch up after the electrical shutdown at Easter. The procedure for experiments to update requirements at the C-RRBs: the CMS change of ~25% for 2008 was received at the very last moment; we would like to see the requirements/pledges update calendar described in the MoU honoured (maybe once the RSG is operative). Achieving and maintaining adequate reliabilities and availabilities for the complete integrated chain: T0 + Spanish T1 + ATLAS/CMS/LHCb associated T2s in Spain and Portugal. The current success of the WLCG is largely thanks to the underlying EGEE infrastructure it builds on; EGEE-III is about to start and will be the last phase before EGI, so the Spanish LCG sites need to push and be active in the recently created Red Nacional de e-Ciencia and follow the NGI creation.

54 Summary. PIC has participated in all of the tests executed within WLCG in the last years to demonstrate readiness for LHC data taking. Service reliability has been improving with time and stays above the average for Tier-1s. Successful results were obtained in the WLCG readiness tests of the Tier-1 services for the three experiments; an example is the successful deployment of the new SRMv2.2 service in time for CCRC08, while at the same time deploying a completely new MSS backend (Enstore), migrating the experiments' data from Castor to Enstore and developing a new driver to be able to use the IBM robot with Enstore. The service capacity is steadily ramping up; the procurement execution is slightly delayed, so we will not reach the 2008 MoU pledges by 1st April, but we are confident that we will make it by May.

55 Backup Slides

56 Top-down cost estimate (from the project proposal, Jun-2007). Recalling the new PIC pledges: [Table: PIC T1 capacity for CPU (ksi2k), Disk (TB) and Tape (TB) from 2006 onwards; values shown on the slide.] The cost estimation can be derived from the previous slide's table, assuming equipment is decommissioned after 3 years. [Table: PIC T1 cost (€) per year for CPU, Disk, Tape and in total; values shown on the slide.] The total cost, the funding request for h/w (savings from the previous project + PIC-BASE) and the funding obtained for h/w were given on the slide.
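A hedged sketch of a top-down cost model of this kind: size each year's purchase to reach the pledge and retire (hence re-buy) equipment after 3 years. The 2008 value is the MoU disk pledge quoted earlier in the report; the later-year pledges and all unit costs are hypothetical placeholders, not the figures from the slide:

```python
# Illustrative rolling-replacement cost model (3-year equipment lifetime).

LIFETIME_YEARS = 3

pledge_tb = {2008: 967, 2009: 1100, 2010: 1900, 2011: 2500}   # 2008 from MoU; rest hypothetical
euro_per_tb = {2008: 700, 2009: 550, 2010: 400, 2011: 300}    # hypothetical cost trend

purchases = {}  # TB bought per year
for year in sorted(pledge_tb):
    # Capacity still alive: purchases younger than LIFETIME_YEARS.
    alive = sum(v for y, v in purchases.items() if year - y < LIFETIME_YEARS)
    retired = purchases.get(year - LIFETIME_YEARS, 0)
    purchases[year] = max(0, pledge_tb[year] - alive)
    cost_keur = purchases[year] * euro_per_tb[year] / 1000.0
    print(f"{year}: buy {purchases[year]:5.0f} TB for {cost_keur:6.0f} kEUR "
          f"(installed {alive + purchases[year]:5.0f} TB, retired {retired:4.0f} TB)")
```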

57 Tier-1 cost estimation with the new pledges (C-RRB Fall-07). Applying exactly the same cost estimations as those in the project proposal, the impact of changing to the new pledges was minimal. [Table: Tier-1 CPU, Disk, Tape and total cost (euros) per year; values shown on the slide.] The total cost is to be compared to the total cost with the original pledges in the proposal.

58 Tier-1 cost estimation with the new pledges (C-RRB Fall-07) and the new PASTA-07 (intermediate). Applying the new cost estimations published by B. Panzer in Dec-2007: [Table: Tier-1 CPU, Disk, Tape and total cost (euros) per year; values shown on the slide.] The good news is that the cost for 2009 and 2010 seems more stable.

59 Tier-1 cost estimation with the new pledges (C-RRB Fall-07) and the new PASTA-07 (bare). Applying the new cost estimations published by B. Panzer in Dec-2007: [Table: Tier-1 CPU, Disk, Tape and total cost (euros) per year; values shown on the slide.]

60 Tier-2 Availability. Tier-2 site availability/reliability has been officially tracked by the LCG project since October 2007 (reports produced). [Chart: availability (30%-110%) from Oct-07 to Jan-08 for PIC, ES-ATLAS-T2, ES-CMS-T2 and ES-LHCb-T2.]

61 WLCG High Level Milestones. The progress of the Tier-0 and Tier-1s is closely tracked within the MB through capacity and reliability milestones (the High Level Milestones). The status of the HLMs is reported quarterly to the project OB.

62 LCG 2007 Accounting Report.

63 CPU cumulative accounting?? Cumulative accounting chart?? Also redo it with the installed capacity from Nov-06 to Oct-07 reduced by 30%, to see how badly we are doing with the new calibration.

64 CPU Eff Summary Tier1s

65 Old Experiment Requirements. Old experiment requirements, from Oct-2006 (used in the proposal for the MEC projects). [Tables: CPU (ksi2k), Disk (TB) and Tape (TB) required per experiment (ALICE, ATLAS, CMS, LHCb) and in total; values shown on the slide.]

66 How is PIC connected to non-OPN sites? PIC-T2 traffic flows through RedIRIS: a 2 Gbps connection with l'Anella Científica (the regional REN) / RedIRIS (the NREN). The RedIRIS network is fully redundant. [Plots: non-OPN traffic with PIC; all traffic with PIC (OPN and non-OPN).]

67 CMS export to Regional T2s. Data export to each regional T2 at the level of >20 MB/s daily average.

68 ATLAS MCprod contribution. Distribution of MCprod walltime for jobs run in Jan-Feb 2008 in the ATLAS PIC cloud. Main contributors: UAM, PIC, IFIC. Good success rate.

69 CMS upload to PIC from Regional T2s. Upload from the regional T2s to PIC demonstrated at the level of 10 MB/s daily for every site, sustained over several days.

70 Upload/export PIC-T2s, last week. Upload to PIC: CIEMAT and Coimbra great quality; Lisbon is having some problems; IFCA medium quality. Export from PIC: CIEMAT great quality; Coimbra has some problems; Lisbon and IFCA did not restart downloads.

71 Issue exporting CMS LoadTest data from PIC. From 24 until 29 Feb we observed a decrease in the quality of exports from PIC to all of the sites. We suspect the reason is that these export tests use quite a small sample of test files: many sites try to read the same files again and again, dCache detects those files as very hot and somehow does not handle the situation properly. Solved on 29 Feb by restarting a dCache cell.

72 DDT link commissioning, STAR-IFCA (presented by J. Flix at the CMS-PIC meeting, 18-Dec-2007). SWE in DDT (use of T1-STAR channels): PIC really stable. The IFCA Tier-2 was able to commission all import links from all Tier-1 centres using the PIC FTS (via the STAR-IFCA channel, all except ASGC). At the beginning of December we asked the SWE Tier-2s to use T1-STAR channels from the file origins (to accommodate the CMS plan); PIC-IFCA transfers were then routed through a 622 Mbps path, causing a rate drop, and RedIRIS was contacted. [Figure annotations: RAL, FNAL, PIC*, IN2P3, IFCA change.] The T1-STAR channels need hard debugging: the only way to keep some Tier-1 import links commissioned is by using the PIC FTS.

73 Total Network Traffic. MRTG monitoring (total input/output to/from PIC), last week: on Monday 25-Feb we exceeded 5 Gbps (30-minute average) input to PIC.
