August 31, 2009 Bologna Workshop Rehearsal I

1 August 31, 2009 Bologna Workshop Rehearsal I

2 The CMS-HI Research Plan
- Major goals
Outline:
- Assumptions about the Heavy Ion beam schedule
- CMS-HI Compute Model: guiding principles and actual implementation
- Computing
  - Compute power: the main focus in this talk is on the T0 and T1 reconstruction pass stages; requirements for the analysis and simulation stages are in the afternoon session talks
  - Wide area networking
  - Disk and tape storage
- Proposed Computing Resources
- Off-line operations

3 CMS-HI Research Plan and HI Running
CMS-HI Research Program Goals (see Bolek's opening presentation):
- Focus is on the unique advantages of the CMS detector for detecting high p_T jets, Z0 bosons, photons, D and B mesons, and quarkonia
- First studies at low luminosity will concentrate on establishing the global properties of heavy ion central collisions at the LHC
- Later high luminosity runs with sophisticated high level triggering will allow for in-depth rare probe investigations of strongly interacting matter
Projected Heavy Ion Beam Schedule:
- Only the first HI run is known for certain, at the end of the 2010 pp run; this run will be for 2 weeks at low luminosity
- In 2011 the LHC will shut down for an extended period; conditioning work on the magnets is needed to achieve design beam energy
- HI running is assumed to resume in 2012 with higher luminosity; this luminosity will require the use of the HLT to select events
- HI runs will be for one month in each year of LHC operations

4 Assumed HI Running Schedule
Projected HI Luminosity and Data Acquisition for the LHC:

  CMS-HI Run              | Ave. L (cm^-2 s^-1) | Uptime (s) | Events taken     | Raw data (TB)
  First HI run (2010)     | ~5.0 x 10^...       | ~10^5      | ~2.0 x 10^7 (MB) | ...
  Second HI run           | ...                 | ...        | ... x 10^7 (HLT) | ...
  Third HI run (nominal)  | ...                 | ...        | ... x 10^7 (HLT) | 225

Caveats:
1) First year running may achieve greater luminosity and uptime, resulting in up to 50% more events taken than assumed here
2) First year running is in minimum bias mode, where the HLT will operate in a tag-and-pass mode
3) Third year running is the planned nominal year case, when the CMS DAQ writes at 225 MB/s for the planned 10^6 s of HI running (the arithmetic is sketched below)
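As a quick cross-check of caveat 3, the nominal-year raw data volume follows directly from the DAQ write rate and uptime quoted on this slide. The short sketch below (plain Python) uses only the 225 MB/s and 10^6 s figures; nothing else is assumed.

```python
# Nominal-year raw data volume from the DAQ writing rate and HI uptime
# quoted on this slide (225 MB/s for 10^6 s of heavy-ion running).
daq_rate_mb_per_s = 225.0      # CMS DAQ write rate during HI running (MB/s)
hi_uptime_s = 1.0e6            # planned nominal-year HI uptime (s)

raw_volume_tb = daq_rate_mb_per_s * hi_uptime_s / 1.0e6  # MB -> TB
print(f"Nominal-year raw data volume: {raw_volume_tb:.0f} TB")  # -> 225 TB
```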

5 CMS-HI Compute Model
CMS-HI Compute Model Guiding Principles:
- CMS-HI computing will follow, as much as feasible, the CMS TDR-2005 report
- CMS-HI software development is entirely within the CMSSW release schedule
- The size of the CMS-HI community and national policies mandate that we adapt the CMS multi-tiered computing grid structure to be optimum for our purposes
CMS-HI Compute Model Implementation:
- Raw data are processed at the T0/CAF with much the same workflows and staff as in HEP
- Data quality monitoring is performed in on-line and off-line modes
- A first pass reconstruction of the data is performed at the T0
- Raw data, AlCa output, and T0 reco output are transferred to a Tier1-like computer center to be built at Vanderbilt University's ACCRE facility
  - DOE policy will not allow the US sites to be official Tier1s with guaranteed performance
  - The ACCRE center will archive all the CMS-HI data and production files to tape
  - It is possible that the Tier1 center in France will receive ~10% of the CMS-HI data
- Reco passes will be done at the Vanderbilt site, and possibly in France
- The Vanderbilt center will be the main analysis Tier2 for US groups (DOE policy)
- Reconstruction output will be transferred to Tier2 centers in Russia, Brazil, and Turkey
- Simulation production will be done at a newly funded CMS-HI center at the MIT Tier2, and at other participating CMS Tier2 centers (Russia, France, ...)

6 Computing Requirements: Annual Compute Power Budget for All Tasks
Assumptions for Tasks Requiring Available Compute Power:
- Reconstruction passes: the CMS standard is 2; we will want more for the first data set
- Analysis passes are scaled to take 25% of the total annual compute power
- Simulation production/analysis is estimated to take one reconstruction pass
  - Simulation event samples at 10% of the real event totals (RHIC/post-RHIC experience)
  - Simulation event generation/analysis processing takes 10x that of a real event
  (these two scalings are checked in the sketch below)
Constraint: Accomplish All Data Processing in One Year
- Offline processing must keep up with the annual data streams
- The goal is to process all the data within one year after acquisition, on average
- It is essential to have completed analyses to guide future running conditions
Total Compute Power 12-Month Budget Allocation (after T0 reconstruction):
- A single, complete raw data reconstruction pass is targeted to take 4 months
- Analysis passes of the two reconstruction passes will take 4 months
- Simulation production/analysis will be done in 4 months
- In this model the total compute power requirement is verified by establishing that one reconstruction pass after the T0 will take no more than 4 months
- To establish this verification we need to determine the reconstruction times
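A minimal sketch of the budget bookkeeping above. It only re-derives two statements from this slide: that a simulation sample at 10% of the real event count, costing roughly 10x the CPU per event, amounts to about one reconstruction pass, and that the three 4-month blocks fill the 12-month window. The variable names are illustrative, not from any CMS-HI document.

```python
# Consistency check of the annual compute-power budget described on this slide.

# Simulation scaling: sample size is 10% of the real event total, but each
# simulated event costs ~10x the CPU of a real event.
sim_sample_fraction = 0.10
sim_cpu_per_event_factor = 10.0
sim_work_in_reco_passes = sim_sample_fraction * sim_cpu_per_event_factor
print(f"Simulation work ~= {sim_work_in_reco_passes:.1f} reconstruction pass(es)")

# 12-month allocation after the T0 pass: reco, analysis, simulation, 4 months each.
allocation_months = {"reconstruction pass": 4, "analysis passes": 4, "simulation": 4}
print(f"Total allocated: {sum(allocation_months.values())} months of a 12-month budget")
```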

7 Computing Requirements: Determining Compute Power for One Reconstruction Pass
- Simulated two sets of HI collision events using the HYDJET generator
  - Minimum bias event set, meaning all impact parameters: the first year running model
  - Central collision event set, meaning 0-4 fm impact parameter: corresponds to the HLT events which will begin with the second year of running
- Simulated events are reconstructed in the CMSSW_3_1_2 release, August 2009
- Results for the simulation reconstruction CPU times
  - The major component of the reconstruction time is the pixel proto-track formation
  - This component depends on the p_T minimum momentum cut parameter value
  - Heavy ion event reconstruction will require the use of a p_T cut > 75 MeV/c
  - It will not be possible to provide enough compute power to track every charged particle in central collision events down to 75 MeV/c; this is not a major drawback for us
  - At RHIC the PHENIX charged particle physics program cut is >= 0.2 GeV/c
  - Protons, kaons, and pions range out at ~0.5, ~0.3, and ~0.2 GeV/c, respectively
  - The reconstruction CPU times for central events are plotted on the next page
  - Minimum bias event reconstruction CPU times are approximately 3 times faster

8 Computing Requirements: Determining Compute Time for Central Event Reco
- Figure: distribution of reco CPU times for 200 central collision HI events using an 8.5 HS06 node (tracking p_T cut = 0.3 GeV/c)
- Figure: dependence of reco CPU times for central collision HI events on the tracking p_T cut; error bars are the RMS widths of the distributions, not the centroid uncertainties
- Major caveat: the particle multiplicity for HI collisions at the LHC is uncertain to +/- 50%, therefore all CPU time and event size predictions have the same uncertainty

9 Charged Particle Tracking in CMS
- The pp tracking algorithm builds proto-tracks in 3 pixel layers down to p_T = 75 MeV/c
- The HI tracking algorithm for proto-tracks in the pixel layers restricts the search window to correspond to a higher minimum p_T

10 Reconstruction Computing Requirements: Method of Verification for CMS-HI Compute Power
- Use the event reconstruction times previously quoted
- Calculate the number of days of processing time at the T0
  - Since reconstruction processing depends on the tracking p_T cut, alternative choices for the p_T cut will be compared
  - The T0 is (conservatively) fixed at 62,000 HEPSPEC06 units for five years
- Calculate the number of days for the T1 reconstruction pass
  - The calculation is done for each year starting in 2010
  - Only one T1 center, at Vanderbilt, is assumed for now; a contribution from the T1 center in France could occur eventually
  - The T1 center at Vanderbilt will be constructed during a 5-year period; equal installments of about 4,250 HEPSPEC06 units (~500 cores) are assumed
  - The same choices of the p_T cut as used for the T0 will be compared
- Compare the number of days to the 4-month goal for each year (the calculation is sketched below)
  - If the number of days is ~4 months, the compute model is self-consistent
  - If not, the choices are more compute power or less computing (a higher p_T tracking cut)
- Recall: all the predicted compute times have a +/-50% uncertainty until we see data
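A minimal sketch of the days-per-pass calculation described above. The 62,000 HS06 T0 capacity and the ~4,250 HS06/year Vanderbilt installments come from this slide; the per-event reconstruction cost in HS06-seconds and the event count are placeholders (the measured values are in the plots on page 8 and the table on page 4), so the printed numbers only illustrate the method.

```python
# Sketch of the verification: how many days does one full reconstruction pass
# take at a given HEPSPEC06 capacity?  Placeholder inputs are marked below.

SECONDS_PER_DAY = 86400.0
FOUR_MONTH_GOAL_DAYS = 4 * 30

def days_for_pass(n_events, hs06_sec_per_event, capacity_hs06):
    """Wall-clock days for one reconstruction pass at full, uninterrupted use."""
    total_work_hs06_sec = n_events * hs06_sec_per_event
    return total_work_hs06_sec / capacity_hs06 / SECONDS_PER_DAY

# Placeholder per-event cost for a given tracking pT cut (HS06*s/event);
# the real values depend on the pT cut and come from the CMSSW_3_1_2 timing study.
hs06_sec_per_event = 300.0
n_events = 2.0e7                 # e.g. the ~2 x 10^7 events of the first HI run

t0_capacity = 62_000.0           # T0 share fixed at 62,000 HS06 (this slide)
for year in range(1, 6):         # Vanderbilt ramps up in ~4,250 HS06 installments
    vu_capacity = 4_250.0 * year
    d_t0 = days_for_pass(n_events, hs06_sec_per_event, t0_capacity)
    d_vu = days_for_pass(n_events, hs06_sec_per_event, vu_capacity)
    ok = "OK" if d_vu <= FOUR_MONTH_GOAL_DAYS else "exceeds 4-month goal"
    print(f"Year {year}: T0 pass {d_t0:5.1f} d, Vanderbilt pass {d_vu:6.1f} d ({ok})")
```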

11 Reconstruction Computing Requirements: 5-Year Annual Allocations of CMS-HI Compute Power

12 Computing Requirements: Wide Area Networking
Nominal Year Running Specifications for HI:
- CMS DAQ planned to write at 225 MB/s for 10^6 s = 225 TB
- Calibration and fast reconstruction at the T0 = 75 TB
- Total nominal year data transfer from the T0 = 300 TB
- Note: the DAQ may be allowed to write faster eventually
Nominal Year Raw Data Transport Scenario:
- The T0 holds raw data briefly (days) to do calibrations and preliminary reco
- Data are written to the T0 tape archive, which is not designed for re-reads
- The above mandates a continuous transfer of data to a remote site
- 300 TB x 8 bits/byte / (30 days x 24 hours/day x 3600 sec/hour) = 0.93 Gbps DC rate (no outages), or ~10 TB/day (see the sketch below)
- Same rate calculation as for pp data, except pp runs for ~5 months
- A safety margin must be provided for outages, e.g. 4 Gbps burst
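The required average rate quoted above is straightforward to reproduce; the short sketch below uses only the 300 TB volume and the 30-day transfer window from this slide (decimal TB and 30 x 24 x 3600 s, following the slide's own arithmetic).

```python
# Average wide-area rate needed to move the nominal-year dataset off the T0
# within the one-month HI run window (figures from this slide).
volume_tb = 300.0                     # raw + calibration/fast-reco output (TB)
window_days = 30.0                    # transfer window (days)

window_s = window_days * 24 * 3600
rate_gbps = volume_tb * 8e12 / window_s / 1e9
print(f"Sustained rate: {rate_gbps:.2f} Gbps")                   # -> ~0.93 Gbps
print(f"Daily volume:   {volume_tb / window_days:.0f} TB/day")   # -> 10 TB/day
```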

13 Computing Requirements: Wide Area Networking
Nominal Year Raw Data Transport Scenario:
- Tier0 criteria mandate a continuous 1 Gbps transfer of data to a remote site
- A safety margin must be provided for outages, e.g. 4 Gbps burst
- The plan is to transfer these raw data to ACCRE for archiving to tape and for second pass reconstruction
- In August 2008, Vanderbilt spent $1M to provide a 10 Gbps link into ACCRE from Atlanta's Southern Crossroads hub, which provides service to STARLIGHT
Raw Data Transport Network Proposal for CMS-HI (under discussion with the US DOE-NP):
- CMS-HEP and ATLAS will use USLHCNet to FNAL and BNL
- FNAL estimates that CMS-HI traffic will be ~2% of all pp use of USLHCNet
- Propose to use USLHCNet at a modest pro-rated cost to DOE-NP, with HI raw data being transferred to STARLIGHT during one month only
- Have investigated Internet2 alternatives to the use of USLHCNet
  - CMS management strongly discourages use of a new data path for CMS-HI
  - A new data path from the T0 would have to be solely supported by the CMS-HI group
- Transfers from VU to overseas Tier2 sites would use Internet2 alternatives
- Waiting for the DOE review report from a presentation given on May 11

14 Computing Requirements: Local Disk Storage Considerations
Disk Storage Categories [need confirmation of numbers]:
- Raw data buffer for transfer from the T0, staging to the tape archive
  - Currently 2.5 MBytes/minimum bias event; 4.4 MBytes/HLT event
- Reconstruction (RECO) output
  - Currently 1.93 MBytes/minimum bias event; 7.68 MBytes/central event
- Analysis Object Data (AOD) output (input for the Physics Interest Groups)
  - The PAT output format may replace the AOD output format
  - Don't have current numbers for AOD or PAT output event sizes
  - Assume scale factors for AOD/PAT output and for PInG output
- Simulation production (MC) [need good numbers]
  - Currently ? MBytes/minimum bias event; ? MBytes/central event
(A sketch of the raw and RECO volumes implied by these per-event sizes follows this slide.)
Disk Storage Acquisition Timelines:
- Model dependent according to luminosity growth
- Need to minimize tape re-reads; the most "popular" files are kept on disk
- Model dependent according to the pace of physics publication: deadlines are set by which older data are removed (RHIC experience)
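As an illustration of how the per-event sizes above translate into disk, the sketch below applies them to the ~2 x 10^7 minimum bias events expected from the first HI run (page 4); the raw volume comes out near the ~50 TB quoted on page 15. The HLT-era line uses a purely illustrative event count, since the later-year totals did not survive in this transcription.

```python
# Disk volumes implied by the per-event sizes on this slide.
MB_PER_TB = 1.0e6

# First HI run: minimum bias events (count from the page 4 schedule).
n_min_bias = 2.0e7
raw_mb_per_mb_event, reco_mb_per_mb_event = 2.5, 1.93
print(f"2010 raw buffer: {n_min_bias * raw_mb_per_mb_event / MB_PER_TB:.0f} TB")   # ~50 TB
print(f"2010 RECO:       {n_min_bias * reco_mb_per_mb_event / MB_PER_TB:.0f} TB")  # ~39 TB

# HLT-era year: illustrative event count only (not from the talk).
n_hlt = 1.0e7
raw_mb_per_hlt_event, reco_mb_per_central_event = 4.4, 7.68
print(f"HLT-year raw (illustrative):  {n_hlt * raw_mb_per_hlt_event / MB_PER_TB:.0f} TB")
print(f"HLT-year RECO (illustrative): {n_hlt * reco_mb_per_central_event / MB_PER_TB:.0f} TB")
```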

15 Computing Requirements: Local Disk Storage Annual Allocations
(Table has to be re-worked according to new event size information)
- Table: Annual Disk Storage Allocated to Real Data Reconstruction and Analysis, by FY, with columns Events (Millions), Raw Data Buffer, RECO, AOD, PInG, and Total
Assumptions for the Allocation Growth of Real Data Disk Needs:
1) Two years will be devoted to processing the first year's ~50 TB of data
2) Event growth in subsequent years as per the model on page 4
3) Relative sizes of the RECO, AOD, and PInG categories according to present experience
4) Relative amounts of CPU/disk/tape purchased are constrained by the annual funding
A possible disk contribution from the Tier1 site in France (or non-US Tier2 sites) is not included

16 Computing Requirements: Local Disk Storage Annual Allocations
(Table has to be re-worked according to new event size information)
- Table: Annual Disk Storage Allocated to Simulation Production and Analysis, by FY, with columns Events (Millions), Evt. Size, Raw, RECO, AOD, PInG, and Total
Assumptions for the Allocation Growth of Simulation Disk Needs:
1) The GEANT (MC) event size is taken to be five times a real event size
2) Out-year RECO/AOD/PInG totals include retention of prior year results for ongoing analyses
3) Total real data + MC storage after 5 years = 680 TB in the US
4) These totals do not reflect pledges of disk space from Tier2 institutions outside the US

17 Computing Requirements: Tape Storage Allocations
(Table has to be re-worked according to new event size information)
Tape Storage Categories and Annual Allocations:
- First priority is securely archiving the raw data for analysis
- Tape must be on hand at the beginning of the run, i.e. as early as August 2010, so the tape must be (over-)bought in the prior months
- Tape storage largely follows the annual real data production statistics
- Some, but not all, MC/RECO/AOD/PInG annual outputs will be archived to tape
- Access to tape will be limited to the archiving and production teams
- Guidance from experience at RHIC's HPSS: unfettered, unsynchronized read requests to the tape drives lead to strangulation
- Table: annual tape allocations by FY, with columns Real Evts. (Millions), Raw, RECO, AOD, PInG, Real, MC, Real+MC, and Tot Δ (TB)

18 Summary of Proposed Compute Resources for CMS-HI
United States (DOE-NP funding proposal):
- A Tier1/Tier2-like center for raw data processing and analysis at Vanderbilt
  - This Tier1/Tier2-like center will have ~21,000 HEPSPEC06 after 5 years
  - Raw data transport to the Vanderbilt center will use USLHCNet during one month
  - All other file transfers to the Vanderbilt center will use non-USLHCNet resources (proposed to the DOE-NP, not yet final); the Vanderbilt center has a 10 Gbps gateway
- A new HI simulation center will be added to the Tier2 center at MIT, ~6000 HS06; this new simulation center has been approved in principle by the DOE-NP
- There will be 680 TB of disk and 1.5 PB (?) of tape after 5 years at Vanderbilt
France: possibly a 10% Tier1 contribution, with a corresponding Tier2
Russia: preliminary numbers have already been provided
Brazil: preliminary numbers have already been provided
Turkey: preliminary numbers have already been provided
Korea: (?) no preliminary numbers have been provided

19 OFFLINE Operations at Vanderbilt T1/T2: Projected FTE Staffing by the Local Group
Proof of Principle from the RHIC Run7 Project Done for PHENIX at Vanderbilt:
- 30 TBytes of raw data were shipped by GridFTP from RHIC to ACCRE in 6 weeks, comparable to the ~30-60 TBytes we expect from the LHC in 2010
- Raw data were processed immediately on 200 CPUs as calibrations were received
- Reconstructed output of ~20 TBytes was shipped back to BNL/RACF for analysis by PHENIX users; there was no local tape archiving (data archived at BNL's HPSS)
- The entire process was automated and web-viewable using ~100 homemade PERL scripts
- PHENIX-specific local manpower consisted of one faculty member working at 0.8 FTE, one post-doc at 0.4 FTE, and three graduate students at 0.2 FTE each
(The sustained transfer rate this demonstrates is sketched below.)
Intended Mode of Operation for CMS-HI:
- Will install already developed CMS data processing software tools
- Much of CMSSW is working at ACCRE; good cooperation with the VU CMS-HEP group
- Extensive domestic/international networking testing is already in progress
- Dedicating a similarly sized local manpower effort to the one that worked for the PHENIX Run7 project, plus overseas persons in CMS-HI for second/third shift monitoring tasks
- Will build a ~30 m^2 CMS center in our Physics building (~$20K equipment cost) to do off-line monitoring and DQM supervision right near our own faculty offices
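For context on the proof-of-principle numbers above, the sketch below converts the 30 TB in 6 weeks GridFTP transfer into an average sustained rate and compares it with the ~0.93 Gbps needed for the nominal LHC year (page 12). Both inputs come from the talk; note the Run7 transfer was spread over 6 weeks rather than a one-month window, so the comparison is rough.

```python
# Average sustained rate achieved in the PHENIX Run7 proof-of-principle
# transfer (30 TB from RHIC to ACCRE in 6 weeks), compared with the
# ~0.93 Gbps needed for the nominal LHC heavy-ion year (page 12).
volume_tb = 30.0
duration_s = 6 * 7 * 86400            # 6 weeks in seconds

run7_gbps = volume_tb * 8e12 / duration_s / 1e9
print(f"Run7 average rate: {run7_gbps:.3f} Gbps (~{volume_tb / 42:.1f} TB/day)")
print("Nominal LHC year requirement: ~0.93 Gbps (~10 TB/day)")
```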

20 OFFLINE Operations: Performance Evaluations and Oversight
Existing Local Monitoring and Error Handling Tools:
- ACCRE already has tools which provide a good statistical analysis of operational readiness and system utilizations
- A 24/7 trouble ticket report system was highly effective in the PHENIX Run7 project
Internal and External Operations Evaluations:
- The OSG and the CMS automated testing software are already reporting daily network reliability metrics to the ACCRE facility
- Although ACCRE will not be an official Tier1 site in CMS, we will still strive to have its network performance as good as any Tier1 in CMS
- CMS-HI will establish an internal oversight group to advise the local Vanderbilt group on the performance of the CMS-HI compute center
  - The oversight group will give annual guidance for new hardware purchases
- The progress and performance of the CMS-HI compute center will be reviewed at the quarterly CMS Compute Resources Board meetings, like all Tier1/Tier2 facilities

21 Backup: Various Cost Tables

22 Computing Requirements: Local Disk Storage Allocation Risk Analysis
Storage Allocation Comparisons and Risk Analysis:
- CMS-HI proposes to have 680 TB after 5 years, when steady state data production is 300 TB per year (or possibly even greater)
- CMS-HEP TDR-2005 proposed 15.6 PB of combined Tier1 + Tier2 disk
- The ratio is 4.4% for disk storage, as compared with 10% for CPUs (see the sketch below)
- PHENIX had 800 TB of disk storage at RCF in 2007, when 600 TB of raw data was written, six years after RHIC had started
  - Allocating disk space among users and production was painful in PHENIX
  - There was contention for the last few TB of disk space for analysis purposes
  - PHENIX has access to significant disk space at remote sites (CCJ, CCF, Vanderbilt, ...) for real data production and simulations
Mitigation of Disk Storage Allocation Risk:
- May have access to other disk resources (REDDNet from NSF, 10s of TB)
- Could decide to change the balance of CPUs/Disk/Tape in the outer years
- Disk storage at overseas CMS-HI sites has been pledged (100s of TB)
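A one-line check of the disk ratio quoted above, using the 680 TB CMS-HI figure and the 15.6 PB combined Tier1+Tier2 disk from the CMS TDR-2005 as stated on this slide; the CPU comparison is not recomputed here because the corresponding CMS-wide CPU total is not given in this talk.

```python
# Disk-storage ratio quoted on this slide: CMS-HI proposed disk after 5 years
# versus the combined Tier1+Tier2 disk in the CMS TDR-2005.
cms_hi_disk_tb = 680.0
cms_tdr_disk_tb = 15.6e3   # 15.6 PB

print(f"CMS-HI / CMS disk ratio: {100 * cms_hi_disk_tb / cms_tdr_disk_tb:.1f}%")  # ~4.4%
```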

23 Computing Requirements: Tape Storage Decision: FNAL or Vanderbilt?
Quote for Tape Archive Services at FNAL (5-year cumulative costs):
- Tape media = $167,990
- Additional tape drives = $32,000
- Tape robot maintenance = $135,000
- Added FNAL staff (0.2 FTE) = $200,000
- Total = $534,990
Quote for Tape Archive Services at Vanderbilt (5-year costs):
- Will use the existing tape robot system (LTO4 tapes) for the first two years, and then move to a larger tape robot system (LTO6 tapes) for the next three years
- Total hardware cost including license, media, and maintenance = $249,304
- ACCRE staff requirement at 0.5 FTE/year, subsidized at 58% by Vanderbilt
- Net staff cost to the DOE = $134,613
- Total = $383,917
Advantages to CMS-HI of a Tape Archive at Vanderbilt (the cost comparison is summed in the sketch below):
- Simpler operations: no synchronization needed between Vanderbilt and FNAL
- The subsidy of staff cost by Vanderbilt results in overall less expense to the DOE
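The two quotes above are simple sums; the sketch below reproduces the 5-year totals and the resulting saving from the itemized figures on this slide.

```python
# Five-year tape-archive cost comparison from this slide (all figures in USD).
fnal = {
    "tape media": 167_990,
    "additional tape drives": 32_000,
    "tape robot maintenance": 135_000,
    "added FNAL staff (0.2 FTE)": 200_000,
}
vanderbilt = {
    "hardware (license, media, maintenance)": 249_304,
    "net staff cost to DOE (0.5 FTE/yr, 58% subsidized)": 134_613,
}

fnal_total = sum(fnal.values())
vu_total = sum(vanderbilt.values())
print(f"FNAL total:           ${fnal_total:,}")              # $534,990
print(f"Vanderbilt total:     ${vu_total:,}")                # $383,917
print(f"Saving at Vanderbilt: ${fnal_total - vu_total:,}")   # $151,073
```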

24 Capital Cost and Personnel Summary: Real Data Processing and Tape Archiving Center at Vanderbilt

  Category          | FY10     | FY11     | FY12     | FY13     | FY14     | Total
  CPU cost          | $151,800 | $125,400 | $120,000 | $81,000  | $64,800  | $543,000
  Disk cost         | $26,250  | $36,000  | $30,000  | $24,000  | $15,000  | $131,250
  Tape cost         | $71,150  | $22,535  | $58,449  | $48,960  | $48,210  | $249,304
  Total hardware    | $249,200 | $183,935 | $208,449 | $153,960 | $128,010 | $923,554
  Staff (DOE cost)  | $89,650  | $155,394 | $161,609 | $235,303 | $244,715 | $886,671
  Total cost        | $338,850 | $339,329 | $370,059 | $389,263 | $372,725 | $1,810,226

The slide also tabulates the annual and total quantities of CPU cores, disk, and tape purchased.

25 Capital Cost and Personnel Summary: Simulation Production and Analysis Center at MIT

  Category          | FY10    | FY11    | FY12    | FY13    | FY14    | Total
  CPU cost          | $52,440 | $41,800 | $40,000 | $27,000 | $21,600 | $182,000
  Disk cost         | $8,750  | $12,000 | $10,000 | $8,000  | $5,000  | $43,750
  Total hardware    | $59,350 | $53,800 | $50,000 | $35,000 | $26,600 | $224,750
  FTE               | $37,040 | $37,280 | $39,400 | $40,600 | $41,800 | $197,120
  Total cost        | $96,390 | $92,080 | $89,400 | $75,600 | $68,400 | $421,870

The slide also tabulates the annual and total quantities of CPU cores and disk purchased.

26 Capital Cost and Personnel Summary: Cost Model Assumptions for the Vanderbilt Site

  Category                      | FY10     | FY11     | FY12     | FY13     | FY14     | Total
  Hardware: CPU core with 4 GB  | $345     | $275     | $250     | $225     | $225     | -
  Hardware: Disk, per TB        | $350     | $300     | $250     | $200     | $200     | -
  Staffing cost, by DOE         | $89,650  | $155,394 | $161,609 | $235,303 | $244,715 | $886,671
  Staffing cost, by VU          | $268,950 | $217,551 | $226,253 | $168,074 | $174,797 | $1,055,625

The slide also tabulates the annual staffing levels (FTE) provided by the DOE and by VU.

27 Capital Cost and Personnel Summary: Cumulative Cost of the US Proposal

  Category                        | FY10     | FY11     | FY12     | FY13     | FY14     | Total
  Real data center at Vanderbilt  | $338,850 | $339,329 | $370,059 | $389,263 | $372,725 | $1,810,226
  Simulation center at MIT        | $96,390  | $92,080  | $89,400  | $75,600  | $68,400  | $421,870
  Vanderbilt + MIT Total          | $435,240 | $431,409 | $459,459 | $464,863 | $441,125 | $2,232,096
