LHC Computing Models

1 LHC Computing Models. Commissione I, 31/1/2005. Francesco Forti, Pisa. Referee group: Forti (chair), Belforte, Menasce, Simone, Taiuti, Ferrari, Morandin, Zoccoli.

2 Outline. Comparative analysis of the computing models (little about Alice); referee comments; roadmap: what's next. Disclaimer: it is hard to digest and summarize the available information; advance apologies for errors and omissions.

3 A Little Perspective. In 2001 the Hoffmann Review was conducted to quantify the resources needed for LHC computing (documented in CERN/LHCC/ ). As a result, the LHC Computing Grid project was launched to start building up the needed capacity and competence and to provide a prototype for the experiments to use. In 2004 the experiments ran Data Challenges to verify their ability to simulate, process and analyze their data. In Dec 2004 the Computing Model documents were submitted to the LHCC, which reviewed them on Jan 17-18, 2005. The Computing TDRs and the LCG TDR are expected this spring/summer.

4 Running assumptions. Luminosity 0.5x10^33 cm^-2 s^-1 in 2007, then 2x10^33 cm^-2 s^-1, then 1x10^34 cm^-2 s^-1, but the trigger rate is independent of luminosity. 7-month pp run = 10^7 s (real time 1.8x10^7 s); 1-month AA run = 10^6 s (real time 2.6x10^6 s); 4 months shutdown.
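As a quick cross-check of these assumptions, here is a minimal Python sketch (using only the run lengths quoted on this slide) that converts the live and real times into data-taking duty factors.

```python
# Cross-check of the running assumptions above (values from this slide).
PP_LIVE_S = 1.0e7   # pp live time per year [s]
PP_REAL_S = 1.8e7   # pp real (wall-clock) time per year [s]
AA_LIVE_S = 1.0e6   # heavy-ion live time per year [s]
AA_REAL_S = 2.6e6   # heavy-ion real time per year [s]

pp_duty = PP_LIVE_S / PP_REAL_S   # ~0.56
aa_duty = AA_LIVE_S / AA_REAL_S   # ~0.38

print(f"pp duty factor: {pp_duty:.2f}, AA duty factor: {aa_duty:.2f}")
```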

5 Data Formats. Names differ, but the concepts are similar. RAW data. Reconstructed event (ESD, RECO, DST): tracks with associated hits, calorimetry objects, missing energy, trigger at all levels; can be used to refit, but not to do pattern recognition. Analysis Object Data (AOD, rdst): tracks, particles, vertices, trigger; main source for physics analysis. TAG: number of vertices, tracks of various types, trigger, etc.; enough information to select events, but otherwise very compact.
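Purely as an illustration of the tiering, the sketch below models the formats as progressively smaller views of the same event; the class and field names are hypothetical, not taken from any experiment's framework.

```python
from dataclasses import dataclass
from typing import List

# Hypothetical, simplified sketch of the data tiers described above.

@dataclass
class Track:
    params: List[float]      # fitted track parameters
    hit_ids: List[int]       # associated hits (kept only down to the ESD level)

@dataclass
class EsdEvent:              # ESD/RECO/DST: can refit, cannot redo pattern recognition
    tracks: List[Track]
    calo_objects: List[float]
    missing_energy: float
    trigger_bits: int

@dataclass
class AodEvent:              # AOD/rdst: compact physics objects, main analysis input
    tracks: List[Track]
    vertices: List[List[float]]
    trigger_bits: int

@dataclass
class TagRecord:             # TAG: just enough to select events
    n_vertices: int
    n_tracks: int
    trigger_bits: int
    event_ref: int           # pointer back to the AOD/ESD/RAW event
```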

6 General strategy. Similar general strategy for the models. Tier 0 at CERN: 1st pass processing in quasi-real time after rapid calibration; RAW data storage. Tier 1s (6 for Alice, CMS, LHCb; 10 for Atlas): reprocessing; centrally organized analysis activities; copy of RAW data, some ESD, all AOD, some SIMU. Tier 2s (14-30): user analysis (chaotic analysis); simulation; some AOD depending on user needs.

7 Event sizes. Parameter table, one set of values per experiment (ALICE p-p, ALICE Pb-Pb, ATLAS, CMS, LHCb); numeric values not reproduced here. Rows: number of assumed Tier1s not at CERN; number of assumed Tier2s not at CERN; event recording rate (Hz); RAW event size (MB); REC/ESD event size (MB); AOD event size (kB); TAG event size (kB, N/A for some); running time per year (Ms); events/year (Giga); storage for real data (PB); SIM RAW event size (MB); SIM REC/ESD event size (MB); simulated events/year (Giga).
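To show how such a table feeds into the storage numbers, here is a hedged sketch of the standard bookkeeping; the inputs are illustrative placeholders, not the values from the original table.

```python
# Minimal storage-estimate sketch; inputs are illustrative placeholders only.
def yearly_raw_storage_pb(rate_hz: float, raw_event_mb: float, live_seconds: float) -> float:
    """RAW volume per year in PB = recording rate x event size x live time."""
    events_per_year = rate_hz * live_seconds
    return events_per_year * raw_event_mb / 1e9   # MB -> PB

# Example with placeholder inputs: 200 Hz, 1.5 MB/event, 1e7 s of pp running.
print(f"{yearly_raw_storage_pb(200, 1.5, 1e7):.1f} PB of RAW per year")
```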

8 First Pass Reconstruction. Assumed to be in real time: CPU power is calculated to process the data in 10^7 s. Fast calibration prior to reconstruction. Disk buffer at T0 to hold events before reconstruction: Atlas 5 days; CMS 20 days; LHCb: ?. The slide table lists the time to reconstruct one event and the time to simulate one event (kSI2k s) for ALICE p-p and Pb-Pb, ATLAS, CMS and LHCb; values not reproduced here.
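The sizing rule implied here is that the T0 farm must reconstruct events at least as fast as they are recorded. A minimal sketch, with placeholder inputs rather than the experiments' actual figures:

```python
# Tier-0 CPU sizing sketch: keep up with the recording rate in quasi-real time.
# Inputs are illustrative placeholders, not the experiments' actual numbers.
def t0_cpu_msi2k(rate_hz: float, ksi2k_sec_per_event: float) -> float:
    """Required capacity in MSI2k = recording rate x per-event reconstruction cost."""
    return rate_hz * ksi2k_sec_per_event / 1e3   # kSI2k -> MSI2k

# Example with placeholder inputs: 200 Hz and 15 kSI2k.s per event.
print(f"{t0_cpu_msi2k(200, 15):.1f} MSI2k needed at the T0")
```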

9 Streaming. All experiments foresee RAW data streaming, but with different approaches. CMS: O(50) streams based on trigger path; classification is immutable, defined by L1+HLT. Atlas: 4 streams based on event types: primary physics, express line, calibration, debugging and diagnostic. LHCb: >4 streams based on trigger category (B-exclusive, di-muon, D* sample, B-inclusive); streams are not created in the first pass, but during the stripping process. It is not clear what the best/right solution is; it is probably bound to evolve in time.
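As an illustration of the CMS-style approach, where the L1+HLT trigger path fixes the stream once and for all, here is a hypothetical sketch; the path and stream names are invented for the example.

```python
# Hypothetical sketch of trigger-path-based streaming (CMS-style: the L1+HLT
# classification is immutable). Path and stream names are invented.
STREAM_OF_PATH = {
    "HLT_Mu20":     "muon",
    "HLT_Ele25":    "electron",
    "HLT_Jet100":   "jets",
    "HLT_ZeroBias": "calibration",
}

def assign_streams(fired_paths):
    """Return the set of output streams an event is written to."""
    streams = {STREAM_OF_PATH[p] for p in fired_paths if p in STREAM_OF_PATH}
    return streams or {"debug"}   # unknown paths go to a diagnostic stream

print(assign_streams(["HLT_Mu20", "HLT_Jet100"]))   # {'muon', 'jets'} (order may vary)
```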

10 Data Storage and Distribution. RAW data and the output of the 1st reconstruction are stored on tape at the T0; a second copy of RAW is shared among T1s. CMS and LHCb distribute reconstructed data together (zipped) with the RAW data: no navigation between files is needed to access RAW, but there is a space penalty, especially if RAW turns out to be larger than expected, and storing multiple versions of reconstructed data can become inefficient. Atlas distributes RAW immediately, before reconstruction, so T1s could do processing in case of a T0 backlog.

11 Data Storage and Distribution. The number of copies of reco data varies: Atlas assumes the ESD have 2 copies at T1s; CMS assumes a 10% duplication among T1s for optimization reasons. Each T1 is responsible for permanent archival of its share of RAW and reconstructed data; when and how to throw away old versions of reconstructed data is unclear. All AOD are distributed to all T1s; AOD are the primary source for data analysis.

12 Calibration. Initial calibration is performed at the T0 on a subset of the events and is then used in the first reconstruction. Further calibration and alignment are performed offline at the T1s; the results are inserted into the conditions database and distributed. Plans are still very vague; Atlas is maybe a bit more defined.

13 Reprocessing. Data need to be reprocessed several times because of improved software and more accurate calibration and alignment. Reprocessing is mainly at T1 centers; LHCb is planning on using the T0 during the shutdown, though it is not obvious it will be available. Number of passes per year: Alice 3, Atlas 2, CMS 2, LHCb 4.

14 Analysis. The analysis process is divided into two classes. Organized and scheduled (by working groups): often requires large data samples; performed at T1s. User-initiated (chaotic): normally on small, selected samples; largely unscheduled, with huge peaks; mainly performed at T2s; quantitatively very uncertain.

15 Analysis data source. Steady-state analysis will use mainly AOD-style data, but initially access to RAW data in the analysis phase may be needed. CMS and LHCb emphasize this need by storing raw+reco (or raw+rdst) data together, in streams defined by physics channel. Atlas relies on Event Directories formed by querying the TAG database to locate the events in the ESD and in the RAW data files.
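A sketch of the Atlas-style flow described above, querying a compact TAG store to build an event directory of pointers into the ESD/RAW files; the schema, selection and file names are hypothetical.

```python
import sqlite3

# Hypothetical TAG-query -> event-directory sketch; schema and cuts are invented.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE tag
                (run INTEGER, event INTEGER, n_muons INTEGER,
                 missing_et REAL, esd_file TEXT, raw_file TEXT)""")
conn.execute("INSERT INTO tag VALUES (1234, 42, 2, 55.0, 'esd_001.pool', 'raw_001.data')")
conn.execute("INSERT INTO tag VALUES (1234, 57, 0, 12.0, 'esd_001.pool', 'raw_001.data')")

# The "event directory": the (file, run, event) triplets passing a TAG selection.
event_directory = conn.execute(
    "SELECT esd_file, run, event FROM tag WHERE n_muons >= 2 AND missing_et > 40"
).fetchall()
print(event_directory)   # [('esd_001.pool', 1234, 42)]
```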

16 Simulation. Simulation is performed at T2 centers, dynamically adapting the share of CPU with analysis; simulation data are stored at the corresponding T1. The amount of simulation planned varies: the slide table compares events/year, simulated events/year and the SIM/data ratio per experiment (values not reproduced here). It is dominated by the available CPU power; 100% may be too much, 10% may be too little.

17 GRID. The level of reliance on and use of GRID middleware is different for the 4 experiments. Alice: heavily relies on advanced, not yet available, Grid functionality to store and retrieve data and to distribute CPU load among T1s and T2s. Atlas: the Grid is built into the project, but basically assuming stability of what is available now. CMS: designed to work without the Grid, but will make use of it if available. LHCb: flexibility to use the Grid, but no strict dependence on it. (A table counted the number of times the word "grid" appears in each computing model document, all parts included; the counts are not reproduced here.)

18 @CERN. Computing at CERN beyond the T0. Atlas: CERN Analysis Facility, but only for CERN-based people, not for the collaboration. CMS: T1 and T2 at CERN, but the T1 has no tape since the T0 does the storing. LHCb: unclear; explicit plan to use the event filter farm during the shutdown periods. Alice: doesn't need anything at CERN, the Grid will supply the computing power.

19 Overall numbers. Table comparing the 2005 plan (2008, 20% sim, standard year) with the 2001 Review (HR) for ATLAS, CMS, LHCb and Alice. Rows: Tier 0 CPU (MSI2k) vs CPU at CERN (MSI2k); Tier 0 disk (PB) vs disk at CERN (PB); Tier 0 tape (PB) vs tape at CERN (PB); Tier 1 CPU (MSI2k), disk (PB) and tape (PB); Tier 2 CPU (MSI2k), disk (PB) and tape (PB); total CPU (MSI2k), total disk (PB), total tape (PB); CPU, disk and tape increase now/HR; WAN in and out of Tier 0 (Gb/s); WAN in per Tier 1 (5.7 Gb/s) and WAN out per Tier 1 (3.5 Gb/s). Most numeric values are not reproduced here.

20 Referee comments. Sum of comments from the LHCC review and the Italian referees. We still need to interact with the experiments: we will compile a list of questions after today's presentations, and we plan to hold four phone meetings next week to discuss the answers. Some are just things the experiments know they need to do, stated here to reinforce them.

21 LHCC Overall Comments. The committee was very impressed with the quality of the work that was presented. In some cases, the computing models have evolved significantly from the time of the Hoffmann review. In general there is a large increase in the amount of disk space required. There is also an increase in overall CPU power wrt the Hoffmann Review. The increase is primarily at Tier-1's and Tier-2's. Also the number of Tier-1 and Tier-2 centers has increased. The experiences from the recent data challenges have provided a foundation for testing the validity of the computing models. The tests are at this moment incomplete. The upcoming data challenges and service challenges are essential to test key features such as data analysis and network reliability.

22 LHCC Overall Comments II. The committee was concerned about the dependence on precise scheduling required by some of the computing models. The data analysis models in all 4 experiments are essentially untested; the risk is that distributed user analysis is not achievable on a large scale. Calibration schemes and use of conditions data have not been tested; these are expected to have an impact of only about 10% in resources but may impact the timing and scheduling. The reliance on the complete functionality of GRID tools varies from one experiment to another. There is some risk that disk/CPU resource requirements will increase if key GRID functionality is not used. There is also a risk that additional manpower will be required for development, operations and support.

23 LHCC Overall Comments III. The contingency factors on processing times and RAW data size vary among the experiments. The committee did not review the manpower required to operate these facilities. The committee did not review the costs. Will this be done? It would be helpful if the costing could be somewhat standardized across the experiments before it is presented to the funding agencies. The committee listened to a presentation on networks for the LHC. A comprehensive analysis of the peak network demands for the 4 experiments combined is recommended (see below).

24 LHCC Recommendations. The committee recommends that the average and the peak computing requirements of the 4 experiments be studied in more detail. A month-by-month analysis of the CPU, disk, tape access and network needs for all 4 experiments is required. A clear statement on computing resources required to support HI running in CMS and ATLAS is also required. Can the peak demands during the shutdown period be reduced/smoothed? Plans for distributed analysis during the initial period should be worked out. The dependence of the computing model on raw event size, reconstruction time, etc. should be addressed for each experiment. Details of the ramp-up (to 2008) should be determined and a plan for the evolution of required resources should be worked out. A complete accounting of the offline computing resources required at CERN is needed (through 2010). In addition to production demands, the resource planning for calibration, monitoring, analysis and code testing and development should be included, even though the resources may seem small. The committee supports the requests for Tier-1/Tier-2 functionality at CERN. This planning should be refined for the 4 experiments.

25 LHCC Conclusions. Aside from issues of peak capacity, the committee is reasonably certain that the computing models presented are robust enough to handle the demands of LHC production computing during early running (through 2010). There is a concern about the validity of the data analysis components of the models.

26 Additional comments from INFN Referees. Basic parameters such as event size and reconstruction CPU time have very large uncertainties: study the dependence of the computing models on these key parameters and determine what the brick-wall limits are. Data formats are not well defined (some are better than others): need to verify that the proposed formats are good for real-life analysis. For example: can you do event display on AODs? Can you run an alignment systematics study on ESDs?

27 Additional Comments II. Many more people need to try to do analysis with the existing software and provide feedback. Calibration and conditions database access have not been sufficiently defined and can represent bottlenecks. No cost-benefit analysis has been performed so far: basically the numbers are what the experiments would like to have, and no optimization has been done yet on the basis of the available resources; in particular: amount of disk buffers, duplication of data, reuse of tapes.

28 Additional Comments III. Are the models flexible enough? Given the large unknowns, will the models be able to cope with large changes in the parameters? For example: assuming all reconstructed data is on disk may drive the experiments (and the funding agencies) into a cost brick-wall if the size is larger than expected, or effectively limit the data acquisition rate; the evolution after 2008 is not fully charted and understood. Is there enough flexibility to cope with a resource-limited world? Are the models too flexible? Assuming the Grid will optimize things for you (Alice) may be too optimistic. Buffers and safety factors aimed at flexibility are sometimes large and not fully justified.

29 Additional Comments IV. The bandwidth is crucial: the peaks in the T0-to-T1 and T1-to-T1 flows need to be understood. The required bandwidth has not been fully evaluated, especially at the lower levels and for reverse flows between T1s and from T2s (e.g. MC data produced at T2) and for data incoming at CERN (not the T0) from reprocessing and MC. Need to compile tables with the same safety factors assumed.
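As a hedged illustration of the kind of table being asked for, the snippet below converts a daily export volume into an average link rate; the inputs are placeholders, not numbers from the computing models.

```python
# Convert a daily data-export volume into an average network rate.
# Inputs are illustrative placeholders, not numbers from the computing models.
def average_rate_gbps(terabytes_per_day: float, safety_factor: float = 1.0) -> float:
    """Average rate in Gb/s for a given daily volume, with an optional safety factor."""
    bits_per_day = terabytes_per_day * 1e12 * 8 * safety_factor
    return bits_per_day / 86400 / 1e9

# Example with placeholder inputs: 50 TB/day exported with a safety factor of 2.
print(f"{average_rate_gbps(50, safety_factor=2):.2f} Gb/s average")
```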

30 Specific comments on experiments. Coming from the LHCC review; not fully digested and not yet integrated by the INFN referees. Useful to collect them here for future reference; some duplication is unavoidable. Your patience is appreciated.

31 ATLAS I. Impressed by the overall level of thought and planning which has gone into the overall computing model so far; in general fairly specific and detailed. Welcome thought being given to the process of, and support for, detector calibration and the conditions database; this needs more work, looking forward to the DC3 and LCG Service Challenge results. An accurate, rapid calibration on 10% of the data is crucial for the model.

32 ATLAS II. Concern about the evidence basis and experience with several aspects of the computing model: the large reduction factor assumed in event size and processing time is not really justified; data size and processing time variations with background and increasing luminosity lead to large (acknowledged but somewhat hidden) uncertainties in the estimates. Data size and number of copies, particularly for the ESD, have a significant impact on the total costs; we note that these are larger for Atlas than for other experiments. Also a very large number of copies of the AOD; these depend critically on analysis patterns which are poorly understood at this time and require a fair amount of resources.

33 ATLAS III. Concern about the lack of practical experience with the distributed analysis model, especially if AOD are not the main data source at the beginning; resources are needed to develop the managerial software required to handle the distributed environment (based on Grid MW), for example if Tier1s need to help in case of a backlog at Tier0. Need to include HI physics in the planning. Availability of computing resources during the shutdown should not be taken for granted. Real-time data processing introduces a factor 2 extra resource requirement for reconstruction; it is not clear that this assumption is justified/valid cf. the ability to keep up with data taking on average. The ATLAS TAG model is yet to be verified in practice; we are unclear exactly how it will work. It is the primary interface for physicists, and iterations are needed to get it right.

34 ATLAS IV. Monte Carlo: agree that the assumption of 20% fully reconstructed Monte Carlo is a risk and a larger number would be better/safer. Trigger rates: we note that the total cost of computing scales with trigger rates; this is clearly a knob that can be turned. The CERN Analysis Facility is more a mixture of a Tier-1 and a Tier-2; no doubt Atlas needs computing at CERN for calibration and analysis.

35 CMS I. Uncertainty of a factor ~2 on many numbers taken as input to the model, cf. the ATLAS assumptions. Event size: 0.3 MB MC inflated to 1.5 MB, a factor 2.5 for conservative thresholds/zero suppression at startup; the safety factor of 2 in the Tier-0 RECO resources should be made explicit. Should we try to use the same factor for all four experiments? Fully simulated Monte Carlo: 100% of the real data rate seems like a reasonable goal, but so would 50% (Atlas assumes 20%). Heavy ion: need a factor of 10 improvement in RECO speed wrt current performance; the ratio of CPU to IO means that this is possibly best done at Tier-2 sites!

36 CMS II. Use of "CMS" Tier-0 resources during the 4-month shutdown? Maybe needed for CMS and/or ALICE heavy-ion RECO; re-RECO of CMS pp data on the Tier-0 may not be affordable? We find clear justification for a sizable CERN-based analysis facility, especially for detector-related (time-critical) activities: monitoring, calibration, alignment. Is the distinction between Tier-1 and Tier-2 at CERN useful? Cf. ATLAS.

37 CMS III. CMS attempt to minimize reliance on some of the currently least mature aspects of the Grid, e.g. global data catalogues, resource brokers, distributed analysis. Streaming by RECO physics objects, with specific streams placed at specific Tier-1 sites; RECO+RAW (FEVT, full event) is the basic format for the first year or two. A conservative approach, but in our view not unreasonably so. Some potential concerns: it is more difficult to balance load across all Tier-1s; politics: which Tier-1s get the most sexy streams? Analysis at Tier-1 is restricted largely to organized production activities (AOD production, dataset skimming, calibration/alignment jobs?), except perhaps for one or two "special" T1s.

38 CMS IV. A specific baseline was presented, but a lot of thought has gone into considering alternatives, and the model has some flexibility to respond to real life. Detailed resources were presented for 2008; the needs for 2007 are covered by the need to ramp up for 2008. No significant scalability problems are apparent for future growth. The bottom line: the assumptions and calculation of needed resources seem reasonable, within an overall uncertainty of perhaps a factor ~2.

39 LHCb I. LHCb presented a computing model based on a significantly revised DAQ plan, with a planned output of 2 kHz. The committee did not try to evaluate the merit of the new data collection strategy, but tried to assess whether the computing resources seem appropriate given the new strategy. It is notable that the computing resources required for the new plan are similar (within 50%, except for disk) to those in the Hoffmann report even though the event rate is increased by an order of magnitude, largely because of the reduction in simulation requirements in the new plan. The committee was impressed by the level of planning that has gone into the LHCb computing model, and by the clarity and detail of the presentations. In general, the committee believes that LHCb presented a well-reasoned plan with appropriate resources for their proposed computing model.

40 LHCb II. Time variation of resource requirements. In the LHCb computing plan as presented, the peak CPU and network needs exceed the average by a factor of 2. This variation must be considered together with the expected resource use patterns of the other experiments. LHCb (and others) should consider scenarios to smooth out peaks in resource requirements. Monte Carlo. Even in the new plan, Monte Carlo production still consumes more than 50% of CPU resources. Any improvement in the performance of the MC or reduction in MC requirements would therefore have a significant impact on CPU needs. The group's current MC estimates, while difficult to justify in detail, seem reasonable for planning. Event size. The committee was concerned about the LHCb computing model's reliance on the small expected event size (25 kB). The main concern is I/O during reconstruction and stripping. LHCb believe that a factor of 2 larger event size would still be manageable. rdst size. The rdst size has almost as large an impact on computing resources as the raw event size. The committee recommends that LHCb develop an implementation of the rdst as soon as possible to understand whether the goal of 50 kB (including raw) can be achieved.

41 LHCb III. Event reconstruction and stripping strategy. The multi-year plan of event reconstruction and stripping seems reasonable, although 4 strippings per year may be ambitious. If more than 4 streams are written, there may be additional storage requirements. User analysis strategy. The committee was concerned about the use of Tier 1 centers as the primary user analysis facility. Are Tier 1 centers prepared to provide this level of individual user support? Will LHCb's planned analysis activities interfere with Tier 1 production activities? Calibration. Although it is not likely to have a large impact on computing plans, we recommend that details of the calibration plan be worked out as soon as possible. Data challenges. Future data challenges should include detector calibration and user analysis to validate those parts of the computing model. Safety factors. We note that LHCb has included no explicit safety factors (other than prescribed efficiency factors) in the computing needs given their model. This issue should be addressed in a uniform way among the experiments.

42 The Grid and the experiments. Use of Grid functionality will be crucial for the success of LHC computing. Experiments in general, and the Italian community in particular, need to ramp up their use of LCG in the data challenges: verify the models and give feedback to the developers. Strong interaction between the experiments and the LCG team is mandatory to match requirements and implementation. We cannot accommodate large overheads due to lack of optimization of resource usage.

43 Conclusion and Roadmap. These computing models are one step on the way to LHC computing: a very good outcome, in general specific and concrete. Some interaction and refinement in the upcoming months. In the course of 2005: Computing TDRs of the experiments; memorandum of understanding for the computing resources for LCG phase II; specific planning for CNAF and Tier2s in Italy. Expect to start building up the capacity in
