Continuation Report/Proposal: Vanderbilt CMS Tier 2 Computing Project
Reporting Period: November 1, 2010 to March 31, 2012


Continuation Report/Proposal for DOE Grant Number: DE SC

May 15, 2012

Continuation Report/Proposal: Vanderbilt CMS Tier 2 Computing Project
Reporting Period: November 1, 2010 to March 31, 2012

1) Applicant Institution: Vanderbilt University

2) Institutional Address:
   Department of Physics and Astronomy
   Box 1807 Station B
   Vanderbilt University
   Nashville, TN 37235

3) PIs at this institution:
   Prof. Charles F. Maguire (Project Director)
   Department of Physics and Astronomy
   Box 1807 Station B
   Vanderbilt University
   Nashville, TN 37235
   Telephone: (615)
   Email: charles.f.maguire@vanderbilt.edu

4) DOE Grant Number: DE SC

5) DOE Office of Science Program Office: Nuclear Physics, Heavy Ion

6) DOE Office of Science Program Contact: Dr. James Sowinski

1 Introduction

The mission of the Vanderbilt Tier 2 CMS Computing Project [1] is to be the primary computing resource for the CMS-HI research group investigating the production of the Quark Gluon Plasma in relativistic heavy ion collisions at the Large Hadron Collider. The Project has been initially funded over a three-year period, November 1, 2010 to October 31, 2013. The exact funding scenario is described in the original budget justification document, which is attached in the first section of the Appendix to this report. Spending authorization for the Project was granted on November 1, 2010 by the DOE Office of Science program contact at the time, Dr. Gulshan Rai. A Project Management and Acquisition Plan (PMAP), with three associated Memoranda of Understanding, are the guiding documents for implementing the mission of the Project. The five-year plan of hardware acquisition and staff support was detailed in the PMAP, along with the annual costs of these components. Two tables summarizing these categories of the Project and their annual costs are provided in Section 2 of this report. The eventual cumulative 5-year cost to develop the Project was calculated to be $2.40M. The decision was made to fund the Project initially for 3 years at a total cost of $1.84M, with $1.74M provided for the first 2 years and $0.10M provided for the third year of this initial phase. The precise allocations of these funds, as originally estimated in 2010, are contained in the first section of the Appendix. [2]

This continuation report presents a justification for the final $0.10M funding award to the Project in its third year. The report summarizes the actual experience in carrying out the Project mission, as well as the expenditures incurred as of March 31, 2012. A brief history of the performance and achievements of the Project in meeting its stated deliverables and milestones is also presented. In addition, this report proposes the tasks to be done, and their costs, for the remaining duration of the initial 3-year funding period. The management structure of this Project, as originally described in the PMAP, is given in the second Appendix section of this report.

2 Actual Project Development as Compared to the Project Plan

The development of the Vanderbilt CMS Tier 2 Computing Project has closely tracked the performance of the LHC facility in delivering heavy ion collision events. The original PMAP document was composed in 2010, before any heavy ion collisions had taken place at the LHC. The best assumptions then available for the performance of the machine, and the proposed LHC schedule known in 2010, were used to develop a Project construction plan of hardware acquisitions. This construction plan is shown in Table 1. The costs listed in Table 1 are for CPU nodes, disk storage, and operating staff support, all at the Vanderbilt Tier 2. In addition, Table 2 gives the estimated costs of tape storage at the FNAL Tier 1, which functions as the CMS-mandated secondary storage archive center for the HI data sets.

The first heavy ion collisions at the LHC took place in November 2010, about 10 days after the funding authorization for the Project. The performance of the LHC in this first heavy ion run was an unqualified success.

[1] A glossary of acronym definitions is provided in the third Appendix section of this report.
[2] Since the spending authorization was granted as of November 1, 2010, that date is taken as the official start of the Project for purposes of this report.

Even more so, in the second heavy ion run, which took place in November 2011, the performance of the LHC was nothing short of spectacular. Far more heavy ion data were recorded in 2011 by the CMS detector than had been anticipated in the original PMAP. This huge success had in fact been anticipated in the few months prior to November 2011. The CMS computing management was able to persuade the Tier 1 computing center at Lyon, France to host a significant amount of data (230 TB) of particular interest to the French heavy ion physicists. Without this storage commitment from the French group, which had been presented as a hopeful but unconfirmed possibility in the PMAP, there would not have been enough storage space available to take advantage of the excellent heavy ion collision performance of the LHC.

Table 1: Original Projected Expenditures for the Vanderbilt Tier 2 Center

Category                            FY11        FY12        FY13        FY14        FY15        Total
Compute/Disk Acquisitions
  New CPUs (cores)
  New CPUs (HS06)
  Total CPUs (cores)
  Total CPUs (HS06)
  New Disk (TB)
  Total Disk (TB)
Hardware and Staffing Costs to the DOE
  CPUs                              $137,600    $196,000    $264,000    $112,000    $0          $709,600
  Disk                              $121,250    $56,000     $45,000     $15,255     $0          $237,505
  Total Hardware                    $258,850    $252,000    $309,000    $127,255    $0          $947,105
  Staffing (DOE Cost)               $180,476    $188,285    $195,816    $203,649    $211,795    $980,021
  Staff+Hardware Total              $439,326    $440,285    $504,816    $330,904    $211,795    $1,927,126
Staffing Support Decomposition (FTEs)
  By DOE
  By Vanderbilt
  Staffing (Vanderbilt Cost)        $180,476    $188,285    $195,816    $203,648    $211,795    $980,021
CPU, Disk, and FTE Cost Assumptions
  Cost/core with 3 GB
  Disk cost per TB
  Total cost per FTE                $120,317    $125,523    $130,544    $135,766    $141,197

Table 2: Original Funding Profile for Fermilab Tier 1 Tape Archive

Category            FY11        FY12          FY13        FY14        FY15        Total
Tape Volume (PB)
Cost to DOE         $94,000     $103,000 a    $40,000     $116,000    $120,000    $473,000

a Includes a $25,000 capital charge for the purchase of an LTO5 tape drive
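The cost rows that survive in Table 1 can be cross-checked directly; the short Python sketch below (using the dollar figures exactly as they appear in the table) reproduces the Total Hardware row, the Staff+Hardware row, and the Total column.

    # Cross-check of the cost rows of Table 1 (dollar amounts as listed, FY11-FY15).
    cpus  = [137_600, 196_000, 264_000, 112_000, 0]
    disk  = [121_250,  56_000,  45_000,  15_255, 0]
    staff = [180_476, 188_285, 195_816, 203_649, 211_795]

    hardware = [c + d for c, d in zip(cpus, disk)]
    print(hardware)             # [258850, 252000, 309000, 127255, 0] -> "Total Hardware" row
    print(sum(hardware))        # 947105 -> "Total Hardware", Total column
    print([h + s for h, s in zip(hardware, staff)])
                                # [439326, 440285, 504816, 330904, 211795] -> "Staff+Hardware Total" row
    print(sum(hardware) + sum(staff))
                                # 1927126 -> "Staff+Hardware Total", Total column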

The most significant impact of the LHC producing a far larger heavy ion collision data volume than expected has been to change the ratio of disk purchases to CPU purchases for the Project. For example, the first two years of purchases were expected to result in 765 TB of disk storage. Instead, 960 TB of disk storage has been purchased, while correspondingly fewer CPUs have been purchased in order to stay within the annual budget constraints. The reduction in CPU purchases has been proportionately somewhat less than the increase in disk purchases because the Dell Computer Company has made an extremely attractive CPU configuration available to the LHC experiment collaboration members. The specific hardware purchases made in the first two years, as of March 31, 2012, were as follows:

1) 60 dual quad-core compute nodes and 3 dual quad-core gateway nodes: Dell R410 nodes with 2.4 GHz processors, 24 GB memory
2) 19 dual hex-core compute nodes: Dell R410 nodes with 2.93 GHz processors, 48 GB memory
3) 20 depots with 72 TB raw storage each; 48 TB usable per depot after RAID 5 redundancy
4) 1 network switch: Extreme X650t
5) 2 compute rack switches: Extreme X460
6) a secondary 10 Gbps switch: Dell PowerConnect
7) 8 infrastructure nodes (SE6, SE7, SE8, SE9, SE10, FSCK server, server for network testing, L-Server node)

The first three items above constitute the bulk of the CPUs and the storage system. Item 7) can be classified as necessary infrastructure to meld the CPUs and the storage system together. As of March 31, 2012 the Project had 704 CPU cores and 960 TB of disk available for externally submitted physics analysis production. In addition, there are 24 CPU cores available in the gateway nodes for purposes of software development and job testing.

3 Performance of the Tier 2 Project, November 2010 to March 2012

Even before Project spending authorization was received on November 1, 2010, there had been discussions with the Project's technical advisory committee on the scope of the first set of hardware purchases. This set of hardware purchases was intended to meet the initial deliverables and milestones of the Project. In turn, these deliverables and milestones were coupled to the acquisition of the first HI data at the LHC in the November 2010 running period. Essentially it was promised that the HI RECO files from the 2010 run would be received at Vanderbilt in time for the collaboration's physicists to analyze the data before the May 2011 Quark Matter conference. The RECO files, with a volume of 190 TB, were produced at the CERN Tier 0 in February 2011, and the first transfers of those files to Vanderbilt began on March 6, 2011. The transfers were completed in 19 days, corresponding to an average rate of 10 TB/day. This rate was exactly as specified in the original Project proposal.
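The average rate quoted above follows directly from the transferred volume and the elapsed time; a minimal Python sketch of that arithmetic (decimal units assumed, 1 TB = 10^6 MB) is shown below.

    # Average transfer rate implied by a given volume and duration.
    def average_rate(volume_tb, days):
        """Return (TB/day, MB/s) for volume_tb transferred over `days` days."""
        tb_per_day = volume_tb / days
        mb_per_s = tb_per_day * 1e6 / 86_400   # 86400 seconds per day
        return tb_per_day, mb_per_s

    tb_day, mb_s = average_rate(190, 19)       # the 2010 HI RECO transfer
    print(round(tb_day, 1), round(mb_s))       # 10.0 TB/day, about 116 MB/s sustained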

This initial transfer was not without some difficulties, effectively providing a series of learning-curve issues. The CMS transfer system, called PhEDEx, had to be tuned to the particular configuration of the Vanderbilt storage system. The I/O layer of the storage system was itself just in its first generation, with some known deficiencies. On some days the transfer rate exceeded 20 TB/day, while on other days there were outages in which no files were transferred. These outages had to be studied and repaired. Nonetheless, by March 25, 2011 all the HI RECO files had been transferred.

Preceding and simultaneous with this transfer effort, members of the HI and HEP groups at Vanderbilt were intensely preparing the Tier 2 to accept remotely submitted physics analysis jobs. Such jobs are entered into the site by the CMS remote batch system called CRAB. Again, this task required the adaptation of the CMS software to the local batch system (PBS), including some repair of the Open Science Grid (OSG) middleware package on which the CRAB system depends. The Vanderbilt system was opened to external CRAB job submission in early April 2011, thus allowing CMS physicists the opportunity to process the full 2010 HI data set for the mid-May Quark Matter 2011 conference. By all accounts, the CMS physicists made a highly competitive, and in some cases unique, impact on the advancement of the understanding of the Quark Gluon Plasma at this conference using this complete data set.

Figure 1: PhEDEx rate and quality plots for the transfer of HI 2010 RAW data from FNAL to Vanderbilt in August 2011. The peak transfer rates (left side) were above 350 MB/s, corresponding to 24 TB/day.

In June 2011, after the Quark Matter conference, the Tier 2 Project members continued working on the next generation of the local storage software system. The new generation of this storage software was installed in mid-July 2011. In early August 2011 a week-long data transfer challenge exercise was conducted with FNAL. The goal of this exercise was to transfer an average of 20 TB/day over one week, in order to be ready for the HI 2011 data-taking period coming in November. The new storage software system also had better, dynamic error-correction features to maximize the efficiency of the data transfer process. The exercise used the 150 TB of HI 2010 RAW data which were still on disk at FNAL for this purpose. Although there were again some incidental problems during this week-long transfer (and some of these were external to Vanderbilt), the complete transfer was accomplished within the 7-day period, meeting the goal of 20 TB/day. The instantaneous transfer rates and the quality of the data transfers (i.e., how many transfers succeeded without needing a retry) are shown in Figure 1. After this exercise the Tier 2 Project was certified ready to receive the HI 2011 run data, as had been stipulated in the deliverables and milestones list.

During this Summer 2011 period the Tier 2 Project members consulted with the CMS-HI group on the purchase-order decisions for the upcoming HI run.

Given the optimism about how well the LHC was expected to deliver HI collision data, the decision was made to purchase more disk space than had been originally planned in the PMAP. This new hardware was commissioned in September 2011, in sufficient time before the HI run. During the run itself the data transfer operations proceeded very smoothly, a consequence of all the preparation which had been done in the months before. A total of 385 TB was transferred in 21 days, including a few days when the LHC was not producing data.

The Tier 2 Project also went beyond its original scope for the 2011 run in that a prompt skimming task was successfully commissioned. The prompt skimming task involves selecting special sets of events from the full RECO files and making a much smaller volume of enriched-sample RECO files. In this case the enrichment was based on looking for high-pT jets. The prompt skimming process is heavily I/O intensive, and had always been done at a full-scale Tier 1 facility in CMS. Prompt skimming had not been done before at any CMS Tier 2, since these sites typically have less powerful I/O hardware systems. Nevertheless, the HI group at Vanderbilt volunteered to develop a special I/O package in the CMS framework which makes use of native local storage commands to overcome these limitations. The local prompt skimming was successfully demonstrated only a few days before the HI 2011 run began. From the large 385 TB set of RECO files an enriched set of only 10 TB was produced, which could easily be copied to smaller computer sites. These prompt skim files in fact enabled a first rapid publication of the HI 2011 data analysis in Spring 2012.

Figure 2: Three weeks of daily monitoring of the Vanderbilt Tier 2 in early 2012 by the CMS site status system. The one red mark on February 7 in the Old SAM Availability metric was due to a flaw in an obsolete component of the external monitoring system. The new version recorded 100% site availability for this particular metric on that day.

Like all Tier 2 systems in CMS, the performance of the Vanderbilt site is monitored continuously by external software systems, as illustrated in Figure 2. Specifically, seven different metrics are measured, corresponding to successfully submitted external jobs and the ability to send and receive data files using the standard CMS data transfer software. Sites receive a daily Site Readiness Status mark. It is stipulated that a well-performing Tier 2 site should achieve an overall passing mark on at least 80% of the weekdays. The marks on the weekend days, indicated by the gray-shaded columns in Figure 2, are not counted in this 80% passing threshold.
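The 80% weekday criterion is straightforward to evaluate from the daily marks; the sketch below is purely illustrative (it is not the CMS monitoring code) and assumes a simple mapping from dates to pass/fail marks.

    from datetime import date, timedelta

    def weekday_readiness(daily_marks):
        """Fraction of weekdays with a passing Site Readiness mark.

        daily_marks maps a date to True (passing) or False; weekend days
        are excluded, as in the 80% weekday criterion described above.
        """
        weekdays = [d for d in daily_marks if d.weekday() < 5]   # Monday-Friday only
        return sum(daily_marks[d] for d in weekdays) / len(weekdays)

    # Hypothetical three-week window similar to Figure 2, with every day passing.
    start = date(2012, 2, 1)
    marks = {start + timedelta(days=i): True for i in range(21)}
    print(weekday_readiness(marks) >= 0.80)   # True -> meets the threshold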

4 Future Project Development as Compared to the Project Plan

The future Project development discussion can be separated into near-term and long-term phases. The near-term phase corresponds to completing the initial 3-year Project construction, scheduled for purposes of this report to finish in October 2013. The long-term phase addresses the post-construction period, in which there will be continued operating costs as well as a plan for hardware replacements. This continuation report will concentrate on the near-term phase, with the expectation that the post-construction phase will be addressed in a new Project proposal written in early 2013.

These two phases happen to coincide with a major change in the operation of the LHC itself. According to the present understanding, which will be revisited at the end of June 2012, the LHC will go into shutdown mode at the end of calendar 2012. The shutdown may last 18 months or more. The purpose of the shutdown is to make upgrades to the dipole magnets of the collider such that these may be safely operated at the maximum magnetic field needed to bend 7 TeV protons as originally designed. The third heavy ion run is planned to occur before this 18-month shutdown begins.

In the PMAP document the third year of the Project had been projected as the shutdown year for the LHC. Hence, no major milestones or deliverables were set for most of the third year. Instead, the reality is that the group is very busy planning for what will be needed for the HI run in 2012. The main technical group in CMS making the plan for the run is the High Level Trigger (HLT) team, which implements the physics goals of the data taking. These physics goals are determined by the heavy ion Physics Interest Groups (PInGs), based on simulations of the events. Here again there is much uncertainty because most, if not all, of the HI collisions in 2012 will be protons on lead. These collisions will probe cold nuclear matter effects in the LHC energy regime, much as the deuteron-on-gold collisions did for the RHIC energy regime. The cold nuclear matter effects serve as an important cross check on the physics conclusions derived from the analyses of lead-on-lead collision events.

The HLT decisions for the proton-on-lead events are also constrained by the downstream computing resources in CMS, just as in the proton-on-proton collisions. The rate of events accepted by the HLT must be consistent with the rate at which these events can be promptly reconstructed at the CERN Tier 0, and the rate at which these events can be transferred from the Tier 0 to external sites such as FNAL and Vanderbilt. Finally, the HLT decisions are constrained in terms of the total volume of the produced data. Given the approximately three weeks of collisions, a total volume of approximately 500 TB (RAW + RECO) is reasonable to transfer to FNAL and Vanderbilt. A factor of two more volume would likely be impossible to handle. By comparison, the data volumes at RHIC are approaching this magnitude, but those volumes are obtained after several months of running, not several weeks.

The formal meetings of the heavy ion HLT 2012 run study group have just begun as of this report writing in May 2012. The Tier 2 Project team members are attending these HLT study group meetings.
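For scale, a short back-of-the-envelope calculation (an illustrative sketch: the 500 TB and three-week figures come from the text above, while the derived rate is not stated in the report) indicates the sustained transfer rate such a sample implies, which is comparable to the 20 TB/day demonstrated in the August 2011 exercise.

    # Sustained rate implied by ~500 TB (RAW + RECO) over ~3 weeks of collisions.
    total_tb = 500                           # assumed total volume to transfer
    run_days = 21                            # approximately three weeks
    tb_per_day = total_tb / run_days
    mb_per_s = tb_per_day * 1e6 / 86_400     # 1 TB = 10^6 MB, 86400 s per day
    print(round(tb_per_day, 1), round(mb_per_s))   # about 23.8 TB/day, about 276 MB/s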
A decision will be made in late June 2012 on the hardware purchases for the HI 2012 run, based on the information provided by the HLT study group as well as the scheduling information presented by the LHC management. There is some possibility that the HI 2012 run may be postponed until February 2013; whether this will happen will be known for certain in late June 2012.

If the next HI run is postponed until February 2013, then there is no immediate need to begin hardware purchases in early July. Instead, we can use the Summer 2012 period to study further the physics simulations for protons on lead, as well as take advantage of any price improvements on the hardware. As a rough estimate, the Tier 2 Project team is considering a purchase of 700 TB of new disk storage and 500 more CPU cores to accommodate the HI 2012 run production and data analysis. Again, there was no study at the time of the PMAP composition for this LHC collision scenario, and this is uncharted territory for the Project.

5 Project Budget

The actual expenditures for the Project are adhering closely to the original budget justification document. The hardware mix has been adjusted to be consistent with the substantially better performance of the LHC compared to what was projected two years ago. As of the last quarterly Project report, extending up to March 31, 2012 (17 months into the Project's initial three-year phase), the high-level category expenditures have been as follows:

1) Total hardware expenses have been $461.2K.
2) Staff operating expenses have totaled $262.4K.
3) Tape archive costs are estimated at $169.0K. [3]

[3] A $66K actual cost for the 2010 data is in the process of being reimbursed to FNAL, and $103K is the estimated cost for archiving the larger set of 2011 data.

The cumulative cost of the above items is $892.6K, leaving an unencumbered balance of $947.4K based on the three-year Project budget of $1,840K, or an $847.4K unencumbered balance presently at Vanderbilt. The total $1,840K amount is being provided in two successive periods of $1,740K and $100K.

The plan for using the Project balance of $892.6K is as follows, based on the original budget justification (see Tables 1 and 2) and assuming a final date of October 31, 2013. This final date is three years after the start of spending authorization.

1) Future hardware expenses: $486K
2) Future staff operating expenses: $329K
3) Future tape archive costs: $78K

Again, these amounts are generally consistent with the original PMAP projections, except for the additional tape archive costs in item 3). The original PMAP did not foresee an LHC run in calendar year 2012, but rather in calendar years 2013 and 2014. Item 3) moves part of those two out-years of tape costs into FY 2013, when they will actually be needed. The staff operating costs in item 2) are for the period April 2012 to October 2013. These are about $25K higher than estimated in the original PMAP, reflecting slightly increased salary costs due to staff promotions.
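The expenditure and balance figures quoted above can be reconciled in a few lines of Python (a quick arithmetic sketch, with the quoted $K amounts written out in whole dollars).

    # Reconciliation of the expenditures to date and the remaining balances.
    hardware, staff, tape = 461_200, 262_400, 169_000
    spent = hardware + staff + tape
    print(spent)                 # 892600 -> the $892.6K cumulative cost of items 1)-3)
    print(1_840_000 - spent)     # 947400 -> the $947.4K unencumbered balance of the 3-year budget
    print(1_740_000 - spent)     # 847400 -> the $847.4K unencumbered balance presently at Vanderbilt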

There is a serious concern at this time that the cost of disk storage during calendar year 2012 will not match what was estimated in 2010 and quoted in Table 1; thus the hardware expense in item 1) may be somewhat underestimated. The country of Thailand has unique manufacturing facilities for critical disk storage components, and all of these manufacturing facilities were destroyed in flooding during 2011. The result has been a classic surge in prices according to the law of supply and demand, as much as a factor of two in a few weeks' time in early 2012. Although the pricing surge has abated somewhat, there are still large fluctuations in price and major disk inventories remain scarce. In this respect, a delay of the 2012 purchases beyond June may be beneficial, assuming that there is a corresponding delay in the next HI run. The Project management team will contact the DOE program manager if it becomes certain that there will be a serious negative impact on the budget plan.

Lastly, although it is beyond the scope of the present 3-year Project budget, one can anticipate some part of the future operating costs starting in November 2013. Table 1 shows staff costs of more than $200K in each of the two years starting in November 2013. Similarly, from Table 2 it can be assumed that the tape archive costs will be approximately $100K after the LHC returns to operational status with new HI runs. There will also be replacement costs for some of the hardware which has either become obsolete and cost-inefficient to maintain, or else has simply failed after the warranty period. The precise calculation of these replacement costs will be material for a new Project proposal document to be written in early calendar 2013.

A Original Budget Justification for CMS-HI Computing Project (August 2010)

A funding proposal for a project to develop a computational resource which will meet the majority of the computing needs of the CMS-HI research program in the U.S. has been made. This project is to be funded in two consecutive time segments: August 15, 2010 to August 14, 2012, and August 15, 2012 to August 14, 2013. The following budget is requested:

First time period, August 2010 to August 2012

Personnel Costs, item B2: $285,588
This cost comprises 53 person-months of effort by 6 technical staff at Vanderbilt's ACCRE computer center, calculated at an annual FTE rate of $64,666. These technical staff will be responsible for the installation and maintenance of the computer hardware to be purchased for this project. The staff will also ensure that the software to utilize the hardware functions properly, and that the external network connections work at the desired rates.

Fringe Benefits, item C: $71,968
This is calculated at the rate of 25.2% on the salary amount shown in item B2.

Permanent Equipment, item D: $1,184,000
This amount consists first of computer hardware in the amount of $947,000, comprising compute cores and associated disk space for processing the data from heavy ion collisions at the LHC recorded by the CMS detector. A second amount of $237,000 will provide tape archival storage of the raw data and the processed results.

Total Direct Costs, item H: $1,541,556
This is the sum of the salary, fringe benefits, and hardware costs.

Indirect Costs, item I: $198,444
This is the overhead charge on the salary plus fringe benefit costs, based on a rate of 55.5% during this two-year period.

Total Project Cost, item L: $1,740,000
The sum of all the previously costed items.

Second time period, August 2012 to August 2013

Personnel Costs, item B2: $51,200
This cost comprises 9 person-months of effort by 2 technical staff at Vanderbilt's ACCRE computer center, calculated at an annual FTE rate of $68,267. These staff will have the same duties as described previously. In addition, this staff effort will be supplemented by staff supported by Vanderbilt University.

Fringe Benefits, item C: $12,902
This is calculated at the rate of 25.2% on the salary amount shown in item B2.

Total Direct Costs, item H: $64,102
This is the sum of the salary and fringe benefits.

Indirect Costs, item I: $35,897
This is the overhead charge on the salary plus fringe benefit costs, based on a rate of 56% during this third-year period.

Total Project Cost, item L: $100,000
The sum of all the previously costed items.
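The arithmetic of the two budget periods above can be reproduced with a short Python sketch (rounding to whole dollars assumed; the input figures are the salary, rate, and equipment amounts stated in the text).

    # Reproduce the budget-justification arithmetic for both funding periods.
    def period_totals(salary, fringe_rate, indirect_rate, equipment=0):
        """Return (fringe, direct, indirect, total) for one funding period."""
        fringe = round(salary * fringe_rate)
        direct = salary + fringe + equipment
        indirect = round((salary + fringe) * indirect_rate)
        return fringe, direct, indirect, direct + indirect

    # First period: August 2010 to August 2012
    print(period_totals(285_588, 0.252, 0.555, equipment=1_184_000))
    # (71968, 1541556, 198444, 1740000) -> items C, H, I, and L

    # Second period: August 2012 to August 2013
    print(period_totals(51_200, 0.252, 0.56))
    # (12902, 64102, 35897, 99999) -> the $100,000 award to within a dollar of rounding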

B Project Management Structure Description from the PMAP

Figure 3: Management Organization Chart for the Vanderbilt Tier 2 equipment project. The chart links the Office of Nuclear Physics Program Manager (Gulshan Rai), the Vanderbilt Tier 2 Project Manager (Charles Maguire), the Technical Advisory Committee (Bolek Wyslouch, Chair; the two HI Physics Convenors; the Tier 0 Data Ops Liaison; the FNAL Tier 1 Liaison; the ACCRE Technical Manager; the ACCRE Operations Manager), the Financial Advisory Board (Carie Kennedy, Chair; Physics Finance Staff), the CMS-HI Production Local Manager (VU Post-Doc, TBA), the Tier 2 Review Board (Ken Bloom, Chair), and the VU HEP Technical Liaison (Dan Engh).

Prof. Charles Maguire of Vanderbilt University is the designated Project Manager having ultimate responsibility for the disposition of the project funds. In this role he will be assisted by a Technical Advisory Board chaired by Prof. Boleslaw Wyslouch, who is the overall US CMS-HI Project Manager. The Technical Advisory Board will consist of the two Heavy Ion physics conveners designated by CMS management, liaison persons from the CMS Tier 0 and FNAL Tier 1 data operations groups, and the technical and operations managers of the ACCRE organization. It will be the responsibility of this Technical Advisory Board to provide input to the Tier 2 Project Manager on the decisions for annual equipment purchases and to advise on the operational policies and priorities for the Vanderbilt Tier 2 facility. A second advisory committee, consisting of the ACCRE financial manager and the financial staff of the Vanderbilt Department of Physics and Astronomy, will monitor the expenditures on this project for both hardware and ACCRE technical staff. There is already in the US CMS organization a Tier 2 Review Board headed by Prof. Ken Bloom of the University of Nebraska. For the past several years this board has been monitoring the performance of the existing US Tier 2 facilities which are associated with the Fermilab Tier 1 facility. [4]

[4] On May 14, 2012 there was an external review of the Vanderbilt Tier 2 storage system by a team of experts assembled by Prof. Bloom.

C Glossary of Acronyms Included in the PMAP

- ACCRE - Advanced Computing Center for Research and Education at Vanderbilt University
- AlCa - Alignment and Calibration data, a specific set of data files designed to diagnose the alignment and calibration constants needed to process experimental data
- AOD - Analyzed Object Data, a streamlined set of data files designed to be processed quickly for experimental analysis
- CERN - Originally the European Center for Nuclear Research, on the border between Switzerland and France, but now meaning the major high energy physics laboratory hosting the Large Hadron Collider (LHC) accelerator
- CERN-SPS - The Super Proton Synchrotron accelerator at CERN, now one of the accelerator stages of the LHC
- CMS - Compact Muon Solenoid experiment collaboration, one of the big experiment collaborations at the LHC
- CMS-HI - CMS Heavy Ion program
- CMSSW - CMS software
- CMSSW_x_y_z - a specific release of CMSSW
- CPU - central processing unit
- DQM - Data Quality Monitoring
- ESD - Event Summary Data, a special set of data files giving global summary information about given sets of data-taking time periods in an experiment
- Fermilab - Fermi National Accelerator Laboratory
- FNAL - Fermi National Accelerator Laboratory
- GB - gigabyte, 10^9 bytes, where one byte is a measure of computer memory or disk storage sufficient to hold one character of text
- Gbps - gigabit per second, a measure of network file transfer speed
- HEP - high energy physics
- HI - heavy ion
- HLT - High Level Trigger
- HS06 - a standard measure of CPU processing power adopted by all the high energy physics experiments at CERN
- LHC - Large Hadron Collider, the major new accelerator constructed at CERN and designed to reach collision energies of 7 TeV on 7 TeV for protons

- LNS - the Laboratory for Nuclear Science at MIT
- L-Store - a disk storage system for extremely large volumes of data
- L-Server - hardware unit in direct support of the L-Store system
- MIT - Massachusetts Institute of Technology
- MOU - Memorandum of Understanding; there are 3 MOUs associated with this PMAP: one addressing the CERN Tier 0 data production and transfer responsibilities, the second addressing the Fermilab Tier 1 data archiving responsibilities, and the third addressing the Tier 2 installation and operations responsibilities of ACCRE
- PAG - Physics Analysis Group, a set of CMS members focused on a number of related physics analysis topics
- PAT - physics analysis tools, a collection of software packages in CMS which are designed to facilitate data analysis
- Pb + Pb - lead-lead collisions or data taken at the LHC
- PB - petabytes, a measure of disk or tape storage meaning 10^15 bytes
- PhEDEx - Physics Event Data Export, the ensemble of software which provides for the routine transport of large file sets in CMS
- PHENIX - one of the experiment collaborations at the Relativistic Heavy Ion Collider, standing for Pioneering High Energy Nuclear Interaction eXperiment
- PMAP - Project Management and Acquisition Plan
- pp - proton-proton, as in proton-proton collisions or proton-proton data
- QCD - Quantum Chromodynamics
- RECO - reconstructed data files, the first processing stage of the raw data files in the CMSSW system, containing extra information to assist the first analyses of the data
- re-reco - a second or third reconstruction pass over the raw data files, or a subset of these files, which is done with improved calibration constants or better event reconstruction algorithms
- RHI - relativistic heavy ion
- RHIC - Relativistic Heavy Ion Collider
- SE - storage element, a hardware unit designed to transfer data efficiently into or out of the storage system
- TB - terabytes, a measure of disk or tape storage meaning 10^12 bytes
- UIC - University of Illinois at Chicago
- VUA&S - The School of Arts & Science division of Vanderbilt University

- VUMC - The Medical Center division of Vanderbilt University
- VUSE - The School of Engineering division of Vanderbilt University
- WLCG - Worldwide LHC Computing Grid
- ZDC - Zero Degree Calorimeter


REQUEST FOR PROPOSALS Consultant to Develop Educational Materials for the Applied Informatics Team Training REQUEST FOR PROPOSALS Consultant to Develop Educational Materials for the Applied Informatics Team Training Table of Contents: Part I. Overview Information Part II. Full Text of Announcement Section I.

More information

ATLAS Experiment and GCE

ATLAS Experiment and GCE ATLAS Experiment and GCE Google IO Conference San Francisco, CA Sergey Panitkin (BNL) and Andrew Hanushevsky (SLAC), for the ATLAS Collaboration ATLAS Experiment The ATLAS is one of the six particle detectors

More information

Monte Carlo Production on the Grid by the H1 Collaboration

Monte Carlo Production on the Grid by the H1 Collaboration Journal of Physics: Conference Series Monte Carlo Production on the Grid by the H1 Collaboration To cite this article: E Bystritskaya et al 2012 J. Phys.: Conf. Ser. 396 032067 Recent citations - Monitoring

More information

The evolving role of Tier2s in ATLAS with the new Computing and Data Distribution model

The evolving role of Tier2s in ATLAS with the new Computing and Data Distribution model Journal of Physics: Conference Series The evolving role of Tier2s in ATLAS with the new Computing and Data Distribution model To cite this article: S González de la Hoz 2012 J. Phys.: Conf. Ser. 396 032050

More information

LHCb Computing Resource usage in 2017

LHCb Computing Resource usage in 2017 LHCb Computing Resource usage in 2017 LHCb-PUB-2018-002 07/03/2018 LHCb Public Note Issue: First version Revision: 0 Reference: LHCb-PUB-2018-002 Created: 1 st February 2018 Last modified: 12 th April

More information

Annual Report for the Utility Savings Initiative

Annual Report for the Utility Savings Initiative Report to the North Carolina General Assembly Annual Report for the Utility Savings Initiative July 1, 2016 June 30, 2017 NORTH CAROLINA DEPARTMENT OF ENVIRONMENTAL QUALITY http://portal.ncdenr.org Page

More information

ATLAS Distributed Computing Experience and Performance During the LHC Run-2

ATLAS Distributed Computing Experience and Performance During the LHC Run-2 ATLAS Distributed Computing Experience and Performance During the LHC Run-2 A Filipčič 1 for the ATLAS Collaboration 1 Jozef Stefan Institute, Jamova 39, 1000 Ljubljana, Slovenia E-mail: andrej.filipcic@ijs.si

More information

Netherlands Institute for Radio Astronomy. May 18th, 2009 Hanno Holties

Netherlands Institute for Radio Astronomy. May 18th, 2009 Hanno Holties Netherlands Institute for Radio Astronomy Update LOFAR Long Term Archive May 18th, 2009 Hanno Holties LOFAR Long Term Archive (LTA) Update Status Architecture Data Management Integration LOFAR, Target,

More information

150 million sensors deliver data. 40 million times per second

150 million sensors deliver data. 40 million times per second CERN June 2007 View of the ATLAS detector (under construction) 150 million sensors deliver data 40 million times per second ATLAS distributed data management software, Don Quijote 2 (DQ2) ATLAS full trigger

More information

Computing / The DESY Grid Center

Computing / The DESY Grid Center Computing / The DESY Grid Center Developing software for HEP - dcache - ILC software development The DESY Grid Center - NAF, DESY-HH and DESY-ZN Grid overview - Usage and outcome Yves Kemp for DESY IT

More information

Request for tenders proposing hosting arrangements for the ECPGR Secretariat/EURISCO

Request for tenders proposing hosting arrangements for the ECPGR Secretariat/EURISCO Request for tenders proposing hosting arrangements for the ECPGR Secretariat/EURISCO Dear National Coordinators, Based on the outcome of the External Independent Review of the ECPGR Programme, the ECPGR

More information

ESNET Requirements for Physics Reseirch at the SSCL

ESNET Requirements for Physics Reseirch at the SSCL SSCLSR1222 June 1993 Distribution Category: 0 L. Cormell T. Johnson ESNET Requirements for Physics Reseirch at the SSCL Superconducting Super Collider Laboratory Disclaimer Notice I This report was prepared

More information

Computing. DOE Program Review SLAC. Rainer Bartoldus. Breakout Session 3 June BaBar Deputy Computing Coordinator

Computing. DOE Program Review SLAC. Rainer Bartoldus. Breakout Session 3 June BaBar Deputy Computing Coordinator Computing DOE Program Review SLAC Breakout Session 3 June 2004 Rainer Bartoldus BaBar Deputy Computing Coordinator 1 Outline The New Computing Model (CM2) New Kanga/ROOT event store, new Analysis Model,

More information

REPORT 2015/149 INTERNAL AUDIT DIVISION

REPORT 2015/149 INTERNAL AUDIT DIVISION INTERNAL AUDIT DIVISION REPORT 2015/149 Audit of the information and communications technology operations in the Investment Management Division of the United Nations Joint Staff Pension Fund Overall results

More information

Project Overview and Status

Project Overview and Status Project Overview and Status EVLA Advisory Committee Meeting, March 19-20, 2009 Mark McKinnon EVLA Project Manager Outline Project Goals Organization Staffing Progress since last meeting Budget Contingency

More information

Accountable Care Organizations: Testing Their Impact

Accountable Care Organizations: Testing Their Impact Accountable Care Organizations: Testing Their Impact Eligibility Criteria * * Indicates required Preference will be given to applicants that are public agencies or are tax-exempt under Section 501(c)(3)

More information

Total Cost of Ownership: Benefits of the OpenText Cloud

Total Cost of Ownership: Benefits of the OpenText Cloud Total Cost of Ownership: Benefits of the OpenText Cloud OpenText Managed Services in the Cloud delivers on the promise of a digital-first world for businesses of all sizes. This paper examines how organizations

More information

A L I C E Computing Model

A L I C E Computing Model CERN-LHCC-2004-038/G-086 04 February 2005 A L I C E Computing Model Computing Project Leader Offline Coordinator F. Carminati Y. Schutz (Editors on behalf of the ALICE Collaboration) i Foreword This document

More information

TIER Program Funding Memorandum of Understanding For UCLA School of

TIER Program Funding Memorandum of Understanding For UCLA School of TIER Program Funding Memorandum of Understanding For UCLA School of This Memorandum of Understanding is made between the Office of Information Technology (OIT) and the School of ( Department ) with reference

More information

CMS High Level Trigger Timing Measurements

CMS High Level Trigger Timing Measurements Journal of Physics: Conference Series PAPER OPEN ACCESS High Level Trigger Timing Measurements To cite this article: Clint Richardson 2015 J. Phys.: Conf. Ser. 664 082045 Related content - Recent Standard

More information

Service Description: CNS Federal High Touch Technical Support

Service Description: CNS Federal High Touch Technical Support Page 1 of 1 Service Description: CNS Federal High Touch Technical Support This service description ( Service Description ) describes Cisco s Federal High Touch Technical support (CNS-HTTS), a tier 2 in

More information

Governing Body 313th Session, Geneva, March 2012

Governing Body 313th Session, Geneva, March 2012 INTERNATIONAL LABOUR OFFICE Governing Body 313th Session, Geneva, 15 30 March 2012 Programme, Financial and Administrative Section PFA FOR INFORMATION Information and communications technology questions

More information

NCP Computing Infrastructure & T2-PK-NCP Site Update. Saqib Haleem National Centre for Physics (NCP), Pakistan

NCP Computing Infrastructure & T2-PK-NCP Site Update. Saqib Haleem National Centre for Physics (NCP), Pakistan NCP Computing Infrastructure & T2-PK-NCP Site Update Saqib Haleem National Centre for Physics (NCP), Pakistan Outline NCP Overview Computing Infrastructure at NCP WLCG T2 Site status Network status and

More information

AEC Broadband Update 2018

AEC Broadband Update 2018 AEC Broadband Update 2018 AEC has received numerous calls and emails in support of our Co-op moving forward with broadband opportunities. This report is intended to bring our members up-to-date on where

More information