CMS-HI-US Computing Proposal Update to the U.S. DOE


Version V1: December 16, 2009 at 14:45 CST

1) Applicant Institution: Vanderbilt University

2) Institutional Address:
Department of Physics and Astronomy
Box 1807 Station B
Vanderbilt University
Nashville, TN 37235

3) PI at this institution:
Prof. Charles F. Maguire
Department of Physics and Astronomy
Box 1807 Station B
Vanderbilt University
Nashville, TN 37235
Telephone: (615)
charles.f.maguire@vanderbilt.edu

4) Funding Opportunity: DE-PS02-09ER

5) DOE/Office of Science Program Office: Nuclear Physics, Heavy Ion

6) DOE/Office of Science Program Contact: Dr. Gulshan Rai

7) Collaborating Institution: Massachusetts Institute of Technology

8) Principal Investigator at Collaborating Institution: Prof. Bolek Wyslouch

Computing Resources Proposal Update for U.S. CMS-HI Research

E. Appelt 1, M. Binkley 1, B. Brown 1, K. Buterbaugh 1, K. Dow 2, D. Engh 1, M. Gelinas 2, M. Goncharov 2, S. Greene 1, M. Issah 1, C. Johnson 1, C. Kennedy 1, W. Li 2, S. de Lesdesma 1, C. Maguire 1, J. Mora 1, G. Roland 2, P. Sheldon 1, G. Stephans 2, A. Tackett 1, J. Velkovska 1, E. Wenger 2, B. Wyslouch 2, and Y. Yilmaz 2

December 20, 2009

1 Vanderbilt University, Department of Physics and Astronomy, Nashville, TN
2 Massachusetts Institute of Technology, Laboratory for Nuclear Science, Cambridge, MA

Abstract

An updated proposal for meeting the computing needs of the CMS-HI research program in the U.S. is described. This update addresses all the major comments deriving from the May 2009 review of the first proposal for CMS-HI computing presented to the US DOE-NP. The present proposal envisions a heavy ion computing structure which is tightly integrated into the main CMS computing structure and includes provision for important contributions to heavy ion computing from the CMS T0, the FNAL T1, and several national T2 centers. These contributions will significantly assist the work of the two major HI T2 centers in the US, which are to be sited, as previously proposed, at Vanderbilt University and at MIT-Bates.

1 Executive Summary

Since May 2009 the members of the Heavy Ion research program in CMS have mounted an extensive, multi-faceted effort to improve the HI computing proposal from what was first presented for review to the U.S. DOE-NP. This effort has proceeded along four distinct tracks: 1) a more sophisticated, data-driven analysis of the annual computing requirements using the latest version of the CMS software framework, 2) a series of discussions with CMS computing management on how to organize HI computing to make the best use of existing CMS computing resources and infrastructure, 3) a thorough canvass of overseas CMS T2 sites to secure several new commitments of resources dedicated to the analyses of heavy ion data, and 4) the development of a plan of access to the two major HI compute centers in the US, at Vanderbilt and MIT, by the other CMS-HI-US institutions.

A concrete result of this effort has been the formulation of a HI data model document which has received the endorsement of CMS computing management. This data model, a copy of which is included as an Appendix to this proposal update, provides a HI computing structure which is much more strongly integrated into the rest of CMS computing than had been the case previously. The HI computing plan being proposed here will first take full and complete advantage of the computing resources available to CMS at the CERN T0, in the same manner as does the pp program. The transfer to the US of the HI raw data and prompt reconstruction output from the T0 will proceed as a seamless addition to the existing work-flow for the transfer of the comparable pp data.

As was previously proposed, there are planned to be dedicated HI computing centers at Vanderbilt and at MIT-Bates. These two sites will have missions for the HI program which are formally recognized by the WLCG organization. The MIT-Bates site will continue to function as a standard T2 center, while the Vanderbilt site will be an enhanced T2 center in CMS. The enhanced functionalities at Vanderbilt will be 1) the receipt and reconstruction of raw data files, and 2) the export of some reconstruction output to certain other CMS T2 centers in Europe and South America. These T2 centers have active HI groups and have committed to receiving reconstruction output from the Vanderbilt HI center for analysis purposes. The Vanderbilt center will also perform standard T2 analysis tasks.

Last, but not least, the Vanderbilt and MIT-Bates sites will initially include T3 components for which accounts can be given to all CMS-HI participants. These T3 components will provide immediate access to data reconstruction and simulation production, in the same manner that the MIT-Bates center has been doing for CMS-HI participants during the past few years. However, for the long term the recommendation of CMS computing management is to have analysis operations in the HI group that are identical to those of the pp users. Hence, the remote site users in CMS-HI should eventually access the Vanderbilt and MIT-Bates T2 centers from their home institutions using the standard CRAB job submission and file retrieval tools. The CMS-HI personnel at Vanderbilt and MIT-Bates will assist the other CMS-HI members in learning how to use these tools.
Since it is an important aspect of the user analysis discussion, this proposal also describes the hardware resources needed at all the CMS-HI-US institutions so that their members may have effective access to all HI T2 computing centers in CMS.

2 Responses To The Issues From The Review Report

A proposal for heavy ion computing was presented to the U.S. DOE-NP in May 2009, and a summary of the reviewer comments was delivered in September 2009. While this first review recognized the need for a substantial heavy ion computing facility in the U.S., a number of questions were raised at the review concerning the justifications and operations of the proposed facility. The DOE-NP requested that these questions be addressed in an updated proposal due at the end of calendar year 2009. This modified proposal contains the requested update, and is centered around addressing these major review report questions. To make for a more coherent response, the various issues have been grouped together along related topics. The issue of cost is addressed in a separate section of this proposal update.

2.1 Resource Requirements for HI Computing

Related to the resource requirements for HI computing in CMS, the report from the May 11 review asked that the following three related issues be addressed:

Justification of the formal performance requirements of the proposed CMS HI computing center(s) that integrates the CERN Tier-0 obligations, the service level requirements including end users, and the resources of non-DOE participants and other countries.

The analysis and storage model articulated in the CMS HI Collaboration proposal differs from the one described in the CMS Computing Technical Design Report (TDR). In the TDR, heavy ion data would be partially processed at the CERN Tier-0 center during the heavy ion run.... (Remaining text omitted as no longer applicable)

The US CMS HI computing resources should be driven by technical specifications that are independent of specific hardware choices. Well-established performance metrics should be used to quantify the needs in a way that can be mapped to any processor technology the market may be offering over the course of the next few years. Expected improvements of the CPU processor cost/performance ratio suggest that the requested budget for CPU hardware could be high.

Compared with the proposal originally presented to the DOE in May 2009, the current CMS-HI computing plan integrates much more of the already existing CMS T0/T1/T2 computing resource base. Stated succinctly, the model for HI computing resembles the model for pp computing far more than before, and thus the HI computing sector will be more robust for current operations and for future evolution with the rest of CMS computing. These steps forward have been made possible by a more intense study of the HI computing requirements vis-a-vis the present availability of resources in CMS.

First and foremost, the HI raw data acquired during the annual one month of Pb+Pb collisions at the LHC will be completely reconstructed at the T0 soon after it is acquired, the so-called prompt reco pass. The annual amount of T0 time needed for the prompt reconstruction of the HI raw data is detailed in Table 2 of the data model document contained in the Appendix to this proposal update. In each of the running years covered by Table 2, the number of days needed for prompt reconstruction at the T0 for the entire data set is less than the one month of running time. Therefore, the HI raw data

can be promptly reconstructed, and the production results transferred out of the T0, using the same work-flows (and personnel) as the pp raw data. This conclusion that the T0 could equally well serve the needs of the HI program was reached after a careful study of the computing reconstruction requirement, both for the CPU time and the memory footprint. The memory size, after a considerable advance in the software itself, is known to be less than 2 GBytes/central event, and is thus compatible with the 2 GByte/core size of the T0 compute nodes.

The prompt reconstruction pass at the T0 will occur as soon as the Alignment and Calibration analysis pass has completed for each run. The prompt Alignment and Calibration work-flows for the HI data have been studied and found to be essentially identical to those for pp running. The on-line Data Quality Monitoring (DQM) requirements for the HI data have also been examined, and these too have been determined to be largely similar to those for pp running. In a few cases, DQM components for the pp running can be ignored (e.g. missing jet energy), while in other cases (e.g. reaction plane and centrality class) new components are being added for the HI events. The net result is that we expect a smooth transition between T0 processing for the pp events and T0 processing for the HI events. The more identical the work-flows can be made for the HI events, the smoother this transition will be. Naturally, the set of people at the T0 who will have gained expertise during the prior months of pp running will remain engaged during the one month of HI running. In fact, a subset of these people on shift during the pp running will be from the CMS-HI group, assuring that the transition to HI data processing will be as seamless as possible.

After prompt reconstruction, the raw data files and the prompt reconstruction output will flow out of the T0 to the T1 site at FNAL. This feature of the HI data model is a major change from the original proposal to the DOE, for which the raw data were planned to be transferred directly to the new computer center at Vanderbilt University. Further discussion with CMS computing management during the last six months has led us to the consensus that it is far more sensible to transfer the HI raw data and prompt reco streams from the T0 site to the FNAL T1 center for archiving to tape. Then these files, while still on disk at FNAL, would be transferred to disk storage at Vanderbilt. The obvious advantage to this plan is that it will make the transfer of HI files from the T0 look largely the same as the transfer of the pp files out of the T0, from the perspective of the work-flows at the T0. Moreover, since there are already existing high speed (10 Gbps) links between FNAL and the Vanderbilt site, there should not be any difficulty in the copying of the files from FNAL to Vanderbilt. Network commissioning tests between FNAL and Vanderbilt have already begun, and example results will be shown in Section 2.4 of this update proposal. This plan for the HI raw data and prompt reco production transfer presupposes that there will be a substantial (~0.5 PB) disk storage capacity in place at Vanderbilt as early as November 2010 in order to receive and store the files.
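As a rough, illustrative cross-check of the adequacy of the existing 10 Gbps FNAL-to-Vanderbilt path, the short Python sketch below estimates how long a copy of the ~250 TB of suppressed first-year raw data would take; the sustained link utilization fractions are assumptions made here for illustration only, not measured values.

```python
# Illustrative transfer-time estimate for copying the suppressed 2010 raw data
# (~250 TB, see the Appendix) from FNAL disk to Vanderbilt disk over a 10 Gbps
# path. The utilization fractions are assumptions, not measurements.

def transfer_days(volume_tb, link_gbps, utilization):
    """Days needed to move volume_tb terabytes over a link_gbps link
    sustaining the given average utilization."""
    bits = volume_tb * 1e12 * 8                    # terabytes -> bits
    seconds = bits / (link_gbps * 1e9 * utilization)
    return seconds / 86400.0

if __name__ == "__main__":
    for util in (1.0, 0.5, 0.25):
        days = transfer_days(250.0, 10.0, util)
        print(f"250 TB at 10 Gbps, {util:.0%} sustained utilization: {days:.1f} days")
```

Even at a conservative 25% sustained utilization the copy would complete in well under two weeks, consistent with the statement above that the existing links are expected to be sufficient.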
The model of pp data operations for the CMS T1 centers assumes that the raw data files will remain in place on disk for the subsequent reconstruction re-passes, two re-passes per year once steady state running has been attained. The reconstruction re-pass plan for the HI raw data will be the same as for the pp data, to match the expected evolution of the reconstruction software releases on a twice yearly basis. In the original CMS Computing TDR there was a mention of HI computing being done at the T0 after the HI running period and during the several months when the LHC was inactive. While it will certainly be possible for the prompt reco pass to extend for some days or even weeks after the month of HI running has concluded, the CMS computing management has decided that it is not feasible to have a model where the HI data are re-read from tape for reconstruction re-passes of the raw data. The infrastructure at the T0 does not exist for re-reads of raw data from tape, nor does the infrastructure

exist at the T0 for the export of production after the prompt reco pass. Developing this infrastructure solely for the HI data would be a gross expansion of scope for the T0 personnel and cannot be justified. The current model of HI computing in CMS plans that the reconstruction re-passes will be done at the Vanderbilt site, mirroring how such reconstruction re-passes are planned for the pp data at the various T1 sites in CMS.

The size of the Vanderbilt site, in terms of HS06-sec per year, has been determined on a data-driven basis according to fulfilling the CMS quota of reconstruction re-passes. This facility size determination is detailed in depth in Section C of the Appendix and is quantified in HS06-sec units, where HS06 is the LHC equivalent of the obsolete SpecInt2000 unit of compute power. The annual growth of the Vanderbilt compute center is driven by the amount and type of data being acquired each year, and by the dual requirement that it fulfill both a raw data reconstruction role and a T2 data analysis role. Part of the T2 data analysis burden for CMS-HI will be shared by certain overseas T2 sites in CMS (Russia, France, Turkey, and Brazil), and by the new MIT-Bates HI compute center. The Vanderbilt site will spend approximately half of its yearly integrated compute power in the raw data reconstruction mode, and half in the T2 data analysis mode. In that respect, the Vanderbilt T2 center will be unique in CMS since no other T2 site will be doing data reconstruction or exporting reconstruction output to other T2 sites. This singular departure from the CMS computing model for the pp program has been approved by CMS computing management and is awaiting formal approval by the WLCG.

2.2 Quality of Service Understandings with the WLCG

Two of the issues raised in the review concerned the relation of the proposed HI computing centers with the WLCG:

The relationship between DOE NP supported grid resources (VU and MIT) for heavy ion research and the grid resources available to the larger CMS HEP collaboration needs to be clarified. Also the formal arrangements with respect to NP pledged resources to the CERN Worldwide LHC Computing Grid (WLCG) need to be defined.

The Tier-1 quality-of-service (QoS) requirements were specifically developed for the HEP LHC experiments, but they might be relaxed for the CMS HI effort in view of its spooled approach to the grid. CMS HI and the WLCG are encouraged to examine the possibility of tailoring the Tier-1 requirements to reflect the needs of the U.S. heavy ion research community.

These two issues regarding the support of grid services by the DOE NP and the relationship to the WLCG are much simplified in this updated computing proposal by virtue of the changed plans described in the previous subsection. The new plan is to have the HI raw data and prompt reco production output transferred first from the T0 to the FNAL T1 center for archiving to tape there. The T0 and FNAL T1 data operations groups will be in charge of this work-flow, treating it as a one-month extension of their responsibility for transferring pp files to the FNAL T1. There will be some cost associated with the extra tape archiving at FNAL, but such costs would have occurred for tape archiving at the Vanderbilt center in the original plan. The cost for the HI tape archiving at FNAL is presented in Section 3.
While these data files are still present on disk storage at the FNAL T1, they will be copied down to the Vanderbilt site. The high speed links from FNAL to Vanderbilt already exist, and there is not expected to be any charge to the DOE-NP for this stage of the transfer. In the same manner, there are

already high speed links between the MIT-Bates T2 center and FNAL for the transfer of simulation or analysis HI production to be archived. Network tests between MIT-Bates and Vanderbilt have already begun, and a high throughput has already been measured in both directions.

Discussions are beginning with the WLCG organization on these roles for the Vanderbilt center and the HI center at MIT-Bates. With the support of CMS computing management, as well as the national CMS leadership in the US, we expect that the appropriate MoUs defining the responsibilities of the Vanderbilt and MIT-Bates sites for the HI program will be completed by mid-2010. The Vanderbilt site will also have a related MoU with the FNAL T1 site, as does every other US T2 site, defining their mutual responsibilities. This MoU should be completed in parallel with the MoU to the WLCG.

2.3 HI Computing Center Operations

In the area of the HI computing center operations, these four issues were brought up in the review report:

The management, interaction and coordination model between the Tier-2 center(s) and Tier-3 clients is not well formulated. It will be important to document that user institutions will have sufficient resources to access the VU computing center and how the use of the facility by multiple U.S. and international clients will be managed.

A detailed plan for external oversight of the responsiveness and quality of operation of the computing center(s) should be developed.

Draft Memoranda of Understanding (MoUs) should be prepared between the appropriate parties, separately for the VU and the Bates Computing Facilities, that clearly define management, operations, and service level responsibilities.

The size of the workforce associated with data transport, production, production re-passes, calibration and Monte Carlo simulation efforts, and the general challenges of running in a grid environment should be carefully examined and documented.

For doing analyses of the HI production results at the T2 centers, the users in CMS-HI will follow the same model as the users in the pp program. Namely, CMS-HI users will group themselves at T3 centers which will submit analysis jobs to the T2 centers using the standard CMS analysis tools. It would be wasteful to try to design and operate two different modes of user analysis in CMS. This model of HI analysis has already been proved to be successful at the MIT T2 center. All the CMS-HI users, both in the US and abroad, have been given accounts at MIT on a set of gateway nodes. These nodes support the standard CMS job submission and analysis tools, and have sufficient disk space for user output. As such, this collection of gateways has been functioning as the HI T3 center in the US. From those accounts, the CMS-HI users can submit jobs to process HI production files resident at the MIT T2, or at any other T2 in CMS which has the needed HI production files present.

This same mode of user analysis operation exists throughout CMS. A given institution has the necessary gateways with the CMSSW software infrastructure installed, and with sufficient local disk resources for user output, to qualify as a functioning T3 site in CMS. Such is the case already for the Vanderbilt site, having been developed by the HEP-CMS group in that institution's Physics Department. In the past month, we have put in place at Vanderbilt the accounting structures to allow logins by selected

CMS-HI users from the institutions MIT, UIC, UMD, and Kansas. Documentation for running CMSSW software at Vanderbilt, and for submitting jobs, has been developed by the local RHI group. The plan is to have this selected set of users commission the T3 running at Vanderbilt in early 2010, to have the same functionality as the T3 now operating at MIT-Bates. Then, in mid-2010, the Vanderbilt T3 site will become open to all CMS-HI users. In this manner, all the CMS-HI users who need to do so will be trained rapidly to take advantage of the standard CMSSW infrastructure tools. At that point, and with the availability of sufficient disk resources at their home institutions, the users can establish local T3 sites to communicate with the HI T2 centers at Vanderbilt and MIT-Bates. The CMS computing management strongly encourages us in the CMS-HI group to follow this mode of user analysis.

We should document the current HI T3 capability at MIT-Bates (some of this could already be in Section 2.5): 1) how many users have accounts, 2) what is the file space available to the users, 3) what is the gateway configuration into the T2. From these numbers we can propose a list of resources which should be available at every other CMS-HI-US institution to qualify as a T3.

ADDITIONAL PLANNED RESPONSE: For the workforce, we need to include an organizational box-chart for the various on-line, off-line and simulation computing responsibilities. We can stress that each of these HI boxes is integrated into the equivalent pp boxes, thus leveraging our own limited manpower. Examples of this leveraging would be the HI raw data transport from the T0, and the use of the central simulations operations team at CERN for HI production management.

2.4 ACCRE Investment and Infrastructure

One issue of concern in the review report related to the status of the ACCRE computing facility at Vanderbilt and its technical choices:

ACCRE should articulate its plans and facility investments needed to support the development of a custom infrastructure suited to the needs of the US CMS HI collaboration and conversely, to what extent the US CMS HI collaboration will need to adapt to the existing ACCRE infrastructure (e.g. the use of L-Store at Vanderbilt, when other CMS computing centers use dCache).

PLANNED RESPONSE: The ACCRE people have told me that they will respond directly to these questions.

2.5 Simulation Compute Center Operations at MIT-Bates

The review report requested further elaboration on the plan for the separate HI computing center at MIT-Bates:

The CMS HI proposal provided insufficient information to allow the panel to make a clear recommendation regarding the one- and two-site computing center options. A case should be made beyond stating that a two center solution might result in more access to opportunistic compute cycles, and that the simulation expertise existing in the MIT-Bates CMS HI group should not be lost. Based on (near) cost equality of the two solutions, there was no strong argument for the

one-site solution. As the lead institution for the CMS HI simulation effort, it might be appropriate for MIT-Bates to divide responsibilities between two sites provided a well-defined, fully integrated computing plan is presented.

The second CMS-HI compute center at MIT-Bates will be responsible for the following: 1) generating and analyzing simulated data for heavy ion events, and 2) functioning as an additional analysis center for the CMS-HI Physics Analysis Groups.

Under the management of the Laboratory for Nuclear Science (LNS), MIT has constructed a new data center at the Bates Linear Accelerator Center. This data center has sufficient rack space, power, cooling, and network capacity to support the full proposed MIT component of the CMS-HI computing effort as well as the fully installed CMS Tier2 system and computing for several other MIT groups. The CMS-HI component will consist of 680 new CPUs (cores) and 100 TByte of disk capacity that will be acquired over a five-year period. The center has been functioning as a heavy-ion computing facility for the CMS HI program for the past few years; in fact, the very first CMS computers were purchased from funds supplied by the DOE Nuclear Physics office. The heavy-ion physics part was always tightly integrated into the overall functioning of the center. The simulation and simulated data analysis are done routinely at MIT. The configuration of disk space, CPU and network is optimized for typical usage of a CMS Tier-2 center: simulations and data analysis, with tools that allow quick shipment of simulated data to other CMS HI centers like Vanderbilt or to FNAL for storage on tape. MIT will be used by the central CMS data operations group for simulations, and it will be very easy to add HI specific tasks to the list of jobs running at MIT.

The organization of the joint HE-HI facility is overseen by Profs. Boleslaw Wyslouch and Christoph Paus. Maxim Goncharov is the overall manager. Wei Li of the MIT heavy ion group provides part-time support for the grid operations and he is in charge of the overall simulations for the CMS HI program. The long-term plan is for the HI and HE groups to support one post-doctoral associate each who will spend roughly half of their time on computer facility management. In addition, on average one to three graduate students from each of the groups will devote a fraction of their time to support the facility operations. The Computer Services Group of LNS will provide important system management expertise and personnel (on average about 0.25 FTE). The CMS HI funds will support a system manager at the 0.25 FTE level to assist with daily operational tasks. Connection to external networks as well as management and maintenance of the network connections within MIT are provided by MIT Information Services and Technology (IST).

In addition to continuing its ongoing role in generating and analyzing simulated events, the MIT facility will serve a critical role analyzing CMS HI data, most especially in the early period. The joint HI-Tier2 facility is now, and will continue to be, a fully functional CMS analysis center with all necessary installed software and other services. Therefore, the existing and all future installed CPUs will be ready immediately to perform the full suite of CMS functions with minimal effort. Depending on the early luminosity of the HE data from CMS, some fraction of the Tier2 CPU power may be available as well for opportunistic use for HI analysis.
Combined with the existing CPU power devoted to HI at MIT and leveraging the cooperative arrangement with the Tier2 system, the MIT component of the full CMS HI computing effort can provide an important resource for data analysis as the Vanderbilt facility is installed and commissioned.

Since the disk space at MIT is distributed among many computers, the allocation of disk space will be very straightforward. The disk space is already divided between heavy-ion pools and T2 pools. The CPU usage prioritization and monitoring will be done using the Condor system. We will share CPU according to the fraction of hardware and manpower investment. Condor allows reservation of a minimal number of slots and use of all the farm resources during periods of reduced activity. There is also a possibility of time-sharing: using the full power of the center for a period of time. This may be important in the early days of running, when quick turnaround of data analysis will be essential.

Analysis of the simulated and reconstructed data can be done at MIT in a very straightforward fashion. The part of the distributed disk space located on the HI computers will be divided into simulation and analysis areas. Analysis and simulation jobs running on any of the center's CPUs will be able to independently access their respective areas. There will be one server dedicated to simple interactive activities by the users. The HI users will be able to run short interactive jobs on this dedicated machine. In addition they will be able to submit Condor jobs directly and, preferably, use the CMS CRAB job submission system. All of these facilities are available now for use by CMS HI collaborators. The division of resources between T2-like simulation activities and T3-like analysis activities will be more logical than physical. MIT already supports a large user base from among the US CMS HI collaborators. There is extensive know-how within the MIT group on how to do analysis. The system is designed to smoothly and quickly accept new computing resources and continue to service the CMS HI program.

3 Proposal Budget

PLANNED RESPONSE: I have a preliminary new budget from Carie Kennedy at ACCRE, but I have to press Lothar to get us the new FNAL T1 tape budget.

4 Summary

PLANNED RESPONSE: Write after the other sections have been completed.

Appendix: Data Model for the Heavy Ion Research Program in CMS

A Introduction

This appendix contains the data model for the heavy ion research program in CMS. This data model, which has been approved by CMS computing management, describes the computing resource base for the heavy ion program in terms of the standard CMS computing structure of T0, T1, and T2 data processing. The computing resource (CPU power, disk, and tape) requirements for the HI program are determined on a data-driven basis for the years covered by this proposal. These requirements are then compared to the projected availability of computing resources, including a major contribution funded by the U.S. DOE-NP.

The details of the HI physics program in CMS have been described previously to the collaboration at a number of venues. The major physics goals of the HI research program were initially documented in the March 2007 High Density QCD with Heavy Ions addendum to the CMS physics technical design report [1]. Subsequently, the specific components of the HI physics program were reviewed internally by CMS on May 27-28 [2]. A workshop on HI computing in CMS took place on September 12 this year during the Bologna CMS Week [3]. A follow-up set of HI computing presentations was made during the October 2009 Offline and Computing Week [4]. Similarly, a workshop on the detector readiness for HI beams was held on October 14 [5]. The general conclusion from these reviews and workshops is that the HI group is well-poised to begin collecting data when the Pb+Pb collisions begin in November 2010, although some important steps remain to be taken. In particular, one of the steps to be achieved is the completion of the computing resource base for the processing of HI data. This resource base will depend critically on the approval of a new HI computing center in the United States, which is being proposed to be sited at the existing large scale computer facility ACCRE [6] on the campus of Vanderbilt University. In turn, that proposed HI computing center is being scaled according to the details of this data model document.

Footnotes:
[1] CERN/LHCC, CMS TDR 8.2 Add 1
[2] indico.cern.ch/conferenceDisplay.py?confId=59574 (Is there an official review report available?)
[3] indico.cern.ch/conferenceDisplay.py?confId= (Bologna CMS Week HI computing workshop)
[4] indico.cern.ch/conferenceDisplay.py?confId= (October 2009 Offline and Computing Week)
[5] indico.cern.ch/conferenceDisplay.py?confId= (detector readiness workshop)
[6] Advanced Computing Center for Research and Education

The broad outline of HI data processing in CMS can be summarized as follows:

1) The raw data are first processed at the T0, including the standard work-flows there for Alignment, Calibration and Data Quality Monitoring, followed by a prompt reconstruction pass. The raw data, AlCa, and Reco output files are archived at the T0, and these files are all to be sent to the US T1 site at FNAL for secondary archiving to tape.

2) While still on disk storage, the data files at FNAL are subscribed to by a new T2 site for HI to be located at Vanderbilt University and which is proposed to be funded by the US DOE-NP. Unlike

other T2 sites in CMS, this HI T2 site will conduct raw data reconstruction passes in addition to supporting the usual T2 functions for data analysis and simulations.

3) The T2 site at Vanderbilt will export some reconstruction output to other T2 sites, e.g. Russia, France, Brazil, Turkey, and MIT, which have groups who have expressed interest in carrying out heavy ion physics analyses. The T2 status of the proposed new site at Vanderbilt University is now under negotiation with the WLCG.

4) The T2 site at MIT-Bates will be provided with a significant expansion of CPU and disk resources pledged to HI simulations in CMS. HI simulation production will be directed by the central data operations group in CMS.

5) Production files from the various HI T2 sites will be archived to tape at the FNAL T1 site in the same manner as files produced from the processing of pp data.

The numerical estimates for the different data flow stages in this model are developed in the following sections of this document.

B Luminosity, Event Number, and Raw Data File Size Projections

Because of intrinsic physics constraints, the luminosity growth for heavy ion beams is not forecast to be as great as that for proton beams. The present projections for luminosity growth, event numbers, and raw data volumes are shown in Table 1. At present it is being assumed that there will not be any LHC running during calendar 2011.

Table 1: Projected luminosity and up-time profile for the CMS heavy ion runs
(columns: Year; Ave. Luminosity [cm^-2 s^-1]; uptime [s]; Events recorded; Raw data [TByte]; with one row for the 2010 minimum bias run and one row for each of the subsequent HLT runs)

There is a large amount of uncertainty regarding the first year running conditions for heavy ions. This uncertainty applies to the actual beam luminosity, the live time, and the event sizes themselves. The event size uncertainty in turn derives from both a basic lack of physics knowledge about the event multiplicity and a technical uncertainty about the zero suppression. For resource planning purposes the HI data model assumes that the first year event number and raw data volume amounts shown in Table 1 are a factor of 3 greater than their most probable values. These uncertainties will be resolved once experience is gained with the first year's data.

The first year running will be in minimum bias mode. The High Level Trigger (HLT) will be used in a tag-and-pass mode such that the proposed trigger algorithms can be evaluated and optimized. The raw data will be initially unsuppressed and will be perhaps as large as 1 PByte before suppression down to about 250 TBytes. It would be that post-suppression 250 TByte quantity of raw data which would be exported to the secondary tape archive at the FNAL T1.
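A minimal worked-arithmetic sketch (Python) of the two first-year planning factors quoted above; the 1 PByte and 250 TByte volumes and the factor-of-3 contingency are the figures stated in this section, and the derived numbers are rough illustrations only.

```python
# Rough arithmetic behind the first-year planning figures quoted above.
# The ~1 PByte unsuppressed volume, the ~250 TByte suppressed volume, and the
# factor-of-3 planning contingency are taken from the text; everything derived
# here is an illustration, not an additional planning number.

unsuppressed_tb = 1000.0   # "perhaps as large as 1 PByte" before zero suppression
suppressed_tb = 250.0      # volume exported to the secondary tape archive at FNAL
contingency = 3.0          # planning inflation over the most probable values

print(f"Implied zero-suppression factor: ~{unsuppressed_tb / suppressed_tb:.0f}x")
print(f"Most probable exported volume if the factor-of-3 contingency applies "
      f"to the 250 TByte figure: ~{suppressed_tb / contingency:.0f} TB")
```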

On account of the expected luminosity and live-time growth after 2010, all subsequent HI running requires the use of the HLT. It is also anticipated that these later runs will have an effective zero suppression scheme implemented, with the net result that both the event number and the raw data volume in 2012 will be less than what these were in the minimum bias running conditions of 2010. The actual physics content of the raw data on average, however, will be more complex by virtue of the work done by the HLT. By the time of the 2013 run the HI data model assumes that the running conditions will have reached nominal-year, steady-state values.

C Raw Data Reconstruction Computing Requirement

The raw data reconstruction computing effort for the HI program is tabulated in Table 2. The estimates for the required computing power, in terms of HS06-sec, were developed from simulated events reconstructed using CMSSW in November 2009. The estimates include an 80% net efficiency factor for CPU utilization. Based on these estimates, the first year's data acquired in 2010 will require approximately 4 days of processing at the T0 center, not including the time for zero suppression of the original 1 PByte (or less) of the minimum bias raw data files. Even assuming another 50% computing load for the zero suppression task, the total processing time at the T0 for the 2010 data set is calculated to be about one week.

After the raw data files are processed at the T0, they will be transported to the T1 tape archive center at FNAL, along with the production output. The data volumes for this stage of the data model will be discussed in Section 7 of this document. These files, while still resident on disk at FNAL, will be subscribed to by the new HI center at Vanderbilt. This center is proposed to have the following annual compute power totals in HS06 units: 3200, 5800, 15900, for the years 2010, 2012, 2013, and 2014 respectively. With this amount of compute power available annually, each reco re-pass of the HI data can be accomplished in less than 12 weeks, as shown in the last column of Table 2. After having met the CMS standard of two reco re-passes per year, and having done this within a 24 week or less span, the Vanderbilt HI center will be able to spend at least six months of the year as a T2 center for analysis and/or simulation production. The HI T2 center operations are discussed in Section 4.

The 2011 year is a special case, when provision is being made for 4 reco re-passes. It is anticipated that there will be a significant learning curve to optimize the reconstruction and analyses of the first set of raw data. The model is to take advantage of the down-time of the LHC in 2011 in order to perfect the software using multiple passes over the data. Since the Vanderbilt center is proposed to almost double in size in 2011, it can still meet its goal of spending six months of the year performing T2 center operations while being able to complete 4 reco re-passes in that one year.

D Data Analysis Computing Requirement

The required computing load for the data analysis stage is not as straightforward to estimate as is the raw data reconstruction requirement. The HI experience at the RHIC Computing Facility has been that the integrated computing power devoted to user analysis has been comparable in recent years to the integrated power devoted to raw data reconstruction.
Whether that correlation will also be true at the LHC is open to question, but it is a conservative assumption for the HI data model in CMS for purposes of evaluating the proposed T2 resource base.
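To make the HS06 bookkeeping behind Tables 2 and 3 concrete, the short Python sketch below shows how a re-pass duration and an integrated annual capacity follow from a farm size in HS06 units together with the 80% efficiency factor quoted in Section C; the example re-pass load is a placeholder value chosen for illustration, not an entry taken from Table 2.

```python
# A minimal sketch of the capacity arithmetic used in Sections C and D.
# The HS06 capacities (3200, 5800, 15900) and the 80% net CPU efficiency are the
# figures quoted in the text; the example re-pass load is a placeholder.

SECONDS_PER_DAY = 86400
SECONDS_PER_YEAR = 3.15e7
EFFICIENCY = 0.8          # net CPU utilization assumed in the text

def repass_days(load_hs06_sec, capacity_hs06, efficiency=EFFICIENCY):
    """Days needed for one reconstruction re-pass of a data set whose total
    compute load is load_hs06_sec, on a farm of capacity_hs06 HS06 units."""
    return load_hs06_sec / (capacity_hs06 * efficiency * SECONDS_PER_DAY)

def annual_capacity_hs06_sec(capacity_hs06, fraction_of_year=1.0,
                             efficiency=EFFICIENCY):
    """Integrated compute power delivered over one year (HS06-sec)."""
    return capacity_hs06 * efficiency * fraction_of_year * SECONDS_PER_YEAR

if __name__ == "__main__":
    example_load = 1.5e10    # placeholder re-pass load in HS06-sec, for illustration
    for cap in (3200, 5800, 15900):   # Vanderbilt capacities quoted in Section C
        print(f"{cap:6d} HS06: one re-pass of {example_load:.1e} HS06-sec takes "
              f"{repass_days(example_load, cap):6.1f} days; half a year of T2 "
              f"running delivers {annual_capacity_hs06_sec(cap, 0.5):.2e} HS06-sec")
```

The same two functions can be used to check the statements that each re-pass fits within 12 weeks and that roughly half a year of Vanderbilt capacity remains for T2 analysis and simulation work.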

Table 2: Projected Raw Data Reconstruction Computing Effort
(columns: Year; Trigger; Events [10^6]; Compute Load [10^10 HS06-sec]; T0 Reco [days]; Re-passes; Re-pass Time [days/re-pass]; with rows for the 2010 minimum bias run, the 2011 no-beam reprocessing year, and the 2012-2014 HLT runs)

Based on this assumption, the projected computing load for data analysis is shown in Table 3. For this table the number of analysis passes each year is the initial analysis pass of the prompt reco production from the T0 plus the number of reco re-passes shown in Table 2.

Table 3: Projected Data Analysis Computing Load
(columns: Year; Compute Load Per Analysis Pass [10^10 HS06-sec]; Number of Analysis Passes; Total Compute Load [10^10 HS06-sec])

E Simulation Production Computing Requirement

The simulation production model for the HI physics program differs significantly from that of the pp program. The HI program does not follow a golden event strategy wherein one is looking for special event topologies such as the decay of the Higgs boson. Instead, the HI physics strategy is largely one of statistical analysis. One chooses a specific physics signal, for example jet suppression or elliptic flow as a function of particle transverse momentum. These signals are examined typically as a function of the event's centrality (impact parameter) class, which is highly relevant in HI collisions but not so in pp collisions. The signals may also be analyzed in pp collisions. To the extent that particular signals are prominent in central HI collisions but not in peripheral collisions nor in appropriately scaled pp collisions, one takes this as evidence of an effect of the dense state of matter created in a central HI collision. In such a physics program individual events are not particularly unique, but the mean behavior of a collection of a certain event class carries the physics content. In this respect, simulation production in HI physics is aimed at quantifying the geometric acceptances and efficiencies of the detector for the given classes of events. Signal detection efficiencies can be a strong function of the event centrality class, being lower in central HI collisions as compared with peripheral collisions or with pp collisions.

Simulations are also aimed at quantifying systematic false signals or backgrounds which may intrude, such as combinatoric backgrounds in the detection of particle pairs from resonance decays. The net result of these considerations is that simulation production for HI physics, as has been carried out in the past for the CERN SPS and the RHIC programs, involves the generation of a small fraction (10-20%) of the real events, instead of the generation of a comparable number of events as might be the case in the HEP physics program. However, because of the innate complexity of the HI events, the simulation time for these events is much longer than the reconstruction time. As was the case for the reconstruction requirement, the simulation production requirement is derived from estimates based on the use of CMSSW. The annual integrated simulation power estimates are shown in Table 4. The Compute Load column includes both the simulated event generation time and the far smaller reconstruction time. The number of simulated events is taken to be 20% of the number of real events in a given year.

Table 4: Projected Simulation Computing Load
(columns: Year; Event Type; Number of Simulated Events [10^6]; Total Compute Load [10^10 HS06-sec]; with minimum bias simulation rows for 2010 and 2011 and central-event simulation rows for 2012-2014)

F Integrated Computing Requirement for the T2 Resource Base

The previous sections of this data model document have provided the separate computing requirements for raw data reconstruction, data analysis, and simulation production. The raw data reconstruction re-passes will be carried out only at the HI computing center proposed at Vanderbilt, according to the time durations indicated in Table 2. The data analysis and simulation production will be performed at the available T2 resource base. For the HI program in CMS the T2 resource base will consist of the following components:

1) Pledges from existing T2 centers including Russia, France, Brazil, and Turkey. These resources are conservatively placed at 3000 HS06 units for the next five years, although there may eventually be twice that much available. The disk storage base at these T2 centers is estimated at 300 TB in 2010, growing to as much as 600 TB after five years.

2) The upgrade of the MIT-Bates T2 for HI use. This upgrade is projected as 6000 HS06 units, in increments of 1500 HS06 per year. The disk storage space accompanying this upgrade is 134 TB in 2010, growing to 360 TB.

3) The proposed resources at the Vanderbilt center, with HS06 numbers as enumerated in Section 3, when those resources are not being used for raw data reconstruction. From the numbers shown

in Table 2, the Vanderbilt center will be occupied approximately 50% of the year with raw data reconstruction, leaving the remaining half-year available for T2 activity. The annual disk storage space needs of the Vanderbilt center are developed in Section 7.

The integrated computing requirements for data analysis and simulation production are presented in the second column of Table 5. The T2 resource base projected to be available each year is given in column three. The ratio of availability to need is listed in column four (a sketch of the arithmetic behind this comparison is given below). In the first two years, there is an apparent comfortable safety margin with 70-80% excess capacity. In fact, one should plan on excess capacity in these first two years since this time will encompass the major learning curve for the HI program in dealing with real data. It would be unwise to assume that all will go smoothly in these first two years, leading to a high duty cycle for computing resource utilization. In particular, this HI T2 resource base is coming together literally for the first time in 2010. In 2012, the safety margin drops to 20%, and then to just 10% when nominal beam running is achieved in 2013. For these outer years the hope is that the non-US T2 resource contribution will grow effectively above 3000 HS06. For example, growth to 6000 HS06 units in 2013 would mean that the available T2 resource base would have an additional integrated power, in HS06-sec, corresponding to 15% of the estimated need.

Table 5: Integrated T2 Computing Load Compared to Available Resources
(columns: Year; Analysis + Simulation Need [10^10 HS06-sec]; Available T2 Base [10^10 HS06-sec]; Ratio [Available/Need, %])

G Disk Storage and Tape Requirements for Raw Data Processing

As mentioned in the introduction section, the FNAL T1 center will serve as the tape archive resource in the HI data model. The annual raw data volumes listed in Table 2 will be written to tape as soon as they arrive from the T0 center during the one month of HI running in the LHC. Similarly, the prompt reco output from the T0 will also be archived at the FNAL T1 center. The raw data sets will be subscribed to and stored on disk at the Vanderbilt center for subsequent reconstruction re-passes. The prompt reco files will be subscribed to as well, for the initial AOD and PAT file production at Vanderbilt. There will be no re-reads of raw data or prompt reco files from the FNAL T1 center unless there is a disk hardware failure at the Vanderbilt center causing the loss of the data set. It is anticipated that the annually acquired HI raw data sets written to disk at the Vanderbilt location will remain completely on disk for a one-year period during which two or more reco re-passes may be performed. The exception will be the 2010 minimum bias data set, which will remain on disk for two years with multiple re-passes possible, since the LHC is not assumed to be running in 2011.
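Referring back to Table 5, the following minimal Python sketch shows how the Available T2 Base column can be assembled from the components listed in Section F; the component HS06 figures and the ~50% Vanderbilt T2 duty factor are taken from the text, the 80% efficiency factor is carried over from Section C as an assumption, and the example need value is a placeholder rather than a Table 5 entry.

```python
# A minimal sketch of how the "Available T2 Base" column of Table 5 can be
# assembled from the Section F components. The HS06 figures and the ~50%
# Vanderbilt duty factor come from the text; applying the 80% efficiency to
# availability is an assumption carried over from Section C, and the example
# "need" is a placeholder, not a Table 5 entry.

SECONDS_PER_YEAR = 3.15e7
EFFICIENCY = 0.8

def integrated_hs06_sec(hs06, duty_fraction=1.0):
    """Integrated compute power (HS06-sec) delivered over one year."""
    return hs06 * duty_fraction * EFFICIENCY * SECONDS_PER_YEAR

overseas_hs06   = 3000    # pledges from Russia, France, Brazil, and Turkey
mit_hs06        = 1500    # first-year increment of the MIT-Bates HI upgrade
                          # (existing MIT capacity not included here)
vanderbilt_hs06 = 3200    # proposed 2010 Vanderbilt capacity

available = (integrated_hs06_sec(overseas_hs06)
             + integrated_hs06_sec(mit_hs06)
             + integrated_hs06_sec(vanderbilt_hs06, duty_fraction=0.5))

example_need = 1.0e11     # placeholder analysis + simulation need, HS06-sec
print(f"Available T2 base: {available:.2e} HS06-sec")
print(f"Available / need : {available / example_need:.0%}")
```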

According to the results obtained using the current version of CMSSW, the event sizes at the different stages of HI data processing, starting from the raw data files, are as listed in Table 6.

Table 6: Event sizes for the HI data processing stages
(columns: Year; Event Type; Number of Events [10^6]; Reco Event [MB/event]; AOD Event [MB/event]; PAT Event [MB/event]; with rows for the 2010 minimum bias data, the 2011 re-use of the same 2010 minimum bias data (80 x 10^6 events), and the 2012-2014 HLT data)

The HI data and tape storage requirements for each LHC running period are itemized in the following three subsections, based on the event size numbers presented in this table. An integrated summary table is provided in the third subsection. The storage requirements related to simulation production are estimated in Section 8.

G.1 2010-2011, Minimum Bias

It would clearly be too expensive, as well as pointless, to store on disk at one site all the different file formats for all the re-passes through the entire minimum bias data set. For example, assuming six passes through the original raw data set, the total production output would be more than 1 PB of disk space, in addition to the original 0.25 PB of disk space to hold the raw data. Since successive re-passes through the data generally make the previous re-pass less valuable, or even obsolete, it makes sense to retain only the previous re-pass in conjunction with the output of the last re-pass. Moreover, one can make the assumption that having two complete sets of reco files at the Vanderbilt reconstruction site, corresponding to two successive re-passes, is wasteful. The older of these sets (~150 TB) could easily be shipped to other HI T2 sites if it were decided that keeping two complete sets of reco pass output was necessary.

In this model, then, the Vanderbilt site is assumed to have disk space for the following data sets related to its reconstruction mission (summed in the sketch at the end of this subsection):

1) Raw data = 250 TB

2) Current reco re-pass output = 150 TB

3) AOD and PAT output from two re-passes = 35 TB

These three components sum to 435 TB. In addition to its reconstruction mission, the Vanderbilt site will provide space for user analysis output. That space is being set at 50 TB, giving a total disk storage space of 485 TB. The corresponding amount of tape storage needed at the FNAL T1 is estimated as:

1) Raw data (2010 only) = 250 TB
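As a convenience, the short Python sketch below simply re-adds the 2010-2011 Vanderbilt disk components itemized above; every component size is a figure quoted in this subsection.

```python
# The 2010-2011 Vanderbilt disk budget itemized above, summed for convenience.
# Every component size is a figure quoted in this subsection.

disk_tb = {
    "raw data": 250,                        # suppressed 2010 raw data set
    "current reco re-pass output": 150,     # most recent reconstruction re-pass
    "AOD and PAT from two re-passes": 35,   # analysis-level formats
    "user analysis output": 50,             # space set aside for T2 analysis users
}

reconstruction_total = sum(v for k, v in disk_tb.items()
                           if k != "user analysis output")
print(f"Reconstruction-mission disk: {reconstruction_total} TB")        # 435 TB
print(f"Total Vanderbilt disk, 2010-2011: {sum(disk_tb.values())} TB")  # 485 TB
```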


More information

From raw data to new fundamental particles: The data management lifecycle at the Large Hadron Collider

From raw data to new fundamental particles: The data management lifecycle at the Large Hadron Collider From raw data to new fundamental particles: The data management lifecycle at the Large Hadron Collider Andrew Washbrook School of Physics and Astronomy University of Edinburgh Dealing with Data Conference

More information

NC Education Cloud Feasibility Report

NC Education Cloud Feasibility Report 1 NC Education Cloud Feasibility Report 1. Problem Definition and rationale North Carolina districts are generally ill-equipped to manage production server infrastructure. Server infrastructure is most

More information

Users and utilization of CERIT-SC infrastructure

Users and utilization of CERIT-SC infrastructure Users and utilization of CERIT-SC infrastructure Equipment CERIT-SC is an integral part of the national e-infrastructure operated by CESNET, and it leverages many of its services (e.g. management of user

More information

Andrea Sciabà CERN, Switzerland

Andrea Sciabà CERN, Switzerland Frascati Physics Series Vol. VVVVVV (xxxx), pp. 000-000 XX Conference Location, Date-start - Date-end, Year THE LHC COMPUTING GRID Andrea Sciabà CERN, Switzerland Abstract The LHC experiments will start

More information

DR and EE Standards for SCE Buildings

DR and EE Standards for SCE Buildings Design & Engineering Services DR and EE Standards for SCE Buildings Prepared by: Design & Engineering Services Customer Service Business Unit Southern California Edison December 2011 Acknowledgements Southern

More information

Scientific data processing at global scale The LHC Computing Grid. fabio hernandez

Scientific data processing at global scale The LHC Computing Grid. fabio hernandez Scientific data processing at global scale The LHC Computing Grid Chengdu (China), July 5th 2011 Who I am 2 Computing science background Working in the field of computing for high-energy physics since

More information

The LHC Computing Grid

The LHC Computing Grid The LHC Computing Grid Visit of Finnish IT Centre for Science CSC Board Members Finland Tuesday 19 th May 2009 Frédéric Hemmer IT Department Head The LHC and Detectors Outline Computing Challenges Current

More information

GRIDS INTRODUCTION TO GRID INFRASTRUCTURES. Fabrizio Gagliardi

GRIDS INTRODUCTION TO GRID INFRASTRUCTURES. Fabrizio Gagliardi GRIDS INTRODUCTION TO GRID INFRASTRUCTURES Fabrizio Gagliardi Dr. Fabrizio Gagliardi is the leader of the EU DataGrid project and designated director of the proposed EGEE (Enabling Grids for E-science

More information

The evolving role of Tier2s in ATLAS with the new Computing and Data Distribution model

The evolving role of Tier2s in ATLAS with the new Computing and Data Distribution model Journal of Physics: Conference Series The evolving role of Tier2s in ATLAS with the new Computing and Data Distribution model To cite this article: S González de la Hoz 2012 J. Phys.: Conf. Ser. 396 032050

More information

Total Cost of Ownership: Benefits of the OpenText Cloud

Total Cost of Ownership: Benefits of the OpenText Cloud Total Cost of Ownership: Benefits of the OpenText Cloud OpenText Managed Services in the Cloud delivers on the promise of a digital-first world for businesses of all sizes. This paper examines how organizations

More information

Overview of ATLAS PanDA Workload Management

Overview of ATLAS PanDA Workload Management Overview of ATLAS PanDA Workload Management T. Maeno 1, K. De 2, T. Wenaus 1, P. Nilsson 2, G. A. Stewart 3, R. Walker 4, A. Stradling 2, J. Caballero 1, M. Potekhin 1, D. Smith 5, for The ATLAS Collaboration

More information

Spanish Tier-2. Francisco Matorras (IFCA) Nicanor Colino (CIEMAT) F. Matorras N.Colino, Spain CMS T2,.6 March 2008"

Spanish Tier-2. Francisco Matorras (IFCA) Nicanor Colino (CIEMAT) F. Matorras N.Colino, Spain CMS T2,.6 March 2008 Spanish Tier-2 Francisco Matorras (IFCA) Nicanor Colino (CIEMAT) Introduction Report here the status of the federated T2 for CMS basically corresponding to the budget 2006-2007 concentrate on last year

More information

Governing Body 313th Session, Geneva, March 2012

Governing Body 313th Session, Geneva, March 2012 INTERNATIONAL LABOUR OFFICE Governing Body 313th Session, Geneva, 15 30 March 2012 Programme, Financial and Administrative Section PFA FOR INFORMATION Information and communications technology questions

More information

Evaluation of the computing resources required for a Nordic research exploitation of the LHC

Evaluation of the computing resources required for a Nordic research exploitation of the LHC PROCEEDINGS Evaluation of the computing resources required for a Nordic research exploitation of the LHC and Sverker Almehed, Chafik Driouichi, Paula Eerola, Ulf Mjörnmark, Oxana Smirnova,TorstenÅkesson

More information

NORTH CAROLINA NC MRITE. Nominating Category: Enterprise IT Management Initiatives

NORTH CAROLINA NC MRITE. Nominating Category: Enterprise IT Management Initiatives NORTH CAROLINA MANAGING RISK IN THE INFORMATION TECHNOLOGY ENTERPRISE NC MRITE Nominating Category: Nominator: Ann V. Garrett Chief Security and Risk Officer State of North Carolina Office of Information

More information

Large scale commissioning and operational experience with tier-2 to tier-2 data transfer links in CMS

Large scale commissioning and operational experience with tier-2 to tier-2 data transfer links in CMS Journal of Physics: Conference Series Large scale commissioning and operational experience with tier-2 to tier-2 data transfer links in CMS To cite this article: J Letts and N Magini 2011 J. Phys.: Conf.

More information

The BABAR Database: Challenges, Trends and Projections

The BABAR Database: Challenges, Trends and Projections SLAC-PUB-9179 September 2001 The BABAR Database: Challenges, Trends and Projections I. Gaponenko 1, A. Mokhtarani 1, S. Patton 1, D. Quarrie 1, A. Adesanya 2, J. Becla 2, A. Hanushevsky 2, A. Hasan 2,

More information

MTR to Enhance Arrangements for EPIC Qualification Examination Mechanism

MTR to Enhance Arrangements for EPIC Qualification Examination Mechanism PR074/18 31 August 2018 MTR to Enhance Arrangements for EPIC Qualification Examination Mechanism The MTR Corporation today (31 August 2018) submitted an investigation report to the Government regarding

More information

Energy Action Plan 2015

Energy Action Plan 2015 Energy Action Plan 2015 Purpose: In support of the Texas A&M University Vision 2020: Creating a Culture of Excellence and Action 2015: Education First Strategic Plan, the Energy Action Plan (EAP) 2015

More information

Batch Services at CERN: Status and Future Evolution

Batch Services at CERN: Status and Future Evolution Batch Services at CERN: Status and Future Evolution Helge Meinhard, CERN-IT Platform and Engineering Services Group Leader HTCondor Week 20 May 2015 20-May-2015 CERN batch status and evolution - Helge

More information

The GAP project: GPU applications for High Level Trigger and Medical Imaging

The GAP project: GPU applications for High Level Trigger and Medical Imaging The GAP project: GPU applications for High Level Trigger and Medical Imaging Matteo Bauce 1,2, Andrea Messina 1,2,3, Marco Rescigno 3, Stefano Giagu 1,3, Gianluca Lamanna 4,6, Massimiliano Fiorini 5 1

More information

Advancing the MRJ project

Advancing the MRJ project Advancing the MRJ project 2017.1.23 2017 MITSUBISHI HEAVY INDUSTRIES, LTD. All Rights Reserved. Overview The Mitsubishi Regional Jet (MRJ) delivery date is adjusted from mid-2018 to mid-2020 due to revisions

More information

Computing. DOE Program Review SLAC. Rainer Bartoldus. Breakout Session 3 June BaBar Deputy Computing Coordinator

Computing. DOE Program Review SLAC. Rainer Bartoldus. Breakout Session 3 June BaBar Deputy Computing Coordinator Computing DOE Program Review SLAC Breakout Session 3 June 2004 Rainer Bartoldus BaBar Deputy Computing Coordinator 1 Outline The New Computing Model (CM2) New Kanga/ROOT event store, new Analysis Model,

More information

The Computation and Data Needs of Canadian Astronomy

The Computation and Data Needs of Canadian Astronomy Summary The Computation and Data Needs of Canadian Astronomy The Computation and Data Committee In this white paper, we review the role of computing in astronomy and astrophysics and present the Computation

More information

IBM Storwize V7000 TCO White Paper:

IBM Storwize V7000 TCO White Paper: IBM Storwize V7000 TCO White Paper: A TCO White Paper An Alinean White Paper Published by: Alinean, Inc. 201 S. Orange Ave Suite 1210 Orlando, FL 32801-12565 Tel: 407.382.0005 Fax: 407.382.0906 Email:

More information

L1 and Subsequent Triggers

L1 and Subsequent Triggers April 8, 2003 L1 and Subsequent Triggers Abstract During the last year the scope of the L1 trigger has changed rather drastically compared to the TP. This note aims at summarising the changes, both in

More information

Figure 1: Summary Status of Actions Recommended in June 2016 Committee Report. Status of Actions Recommended # of Actions Recommended

Figure 1: Summary Status of Actions Recommended in June 2016 Committee Report. Status of Actions Recommended # of Actions Recommended Chapter 3 Section 3.05 Metrolinx Regional Transportation Planning Standing Committee on Public Accounts Follow-Up on Section 4.08, 2014 Annual Report In November 2015, the Standing Committee on Public

More information

Hall D and IT. at Internal Review of IT in the 12 GeV Era. Mark M. Ito. May 20, Hall D. Hall D and IT. M. Ito. Introduction.

Hall D and IT. at Internal Review of IT in the 12 GeV Era. Mark M. Ito. May 20, Hall D. Hall D and IT. M. Ito. Introduction. at Internal Review of IT in the 12 GeV Era Mark Hall D May 20, 2011 Hall D in a Nutshell search for exotic mesons in the 1.5 to 2.0 GeV region 12 GeV electron beam coherent bremsstrahlung photon beam coherent

More information

Clustering and Reclustering HEP Data in Object Databases

Clustering and Reclustering HEP Data in Object Databases Clustering and Reclustering HEP Data in Object Databases Koen Holtman CERN EP division CH - Geneva 3, Switzerland We formulate principles for the clustering of data, applicable to both sequential HEP applications

More information

Virtualizing a Batch. University Grid Center

Virtualizing a Batch. University Grid Center Virtualizing a Batch Queuing System at a University Grid Center Volker Büge (1,2), Yves Kemp (1), Günter Quast (1), Oliver Oberst (1), Marcel Kunze (2) (1) University of Karlsruhe (2) Forschungszentrum

More information

How Cisco IT Improved Development Processes with a New Operating Model

How Cisco IT Improved Development Processes with a New Operating Model How Cisco IT Improved Development Processes with a New Operating Model New way to manage IT investments supports innovation, improved architecture, and stronger process standards for Cisco IT By Patrick

More information

Computing Resources Scrutiny Group

Computing Resources Scrutiny Group CERN RRB 17 056 April 17 Computing Resources Scrutiny Group C Allton (UK), V Breton (France), G Cancio Melia (CERN), A Connolly(USA), M Delfino (Spain), F Gaede (Germany), J Kleist (Nordic countries),

More information

CITY OF MONTEBELLO SYSTEMS MANAGER

CITY OF MONTEBELLO SYSTEMS MANAGER CITY OF MONTEBELLO 109A DEFINITION Under general administrative direction of the City Administrator, provides advanced professional support to departments with very complex computer systems, programs and

More information

CMS Conference Report

CMS Conference Report Available on CMS information server CMS CR 2005/021 CMS Conference Report 29 Septemebr 2005 Track and Vertex Reconstruction with the CMS Detector at LHC S. Cucciarelli CERN, Geneva, Switzerland Abstract

More information

PoS(High-pT physics09)036

PoS(High-pT physics09)036 Triggering on Jets and D 0 in HLT at ALICE 1 University of Bergen Allegaten 55, 5007 Bergen, Norway E-mail: st05886@alf.uib.no The High Level Trigger (HLT) of the ALICE experiment is designed to perform

More information

Submission. to the. Australian Communications and Media Authority. on the. Planning for mobile broadband within the 1.

Submission. to the. Australian Communications and Media Authority. on the. Planning for mobile broadband within the 1. Submission to the Australian Communications and Media Authority on the Planning for mobile broadband within the 1.5 GHz mobile band Submission by: Australian Mobile Telecommunications Association and Communications

More information

Overcoming the Challenges of Server Virtualisation

Overcoming the Challenges of Server Virtualisation Overcoming the Challenges of Server Virtualisation Maximise the benefits by optimising power & cooling in the server room Server rooms are unknowingly missing a great portion of their benefit entitlement

More information

Many Regions, Many Offices, Many Archives: An Office 365 Migration Story CASE STUDY

Many Regions, Many Offices, Many Archives: An Office 365 Migration Story CASE STUDY Many Regions, Many Offices, Many Archives: An Office 365 Migration Story CASE STUDY Making a Company s World a Smaller, Simpler Place Summary INDUSTRY Multi-national construction and infrastructure services

More information

High Performance Computing (HPC) Data Center Proposal

High Performance Computing (HPC) Data Center Proposal High Performance Computing (HPC) Data Center Proposal Imran Latif, Facility Project Manager Scientific & Enterprise Computing Data Centers at BNL 10/14/2015 Quick Facts!! Located on 1 st floor in Building

More information

Total Cost of Ownership: Benefits of ECM in the OpenText Cloud

Total Cost of Ownership: Benefits of ECM in the OpenText Cloud Total Cost of Ownership: Benefits of ECM in the OpenText Cloud OpenText Managed Services brings together the power of an enterprise cloud platform with the technical skills and business experience required

More information

IUPUI eportfolio Grants Request for Proposals for Deadline: March 1, 2018

IUPUI eportfolio Grants Request for Proposals for Deadline: March 1, 2018 IUPUI eportfolio Grants Request for Proposals for 2018-2019 Deadline: March 1, 2018 Purpose IUPUI eportfolio Grants are intended to support the eportfolio Initiative s mission: The IUPUI eportfolio Initiative

More information

University of Hawaii Hosted Website Service

University of Hawaii Hosted Website Service University of Hawaii Hosted Website Service Table of Contents Website Practices Guide About These Practices 3 Overview 3 Intended Audience 3 Website Lifecycle 3 Phase 3 Begins 3 Ends 3 Description 3 Request

More information

WHO Secretariat Dr Oleg Chestnov Assistant Director-General Noncommunicable Diseases and Mental Health

WHO Secretariat Dr Oleg Chestnov Assistant Director-General Noncommunicable Diseases and Mental Health WHO Secretariat Dr Oleg Chestnov Assistant Director-General Noncommunicable Diseases and Mental Health WHO Secretariat Dr Douglas Bettcher Director Department for Prevention of NCDs UN General Assembly

More information

Prompt data reconstruction at the ATLAS experiment

Prompt data reconstruction at the ATLAS experiment Prompt data reconstruction at the ATLAS experiment Graeme Andrew Stewart 1, Jamie Boyd 1, João Firmino da Costa 2, Joseph Tuggle 3 and Guillaume Unal 1, on behalf of the ATLAS Collaboration 1 European

More information

University of Wyoming Mobile Communication Device Policy Effective January 1, 2013

University of Wyoming Mobile Communication Device Policy Effective January 1, 2013 University of Wyoming Mobile Communication Device Policy Effective January 1, 2013 Introduction and Purpose This policy allows the University to meet Internal Revenue Service (IRS) regulations and its

More information

150 million sensors deliver data. 40 million times per second

150 million sensors deliver data. 40 million times per second CERN June 2007 View of the ATLAS detector (under construction) 150 million sensors deliver data 40 million times per second ATLAS distributed data management software, Don Quijote 2 (DQ2) ATLAS full trigger

More information

A L I C E Computing Model

A L I C E Computing Model CERN-LHCC-2004-038/G-086 04 February 2005 A L I C E Computing Model Computing Project Leader Offline Coordinator F. Carminati Y. Schutz (Editors on behalf of the ALICE Collaboration) i Foreword This document

More information

OCM ACADEMIC SERVICES PROJECT INITIATION DOCUMENT. Project Title: Online Coursework Management

OCM ACADEMIC SERVICES PROJECT INITIATION DOCUMENT. Project Title: Online Coursework Management OCM-12-025 ACADEMIC SERVICES PROJECT INITIATION DOCUMENT Project Title: Online Coursework Management Change Record Date Author Version Change Reference March 2012 Sue Milward v1 Initial draft April 2012

More information

First LHCb measurement with data from the LHC Run 2

First LHCb measurement with data from the LHC Run 2 IL NUOVO CIMENTO 40 C (2017) 35 DOI 10.1393/ncc/i2017-17035-4 Colloquia: IFAE 2016 First LHCb measurement with data from the LHC Run 2 L. Anderlini( 1 )ands. Amerio( 2 ) ( 1 ) INFN, Sezione di Firenze

More information

Ofcom review of proposed BBC Scotland television channel

Ofcom review of proposed BBC Scotland television channel Ofcom review of proposed BBC Scotland television channel INVITATION TO COMMENT: Publication Date: 30 November 2017 Closing Date for Responses: 14 December 2017 About this document The BBC has published

More information

Monitoring of Computing Resource Use of Active Software Releases at ATLAS

Monitoring of Computing Resource Use of Active Software Releases at ATLAS 1 2 3 4 5 6 Monitoring of Computing Resource Use of Active Software Releases at ATLAS Antonio Limosani on behalf of the ATLAS Collaboration CERN CH-1211 Geneva 23 Switzerland and University of Sydney,

More information

A New Segment Building Algorithm for the Cathode Strip Chambers in the CMS Experiment

A New Segment Building Algorithm for the Cathode Strip Chambers in the CMS Experiment EPJ Web of Conferences 108, 02023 (2016) DOI: 10.1051/ epjconf/ 201610802023 C Owned by the authors, published by EDP Sciences, 2016 A New Segment Building Algorithm for the Cathode Strip Chambers in the

More information

ATLAS NOTE. December 4, ATLAS offline reconstruction timing improvements for run-2. The ATLAS Collaboration. Abstract

ATLAS NOTE. December 4, ATLAS offline reconstruction timing improvements for run-2. The ATLAS Collaboration. Abstract ATLAS NOTE December 4, 2014 ATLAS offline reconstruction timing improvements for run-2 The ATLAS Collaboration Abstract ATL-SOFT-PUB-2014-004 04/12/2014 From 2013 to 2014 the LHC underwent an upgrade to

More information

GET CLOUD EMPOWERED. SEE HOW THE CLOUD CAN TRANSFORM YOUR BUSINESS.

GET CLOUD EMPOWERED. SEE HOW THE CLOUD CAN TRANSFORM YOUR BUSINESS. GET CLOUD EMPOWERED. SEE HOW THE CLOUD CAN TRANSFORM YOUR BUSINESS. Cloud computing is as much a paradigm shift in data center and IT management as it is a culmination of IT s capacity to drive business

More information

This module presents the star schema, an alternative to 3NF schemas intended for analytical databases.

This module presents the star schema, an alternative to 3NF schemas intended for analytical databases. Topic 3.3: Star Schema Design This module presents the star schema, an alternative to 3NF schemas intended for analytical databases. Star Schema Overview The star schema is a simple database architecture

More information

Administrative Policies and Business Contracts

Administrative Policies and Business Contracts Page 1 of 11 Responsible Officer: Responsible Office: Catherine Montano Administrative Policies and Business Contracts Issuance Date: 08/01/2011 Effective Date: 08/01/2012 Last Review Date: 08/01/2012

More information

PROJECT FINAL REPORT. Tel: Fax:

PROJECT FINAL REPORT. Tel: Fax: PROJECT FINAL REPORT Grant Agreement number: 262023 Project acronym: EURO-BIOIMAGING Project title: Euro- BioImaging - Research infrastructure for imaging technologies in biological and biomedical sciences

More information

The Six Principles of BW Data Validation

The Six Principles of BW Data Validation The Problem The Six Principles of BW Data Validation Users do not trust the data in your BW system. The Cause By their nature, data warehouses store large volumes of data. For analytical purposes, the

More information

Bill Boroski LQCD-ext II Contractor Project Manager

Bill Boroski LQCD-ext II Contractor Project Manager Bill Boroski LQCD-ext II Contractor Project Manager boroski@fnal.gov Robert D. Kennedy LQCD-ext II Assoc. Contractor Project Manager kennedy@fnal.gov USQCD All-Hands Meeting Jefferson Lab April 28-29,

More information

CMS High Level Trigger Timing Measurements

CMS High Level Trigger Timing Measurements Journal of Physics: Conference Series PAPER OPEN ACCESS High Level Trigger Timing Measurements To cite this article: Clint Richardson 2015 J. Phys.: Conf. Ser. 664 082045 Related content - Recent Standard

More information

CMS Grid Computing at TAMU Performance, Monitoring and Current Status of the Brazos Cluster

CMS Grid Computing at TAMU Performance, Monitoring and Current Status of the Brazos Cluster CMS Grid Computing at TAMU Performance, Monitoring and Current Status of the Brazos Cluster Vaikunth Thukral Department of Physics and Astronomy Texas A&M University 1 Outline Grid Computing with CMS:

More information

Distributed Monte Carlo Production for

Distributed Monte Carlo Production for Distributed Monte Carlo Production for Joel Snow Langston University DOE Review March 2011 Outline Introduction FNAL SAM SAMGrid Interoperability with OSG and LCG Production System Production Results LUHEP

More information

Online data storage service strategy for the CERN computer Centre G. Cancio, D. Duellmann, M. Lamanna, A. Pace CERN, Geneva, Switzerland

Online data storage service strategy for the CERN computer Centre G. Cancio, D. Duellmann, M. Lamanna, A. Pace CERN, Geneva, Switzerland Online data storage service strategy for the CERN computer Centre G. Cancio, D. Duellmann, M. Lamanna, A. Pace CERN, Geneva, Switzerland Abstract. The Data and Storage Services group at CERN is conducting

More information

Status of KISTI Tier2 Center for ALICE

Status of KISTI Tier2 Center for ALICE APCTP 2009 LHC Physics Workshop at Korea Status of KISTI Tier2 Center for ALICE August 27, 2009 Soonwook Hwang KISTI e-science Division 1 Outline ALICE Computing Model KISTI ALICE Tier2 Center Future Plan

More information

Italy - Information Day: 2012 FP7 Space WP and 5th Call. Peter Breger Space Research and Development Unit

Italy - Information Day: 2012 FP7 Space WP and 5th Call. Peter Breger Space Research and Development Unit Italy - Information Day: 2012 FP7 Space WP and 5th Call Peter Breger Space Research and Development Unit Content Overview General features Activity 9.1 Space based applications and GMES Activity 9.2 Strengthening

More information

UW-ATLAS Experiences with Condor

UW-ATLAS Experiences with Condor UW-ATLAS Experiences with Condor M.Chen, A. Leung, B.Mellado Sau Lan Wu and N.Xu Paradyn / Condor Week, Madison, 05/01/08 Outline Our first success story with Condor - ATLAS production in 2004~2005. CRONUS

More information

The JINR Tier1 Site Simulation for Research and Development Purposes

The JINR Tier1 Site Simulation for Research and Development Purposes EPJ Web of Conferences 108, 02033 (2016) DOI: 10.1051/ epjconf/ 201610802033 C Owned by the authors, published by EDP Sciences, 2016 The JINR Tier1 Site Simulation for Research and Development Purposes

More information

12 Approval of a New PRESTO Agreement Between York Region and Metrolinx

12 Approval of a New PRESTO Agreement Between York Region and Metrolinx Clause 12 in Report No. 7 of Committee of the Whole was adopted, without amendment, by the Council of The Regional Municipality of York at its meeting held on April 20, 2017. 12 Approval of a New PRESTO

More information

Best practices in IT security co-management

Best practices in IT security co-management Best practices in IT security co-management How to leverage a meaningful security partnership to advance business goals Whitepaper Make Security Possible Table of Contents The rise of co-management...3

More information

Academic Program Review at Illinois State University PROGRAM REVIEW OVERVIEW

Academic Program Review at Illinois State University PROGRAM REVIEW OVERVIEW Academic Program Review at Illinois State University PROGRAM REVIEW OVERVIEW For Research and Service Centers Submitting Self-Study Reports Fall 2017 INTRODUCTION Primary responsibility for maintaining

More information