CMS-HI-US Computing Proposal Update to the U.S. DOE
Version: February 18, 2010

1) Applicant Institution: Vanderbilt University

2) Institutional Address: Department of Physics and Astronomy
   Box 1807 Station B
   Vanderbilt University
   Nashville, TN

3) Co-PIs at this institution:

   Prof. Charles F. Maguire
   Department of Physics and Astronomy
   Box 1807 Station B
   Vanderbilt University
   Nashville, TN
   Telephone: (615)
   charles.f.maguire@vanderbilt.edu

   Prof. Julia Velkovska
   Department of Physics and Astronomy
   Box 1807 Station B
   Vanderbilt University
   Nashville, TN
   Telephone: (615)
   julia.velkovska@vanderbilt.edu

4) Funding Opportunity: DE-PS02-09ER

5) DOE/Office of Science Program Office: Nuclear Physics, Heavy Ion

6) DOE/Office of Science Program Contact: Dr. Gulshan Rai

7) Collaborating Institution: Massachusetts Institute of Technology

8) Principal Investigator at Collaborating Institution: Prof. Bolek Wyslouch

Computing Resources Proposal Update for U.S. CMS-HI Research

E. Appelt (1), M. Binkley (1), B. Brown (1), K. Buterbaugh (1), K. Dow (2), D. Engh (1), M. Gelinas (2), M. Goncharov (2), S. Greene (1), M. Issah (1), C. Johnson (1), C. Kennedy (1), P. Kurt (1), W. Li (2), S. de Lesdesma (1), C. Maguire (1), J. Mora (1), G. Roland (2), P. Sheldon (1), G. Stephans (2), A. Tackett (1), J. Velkovska (1), E. Wenger (2), B. Wyslouch (2), and Y. Yilmaz (2)

February 18, 2010

(1) Vanderbilt University, Department of Physics and Astronomy, Nashville, TN
(2) Massachusetts Institute of Technology, Laboratory for Nuclear Science, Cambridge, MA

Abstract

An updated proposal for meeting the computing needs of the CMS-HI research program in the U.S. is described. This update addresses all the major comments deriving from the May 2009 review of the first proposal for CMS-HI computing presented to the US DOE-NP. The present proposal envisions a heavy ion computing structure which is tightly integrated into the main CMS computing structure and provides for important contributions to heavy ion computing from the CMS T0, the FNAL T1, and several national T2 centers. These contributions will significantly assist the work of the two major HI T2 centers in the US which are to be sited, as previously proposed, at Vanderbilt University and at MIT-Bates.

Contents

1 Executive Summary
2 Responses To The Issues From The Review Report
  2.1 Resource Requirements for HI Computing
  2.2 Quality of Service Understandings with the WLCG
  2.3 HI Computing Center Operations
  2.4 ACCRE Investment and Infrastructure
  2.5 Simulation Compute Center Operations at MIT-Bates
3 Proposal Budget
4 Summary
A Data Model for the HI Research Program in CMS
  A.1 Introduction
  A.2 Luminosity, Event Number, and Raw Data File Size Projections
  A.3 Raw Data Reconstruction Computing Requirement
  A.4 Data Analysis Computing Requirement
  A.5 Simulation Production Computing Requirement
  A.6 Integrated Computing Requirement for the T2 Resource Base
  A.7 Disk Storage and Tape Requirements for Raw Data Processing
  A.8 Minimum Bias Data
  A.9 First HLT Data
  A.10 Nominal Year HLT Data
  A.11 Simulation Production Storage Requirements
  A.12 Summary of the Heavy Ion Data Model

List of Figures

1 Organization chart for the CMS-HI computing tasks
2 The SAM tool diagnostic result for the Vanderbilt site on January 8, 2010, showing acceptable performance for doing the CMS critical tasks by a computing element (CE) accessed on the grid
3 The cumulative amounts of PhEDEx transferred files during tests into Vanderbilt conducted in September 2009
4 The hourly rates for the PhEDEx transfer tests conducted into Vanderbilt in September 2009

5 Quality diagnostic results for the PhEDEx transfer tests into Vanderbilt conducted in September 2009 from four different CMS sites

List of Tables

1 Funding Profile for the Vanderbilt T2 Center
2 Funding Profile for FNAL T1 Tape Archive
3 Funding Profile for HI T2 Center at MIT
4 Projected luminosity and up-time profile for the CMS heavy ion runs
5 Projected Raw Data Reconstruction Computing Effort
6 Projected Data Analysis Computing Load
7 Projected Simulation Computing Load
8 Integrated T2 Computing Load Compared to Available Resources
9 Event sizes for the HI data processing stages
10 Disk and Tape Storage for Raw Data Processing

1 Executive Summary

Since May 2009 the members of the Heavy Ion research program in CMS have mounted an extensive, multi-faceted effort to improve the HI computing proposal from what was first presented for review to the U.S. DOE-NP. This effort has proceeded along four distinct tracks: 1) a more sophisticated, data-driven analysis of the annual computing requirements using the latest version of the CMS software framework, 2) a series of discussions with CMS computing management on how to organize HI computing to make the best use of existing CMS computing resources and infrastructure, 3) a thorough canvass of overseas CMS T2 sites to secure several new commitments of resources dedicated to the analyses of heavy ion data, and 4) the development of a plan of access to the two major HI compute centers in the US, at Vanderbilt and MIT, by the other CMS-HI-US institutions.

A concrete result of this effort has been the formulation of a HI data model document which has received the endorsement of CMS computing management. This data model, a copy of which is included as an Appendix to this proposal update, provides a HI computing structure which is much more strongly integrated into the rest of CMS computing than had been the case previously. Two letters of support strongly endorsing the HI computing plan accompany this proposal submission. The first letter is from Dr. Ian Fisk, the current head of all CMS computing. The second letter is from Dr. Joel Butler, the national leader of CMS for the United States.

The HI computing plan being proposed here will first take full and complete advantage of the computing resources available to CMS at the CERN T0, in the same manner as does the pp program. The transfer to the US of the HI raw data and prompt reconstruction output from the T0 will proceed as a seamless addition to the existing work-flow for the transfer of the comparable pp data. As was originally proposed, two dedicated HI computing centers are planned in the US at Vanderbilt and at MIT-Bates. These two sites will have missions for the HI program which are formally recognized by the WLCG project [1]. The MIT-Bates site will continue to function as a standard T2 center, while the Vanderbilt site will be an enhanced T2 center in CMS. The enhanced functionalities at Vanderbilt will be 1) the receipt and reconstruction of raw data files, and 2) the export of some reconstruction output to certain other CMS T2 centers in Europe and South America. These T2 centers have active HI groups and have committed to receiving reconstruction output from the Vanderbilt HI center for analysis purposes. The Vanderbilt center will also perform standard T2 analysis tasks.

Last, but not least, the Vanderbilt and the MIT-Bates sites will initially include T3 components for which accounts will be given to all CMS-HI participants. These T3 components will provide immediate access to data reconstruction and simulation production, in the same manner that the MIT-Bates center has been doing for CMS-HI participants during the past few years. However, for the long term the recommendation of CMS computing management is to have identical analysis operations in the HI group as in the rest of the collaboration. Hence, the remote site users in CMS-HI should eventually access the Vanderbilt and MIT-Bates T2 centers from their home institutions using the standard CRAB [2] job submission and file retrieval tools.

[1] Worldwide LHC Computing Grid
[2] CMS Remote Analysis Builder

The CMS-HI personnel at Vanderbilt and MIT-Bates will assist the other CMS-HI members in utilizing these tools. Since it is an important aspect of the user analysis discussion, this proposal will discuss the hardware resources needed at all the CMS-HI-US institutions so that their members may have effective access to all HI T2 computing centers in CMS.

2 Responses To The Issues From The Review Report

A proposal for heavy ion computing was presented to the U.S. DOE-NP in May 2009, and a summary of the reviewer comments was delivered in September 2009. While this first review recognized the need for a substantial heavy ion computing facility in the U.S., a number of questions were raised at the review concerning the justifications and operations of the proposed facility. The DOE-NP requested that these questions be addressed in an updated proposal with a due date at the end of calendar 2009. This modified proposal contains the requested update, and is centered on addressing these major review report questions. To make for a more coherent response, the various issues have been grouped together along related topics. The issue of cost is addressed in a separate section of this proposal update.

2.1 Resource Requirements for HI Computing

Related to the resource requirements for HI computing in CMS, the report from the May 11 review asked that the following three related issues be addressed:

- Justification of the formal performance requirements of the proposed CMS HI computing center(s) that integrates the CERN Tier-0 obligations, the service level requirements including end users, and the resources of non-DOE participants and other countries.

- The analysis and storage model articulated in the CMS HI Collaboration proposal differs from the one described in the CMS Computing Technical Design Report (TDR). In the TDR, heavy ion data would be partially processed at the CERN Tier-0 center during the heavy ion run.... (Remaining text omitted as no longer applicable)

- The US CMS HI computing resources should be driven by technical specifications that are independent of specific hardware choices. Well-established performance metrics should be used to quantify the needs in a way that can be mapped to any processor technology the market may be offering over the course of the next few years. Expected improvements of the CPU processor cost/performance ratio suggest that the requested budget for CPU hardware could be high.

Compared with the proposal originally presented to the DOE in May 2009, the current CMS-HI computing plan integrates much more of the already existing CMS T0/T1/T2 computing resource base. Stated succinctly, the model for HI computing resembles the model for pp computing far more than before, and thus the HI computing sector will be more robust for current operations and for future evolution with the rest of CMS computing. These steps forward have been made possible following a more intense study of the HI computing requirements vis-a-vis the present availability of resources in CMS. The conclusions of that requirements study have been written into a HI Data Model document which has been accepted by CMS management.

A copy of that document is attached to this proposal update as an appendix. As per this document, the HI raw data acquired during the annual one month of Pb+Pb collisions at the LHC will be completely reconstructed at the T0 soon after it is acquired, the so-called prompt reco pass. The annual amount of T0 time needed for the prompt reconstruction of the HI raw data is detailed in Table 5 of the data model document contained in the Appendix to this proposal update. According to that analysis the number of days needed at the T0 for prompt reconstruction of the entire data set in each of the years 2010 and 2011 is less than the one month of HI running time at the LHC in that year. Therefore, the HI raw data can be promptly reconstructed, and the production results transferred out of the T0, using the same work-flows (and personnel) as the pp raw data.

This conclusion that the T0 could equally well serve the needs of the HI program was reached after a careful study of the computing reconstruction requirement for each year's data set, both for the CPU time and for the memory footprint. The memory size, after a considerable advance in the software itself, is known to be less than 2 GBytes/central event, and is thus compatible with the 2 GByte/core size of the T0 compute nodes. The prompt reconstruction pass at the T0 will occur as soon as the Alignment and Calibration analysis pass has completed for each run. The prompt Alignment and Calibration work-flows for the HI data have been studied and found to be essentially identical to those for pp running. The on-line Data Quality Monitoring (DQM) requirements for the HI data have also been examined and these too have been determined to be largely similar to those for pp running. In a few cases, DQM components for the pp running can be ignored (e.g. missing jet energy), while in other cases (e.g. reaction plane and centrality class) new components are being added for the HI events. The net result is that we expect a smooth transition between T0 processing for the pp events and T0 processing for the HI events. The more identical the work-flows can be made for the HI events, the smoother this transition will be. Naturally, the set of people at the T0 who will have gained expertise during the prior months of pp running will remain engaged during the one month of HI running. In fact, a subset of these people on shift during the pp running will be from the CMS-HI group, assuring that the transition to HI data processing will be as seamless as possible.

After prompt reconstruction, the raw data files and the prompt reconstruction output will flow out of the T0 to the T1 site at FNAL. This feature of the HI data model [3] is a major change from the original proposal to the DOE, for which the raw data had been planned to be transferred directly to the new computer center at Vanderbilt University. Further discussion with CMS computing management during the last six months has led us to the consensus that it is far more sensible to transfer the HI raw data and prompt reco streams from the T0 site to the FNAL T1 center for archiving to tape. Then these files, while still on disk at FNAL, would be transferred to disk storage at Vanderbilt. The obvious advantage to the new plan is that it will make the transfer of HI files from the T0 look largely the same as the transfer of the pp files out of the T0 from the perspective of the work-flows at the T0.
[3] Since the HI data model document was originally completed in December 2009, new information has emerged about the LHC running schedule for 2011. As of the February 2010 LHC planning meeting discussions held at Chamonix, it is anticipated that the LHC will have only a short shutdown at the end of 2010, and will resume pp collisions early in 2011 to last for at least a few months. Moreover, there has been informal discussion of a second HI run to take place after the pp run in 2011. Then the LHC will be shut down for upgrades for the rest of 2011 and most of 2012. Needless to say, this new information increases the urgency of commissioning the HI computing centers in time to analyze the 2010 data. There will not be any T0 resource for re-reconstructing the HI data in 2011, nor in any subsequent year, according to CMS computing management policy.
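The "less than one month at the T0" conclusion quoted above rests on a simple scaling, sketched below in Python; every input value in the sketch (event count, HS06-sec per event, the HS06 share assumed at the T0) is a placeholder assumption, with the actual inputs given in Table 5 of the attached data model document.

# Illustrative sketch of the T0 prompt-reconstruction estimate discussed above.
# Every input value here is a placeholder assumption; the real numbers are in
# Table 5 of the data model document attached as the Appendix.

def t0_reco_days(n_events: float, hs06_sec_per_event: float,
                 t0_share_hs06: float) -> float:
    """Days of T0 wall time to promptly reconstruct n_events, given the
    per-event cost in HS06-sec and the HS06 share available to CMS-HI."""
    return n_events * hs06_sec_per_event / t0_share_hs06 / 86400.0

days = t0_reco_days(n_events=30e6,            # hypothetical Pb+Pb events in one run
                    hs06_sec_per_event=300.0, # hypothetical reconstruction cost
                    t0_share_hs06=5000.0)     # hypothetical HS06 share at the T0
print(f"about {days:.0f} days of prompt reconstruction")  # -> about 21 days

# The conclusion in the text is simply that this number comes out smaller than
# the roughly 30 days of the annual HI run, so prompt reconstruction keeps pace
# with data taking using the same T0 work-flows as the pp program.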

Moreover, since there are already existing high speed (10 Gbps) links between FNAL and the Vanderbilt site, there should not be any difficulty in the copying of the files from FNAL to Vanderbilt. Network commissioning tests between FNAL and Vanderbilt have already begun, and example results will be shown in Section 2.4 of this update proposal. This plan for the HI raw data and prompt reco production transfer presupposes that there will be a substantial (about 0.5 PB) disk storage capacity in place at Vanderbilt as early as November 2010 in order to receive and store the files. The model of pp data operations for the CMS T1 centers assumes that the raw data files will remain in place on disk for the subsequent reconstruction re-passes, two re-passes per year once steady state running has been attained. The reconstruction re-pass plan for the HI raw data will be the same as for the pp data, to match the expected evolution of the reconstruction software releases on a twice yearly basis.

In the original CMS Computing TDR there was a mention of HI computing being done at the T0 after the HI running period and during the several months when the LHC was inactive. While it will certainly be possible for the prompt reco pass to extend for some days or even weeks after the one month of HI running has concluded, the CMS computing management has decided that it is not feasible to have a model where the HI data are re-read from tape for reconstruction re-passes of the raw data. The infrastructure at the T0 does not exist for re-reads of raw data from tape, nor does the infrastructure exist at the T0 for the export of production after the prompt reco pass. Developing this infrastructure solely for the HI data would be a gross expansion of scope for the T0 personnel and cannot be justified when simpler alternatives are available. The current model of HI computing in CMS states that the reconstruction re-passes will be done at the Vanderbilt site, mirroring how such reconstruction re-passes are planned for the pp data at the various T1 sites in CMS.

The size of the Vanderbilt site, in terms of HS06-sec per year, has been determined on a data-driven basis according to fulfilling the CMS quota of annual reconstruction re-passes. The facility size determination for the Vanderbilt site is detailed in depth in Section C of the Appendix, and is quantified in HS06-sec units, where the hardware-independent HS06 unit is the LHC successor to the obsolete SpecInt2000 unit of compute power. The annual growth of the Vanderbilt compute center is driven by the amount and type of data being acquired each year, and by the dual requirement that it fulfill both a raw data reconstruction role and a T2 data analysis role for CMS-HI. Part of the T2 data analysis burden for CMS-HI will be shared by certain overseas T2 sites in CMS (Russia, France, Turkey, and Brazil), and by the new MIT-Bates HI compute center. The Vanderbilt site will spend approximately half of its yearly integrated compute power in the raw data reconstruction mode, and half in the T2 data analysis mode. In that respect, the Vanderbilt T2 center will be unique in CMS since no other T2 site will be doing data reconstruction or exporting reconstruction output to other T2 sites. This singular departure from the CMS computing model for the pp program has been approved by CMS computing management and is awaiting formal approval by the WLCG.
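As a simple illustration of why the existing 10 Gbps link is adequate for staging a raw-data sample of roughly the 0.5 PB scale mentioned above, the sketch below converts data volume and link speed into transfer time; the data volume and link speed come from the text, while the assumed average link efficiencies are illustrative and not taken from this proposal.

# Back-of-the-envelope check (not from the proposal): how long would it take to
# stage a raw-data sample of the size discussed above from FNAL to Vanderbilt
# over the existing 10 Gbps link? The volume and link speed come from the text;
# the assumed average link efficiencies are purely illustrative.

def transfer_days(volume_tb: float, link_gbps: float, efficiency: float) -> float:
    """Days needed to move volume_tb terabytes over a link_gbps link
    running at the given average efficiency (0-1)."""
    volume_bits = volume_tb * 1e12 * 8          # TB -> bits (decimal units)
    seconds = volume_bits / (link_gbps * 1e9 * efficiency)
    return seconds / 86400.0

for eff in (1.0, 0.5, 0.25):                     # assumed average utilizations
    print(f"500 TB at 10 Gbps, {eff:.0%} efficiency: "
          f"{transfer_days(500, 10, eff):.1f} days")

# Example output:
#   500 TB at 10 Gbps, 100% efficiency: 4.6 days
#   500 TB at 10 Gbps, 50% efficiency: 9.3 days
#   500 TB at 10 Gbps, 25% efficiency: 18.5 days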
2.2 Quality of Service Understandings with the WLCG

Two of the issues raised in the review concerned the relation of the proposed HI computing centers with the WLCG:

- The relationship between DOE NP supported grid resources (VU and MIT) for heavy ion research and the grid resources available to the larger CMS HEP collaboration needs to be clarified. Also the formal arrangements with respect to NP pledged resources to the CERN Worldwide LHC Computing Grid (WLCG) need to be defined.

- The Tier-1 quality-of-service (QoS) requirements were specifically developed for the HEP LHC experiments, but they might be relaxed for the CMS HI effort in view of its spooled approach to the grid. CMS HI and the WLCG are encouraged to examine the possibility of tailoring the Tier-1 requirements to reflect the needs of the U.S. heavy ion research community.

These two issues regarding the support of grid services by the DOE NP and the relationship to the WLCG are much simplified in this updated computing proposal by virtue of the changed plans for raw data transfer and tape archiving which were described in the previous subsection. The new plan is to have the HI raw data and prompt reco production output transferred first from the T0 to the FNAL T1 center for archiving to tape there. The T0 and FNAL T1 data operations groups will be in charge of this work-flow, treating it as a one-month extension of their responsibility for transferring pp files to the FNAL T1. There will be some cost associated with the extra tape archiving at FNAL, but such costs would have occurred for tape archiving at the Vanderbilt center in the original plan. The cost for the HI tape archiving at FNAL is tabulated in Table 2, presented in Section 3.

While these data files are still present on disk storage at the FNAL T1, they will be copied down to the Vanderbilt site. The high speed links from FNAL to Vanderbilt already exist, and there is not expected to be any charge to the DOE-NP for this stage of the transfer. In the same manner, there are already high speed links between the MIT-Bates T2 center and FNAL for the transfer of simulation or analysis HI production to be archived. Network tests between MIT-Bates and Vanderbilt have already begun, and a high throughput has already been measured in both directions.

Discussions are beginning with the WLCG organization on these roles for the Vanderbilt center and the HI center at MIT-Bates. With the support of CMS computing management, as well as the national CMS leadership in the US, we expect that the appropriate MoUs defining the responsibilities of the Vanderbilt and MIT-Bates sites for the HI program will be completed by mid-2010. The Vanderbilt site will also have a related MoU with the FNAL T1 site, as does every other US T2 site, defining their mutual responsibilities. This MoU should be completed in parallel with the MoU to the WLCG.

2.3 HI Computing Center Operations

In the area of the HI computing center operations, these four issues were brought up in the review report:

- The management, interaction and coordination model between the Tier-2 center(s) and Tier-3 clients is not well formulated. It will be important to document that user institutions will have sufficient resources to access the VU computing center and how the use of the facility by multiple U.S. and international clients will be managed.

- A detailed plan for external oversight of the responsiveness and quality of operation of the computing center(s) should be developed.

- Draft Memoranda of Understanding (MoUs) should be prepared between the appropriate parties, separately for the VU and the Bates Computing Facilities, that clearly define management, operations, and service level responsibilities.

- The size of the workforce associated with data transport, production, production re-passes, calibration and Monte Carlo simulation efforts, and the general challenges of running in a grid environment should be carefully examined and documented.

Figure 1: Organization chart for the CMS-HI computing tasks

- Project Director: Bolek Wyslouch
- HLT Operations (Christof Roland): coordinates with the CMS HLT group to design the DAQ bandwidth for HI running
- DQM Operations (Julia Velkovska): supervises the on-line and off-line data quality monitoring during HI running
- T0 Operations (Constantin Loizides): coordinates the HI data operations at the T0 for prompt reco and off-site transfers
- VU T2 Operations (Charles Maguire): manages the raw data reconstruction and the analysis passes at the Vanderbilt T2 site
- MIT T2 Operations (Christoph Paus): manages the overall functioning of the MIT T2 site for CMS simulations and analysis
- Non-US T2 Operations (R. Granier de Cassagnac): coordinates the activities at the non-US T2 sites (Russia, France, Brazil, Turkey)
- Software Coordinator (Edward Wenger): coordinates with the CMS software group for the supervision of HI code releases
- Simulations Manager (Wei Li): coordinates the HI simulation production requests with the CMS data operations group
- Analysis Coordinator (Gunther Roland): oversees the analysis work of the HI Physics Interest Groups

For doing analyses of the HI production results at the T2 centers, the users in CMS-HI will follow the same model as the users in the pp program. Namely, CMS-HI users will group themselves at T3 centers which will submit analysis jobs to the T2 centers using the standard CMS CRAB analysis tool. It would be wasteful to try to design and operate two different modes of user analysis in CMS for the HI program. This T3-based model of HI analysis has already proved to be successful at the MIT T2 center. All the CMS-HI users, both in the US and abroad, have been given accounts at MIT on a set of gateway nodes. These nodes support the standard CMS job submission and analysis tools, and have sufficient disk space for user output. As such, this collection of gateways has been functioning as the single HI T3 center in the US. From those accounts, the CMS-HI users can submit jobs to process HI production files resident at the MIT T2, or at any other T2 in CMS which has the needed HI production files present. This same mode of user analysis operation exists throughout CMS. A given institution has the necessary gateways with the CMSSW software infrastructure installed, and with sufficient local disk resources for user output, to qualify as a functioning T3 site in CMS.
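To give a feel for what "sufficient local disk resources for user output" means in practice, the following minimal sketch estimates the disk a T3 gateway would need to hold reduced analysis output returned from the T2 centers; every number in it (events analyzed, kB per event, number of users) is a hypothetical assumption and not a figure from this proposal.

# Illustrative sizing sketch (assumed numbers, not from the proposal): roughly
# how much local disk a T3 gateway needs to hold the reduced output that
# analysis jobs return from a T2 pass.

def t3_output_tb(events: float, kb_per_event: float, users: int, copies: int = 1) -> float:
    """Disk (TB) for 'users' people each keeping 'copies' of a reduced
    analysis output of 'events' events at 'kb_per_event' kilobytes/event."""
    return events * kb_per_event * 1e3 * users * copies / 1e12

# Hypothetical example: 50 million analyzed events, 10 kB/event of reduced
# output, 20 active users each keeping one copy.
print(f"{t3_output_tb(50e6, 10.0, 20):.1f} TB of local T3 disk")   # -> 10.0 TB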

Such is the case already for the Vanderbilt site, with the T3 there having been developed by the HEP-CMS group in that institution's Physics Department. In the past month, we have put in place at Vanderbilt the accounting structures to allow logins by selected CMS-HI users from four institutions: MIT, UIC, UMD, and Kansas. Documentation for running CMSSW software at Vanderbilt, and for submitting jobs, has been developed by the local RHI group. The plan is to have this selected set of users commission the T3 running at Vanderbilt for HI analysis purposes in early 2010, with the goal of having the Vanderbilt site demonstrate the same functionality as the T3 now operating at MIT-Bates. Then, in mid-2010, the Vanderbilt T3 site will become open to all CMS-HI users. In this manner, all the CMS-HI users who need to do so will be trained rapidly to take advantage of the standard CMSSW infrastructure tools. At that point, and with the availability of sufficient disk resources at their home institutions, the users can establish local T3 sites to communicate with the HI T2 centers at Vanderbilt and MIT-Bates. The CMS computing management strongly encourages us in the CMS-HI group to follow this T3-based mode of user analysis.

Regarding the workforce issue for all the aspects of running in a grid environment, the basic principle is that HI operations must be aligned as closely as possible to mirror the corresponding operations in the pp program. In this manner, both the CMS personnel involved and the operations software infrastructure already developed can be leveraged for use in the HI program. There will be designated persons in the CMS-HI program who will serve as liaisons and members within all the computing operations groups in CMS which process the data, from the initial stage of on-line alignment and calibration tasks to the final stage of data and production archiving tasks.

Figure 2: The SAM tool diagnostic result for the Vanderbilt site on January 8, 2010, showing acceptable performance for doing the CMS critical tasks by a computing element (CE) accessed on the grid.

In order to carry out the various HI computing operations, the computing organization for CMS-HI is as shown in Figure 1. Nine major computing responsibilities are depicted in this figure, along with the names of the persons assigned to each responsibility. These nine persons form the internal computing committee for CMS-HI. That computing committee is chaired by the CMS-HI-US Project Director, Bolek Wyslouch, with deputy chair Charles Maguire, who also serves as the Off-Line Computing Director. In this role Prof. Maguire reports on behalf of the HI group to the CMS Computing Review Board (CRB). The CRB sets computing policies for all of CMS and reviews their effectiveness at each computing site.

In addition, all individual CMS computing sites are automatically monitored daily by the SAM tool [4] for their ability to import and export data, and to run standard CMS data processing jobs. An example of the Vanderbilt site passing this daily critical test is shown in Figure 2.

Some particular aspects of the CMS-HI computing responsibilities are worth elaborating in order to illustrate the close cooperation which has already been attained between HI computing operations and the rest of CMS computing:

- HLT Operations: The algorithms needed for the HI trigger decisions were the subject of in-depth review in CMS earlier in 2009, as well as at the May 2009 DOE-NP review. These algorithms are judged ready for the tag-and-pass commissioning run in November 2010.

- DQM Operations: Prof. Velkovska spent her sabbatical year at CERN learning the standard CMS DQM operations. A research associate from the Vanderbilt University RHI group, Pelin Kurt, has been stationed at CERN since December 2009 with a primary responsibility for interfacing with the CMS DQM operations group. Vanderbilt University itself is funding [5] a 30 m² CMS data operations center in a prime location of its Physics Department building in support of the DQM responsibility taken on by Prof. Velkovska. This data center is scheduled to begin operations in early March 2010, just after the pp collisions have restarted at the LHC. The center will also have consoles dedicated to monitoring the off-line data reconstruction responsibility which the Vanderbilt group has assumed.

- T0 Operations: Dr. Loizides has worked in person with many members of the CMS T0 data operations group (Dirk Hufnagel, Josh Bendavid, Stephen Gowdy, and David Evans) and has responsibility for guiding them through the processing of the HI data at the T0, and its export to the T1 site at FNAL. Naturally, the HLT, DQM, and T0 operations coordinators in HI will be in constant contact during the HI running, as these three tasks are mutually inter-dependent.

- VU T2 Operations: Prof. Maguire has been working with the ACCRE technical staff for the past two years in order to bring that facility into operations for CMS-HI data reconstruction and analysis. Prof. Maguire will also serve as the primary liaison with the FNAL T1 operations group while data are being transferred between FNAL and Vanderbilt.

- MIT T2 Operations: The MIT T2 operations will be discussed at length in Section 2.5.

- Non-US T2 Operations: Dr. Raphael Granier de Cassagnac will be coordinating the transfer of HI reconstruction production to, and the analysis production activities at, the non-US T2 centers. Having managed the transfer of the muon raw data stream from RHIC and its subsequent reconstruction at the Computer Center France complex during several years for the PHENIX collaboration, Dr. Granier de Cassagnac brings valued experience to CMS-HI in this critical role.

[4] SAM stands for Service Availability Monitoring.
[5] Construction of this CMS data center began in mid-December 2009 and is scheduled for occupancy on February 22, 2010. Furniture and computer equipment installations are planned for the week of March 1. This center is expected to be fully operational when pp collisions resume at the LHC in mid-March. As such the center will participate in the world-wide media event being planned at CERN to celebrate the successful commissioning of the LHC for the physics research program. During pp data taking this center will be staffed by members of the local RHI and HEP groups. The total cost of $110,000, including equipment, is being borne by Vanderbilt University.

- Software Manager: Dr. Wenger has been deeply involved with the CMS software release validation group to bring the HI data processing software into consistency with the implementation of the pp data processing software. This had not been the case prior to 2009. Over the last year, many aspects of the heavy-ion software have become more closely integrated with the rest of the CMS program. For the first time in 2009, all simulation and reconstruction code has been published in the current software release, with new features continually being added according to the CMS release schedule. This has included the designation of a heavy-ion scenario in the production scripts, allowing a host of small modifications to be picked up by the standard pp work-flows. Close collaboration with the coordinators of the generation and simulation groups has allowed the implementation of a powerful new event embedding tool, especially suited to the needs of the heavy-ion program. This tool has recently been incorporated into three new release validation work-flows, for verifying the performance of the photon, jet, muon and track reconstruction algorithms in the heavy-ion environment for each new software pre-release.

- Simulations Manager: Likewise, the simulation production for HI purposes has been made part of the standard simulation production cycles managed by the central CMS data operations group. Specific HI simulation requests are funneled through the HI simulations manager, Wei Li, for coordination with the work of the central data operations group.

- Analysis Coordinator: The Analysis Coordinator oversees the work of the five major Physics Interest Groups in CMS-HI. In turn the HI Analysis Coordinator reports to the CMS Analysis Coordinator on their progress. It should also be mentioned that there is a strong participation by HI persons in pre-existing CMS pp analysis groups such as the QCD and b-physics groups.

2.4 ACCRE Investment and Infrastructure

Figure 3: The cumulative amounts of PhEDEx transferred files during tests into Vanderbilt conducted in September 2009.

Figure 4: The hourly rates for the PhEDEx transfer tests conducted into Vanderbilt in September 2009.

One issue of concern in the review report related to the status of the ACCRE computing facility at Vanderbilt and its technical choices:

- ACCRE should articulate its plans and facility investments needed to support the development of a custom infrastructure suited to the needs of the US CMS HI collaboration and conversely, to what extent the US CMS HI collaboration will need to adapt to the existing ACCRE infrastructure (e.g. the use of L-Store at Vanderbilt, when other CMS computing centers use dcache).

Figure 5: Quality diagnostic results for the PhEDEx transfer tests into Vanderbilt conducted in September 2009 from four different CMS sites.

Since the time of this review, many CMS sites have switched away from dcache to Hadoop. So there is clearly nothing especially advantageous or universal about dcache. In any case, CMS production jobs and CMS users will never know whether they are using L-Store [6] instead of dcache or Hadoop.

[6] Logistical Storage

We have written the necessary interfaces so that this low level structure is completely transparent to the higher-level CMSSW operations. Regarding the network infrastructure at ACCRE, the attached plots (Figures 3-5) show a 48 hour sustained 2.5 Gbps transfer to Vanderbilt using PhEDEx, the standard software module used by CMS to move data. A total of 40 TBytes of data was copied during this test. The jobs were sending the data to Vanderbilt's GridFTP server, which has an L-Store back-end that stores the data in L-Store depots at ACCRE. The simultaneous data streams originated from four different sites: the FNAL T1, the T2 at Florida, the T2 at Nebraska, and the T2 at SINP in Russia. PhEDEx itself has not been modified in any way to make this happen. Figure 3 shows the cumulative volume of the transfers from the four sites during the 48 hour test. Figure 4 shows the instantaneous transfer rates during this period, and Figure 5 shows the persistently good quality of the transfer operations during the test.

Similarly, CMSSW job I/O is based on ROOT, and we have a ROOT plug-in that knows how to access data in L-Store depots. Again, no changes have been made to CMSSW code. However, if it were to happen that we decide that L-Store is not satisfactory for the CMS-HI computing facility, then we will likely switch to using our existing GPFS system [7], which is currently being upgraded to a 100 Gbps backbone. And if that too were not to prove a satisfactory alternative, we will install Hadoop, dcache, or whatever else has been proved to work better elsewhere. Naturally, the sooner we can have the disk storage installed, the sooner we can test the different alternatives for the storage system. In particular, since the first data are due to arrive in November 2010, it is critical to have the initial amount of disk (0.46 PB) in place by the summer of 2010.

[7] General Parallel File System
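For orientation, a short arithmetic check relates the 40 TByte total of the PhEDEx commissioning test described above to its 48 hour duration and the quoted 2.5 Gbps level; decimal units are assumed here, which is the only assumption beyond the figures already in the text.

# Quick consistency check of the PhEDEx commissioning test quoted above:
# 40 TBytes moved in a 48 hour window. Decimal TB is assumed here.

volume_bytes = 40e12            # 40 TBytes copied during the test
duration_s = 48 * 3600          # 48 hour test window

avg_gbps = volume_bytes * 8 / duration_s / 1e9
print(f"average rate: {avg_gbps:.2f} Gbps")   # -> about 1.85 Gbps

# An average of roughly 1.9 Gbps over two days is consistent with the plotted
# instantaneous rates reaching the 2.5 Gbps level, since PhEDEx transfers
# are bursty rather than perfectly steady.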

2.5 Simulation Compute Center Operations at MIT-Bates

The review report requested further elaboration on the plan for the separate HI computing center at MIT-Bates:

- The CMS HI proposal provided insufficient information to allow the panel to make a clear recommendation regarding the one- and two-site computing center options. A case should be made beyond stating that a two center solution might result in more access to opportunistic compute cycles, and that the simulation expertise existing in the MIT-Bates CMS HI group should not be lost. Based on (near) cost equality of the two solutions, there was no strong argument for the one-site solution. As the lead institution for the CMS HI simulation effort, it might be appropriate for MIT-Bates to divide responsibilities between two sites provided a well-defined, fully integrated computing plan is presented.

The second CMS-HI compute center at MIT-Bates will be responsible for the following: 1) generating and analyzing simulated data for heavy ion events, and 2) functioning as an additional analysis center for the CMS-HI Physics Analysis Groups.

Under the management of the Laboratory for Nuclear Science (LNS), MIT has constructed a new data center at the Bates Linear Accelerator Center. This data center has sufficient rack space, power, cooling, and network capacity to support the full proposed MIT component of the CMS-HI computing effort as well as the fully installed CMS Tier2 system and computing for several other MIT groups. The CMS-HI component will consist of 664 new CPUs (cores) [8] and 200 TByte of disk capacity that will be acquired over a five-year period.

The MIT center has been functioning as a heavy-ion computing facility for the CMS HI program for several years. In fact the very first CMS computers there were purchased from the funds supplied by the DOE Nuclear Physics office. The heavy-ion physics part was always tightly integrated into the overall functioning of the center. The simulation and simulated data analysis are done routinely at MIT. The configuration of disk space, CPU and network is optimized for the typical usage of a CMS Tier-2 center: simulations and data analysis, with tools that allow quick shipment of simulated data to other CMS HI centers such as Vanderbilt, or to FNAL for storage on tape. MIT will be used by the central CMS data operations group for simulations, and it will be very easy to add HI-specific tasks to the list of jobs running at MIT.

The organization of the joint HE-HI facility at MIT-Bates will be overseen by Profs. Boleslaw Wyslouch and Christoph Paus. Maxim Goncharov is the overall manager. Wei Li of the MIT heavy ion group provides part-time support for the grid operations and is in charge of the overall simulations for the CMS HI program. The long-term plan is for the HI and HE groups to each support one post-doctoral associate who will spend roughly half of their time on computer facility management. In addition, on average one to three graduate students from each of the groups will devote a fraction of their time to support the facility operations. The Computer Services Group of LNS will provide important system management expertise and personnel (on average about 0.25 FTE). The CMS HI funds will support a system manager at the 0.25 FTE level to assist with daily operational tasks. Connection to external networks, as well as management and maintenance of the network connections within MIT, are provided by MIT Information Services and Technology (IST).

In addition to continuing its ongoing role in generating and analyzing simulated events, the MIT facility will serve a critical role in analyzing CMS HI data, most especially in the early period. The joint HI-Tier2 facility is now, and will continue to be, a fully functional CMS analysis center with all necessary installed software and other services. Therefore, the existing and all future installed CPUs will be ready immediately to perform the full suite of CMS functions with minimal effort. Depending on the early luminosity of the HE data from CMS, some fraction of the Tier2 CPU power may be available as well for opportunistic use for HI analysis. Combined with the existing CPU power devoted to HI at MIT, and leveraging the cooperative arrangement with the Tier2 system, the MIT component of the full CMS HI computing effort can provide an important resource for data analysis as the Vanderbilt facility is installed and commissioned.

Since the disk space at MIT is distributed among many computers, the allocation of disk space will be very straightforward. Already now the disk space is divided between heavy-ion pools and the T2 pools. The CPU usage prioritization and monitoring will be done using the Condor system. We will share CPU according to the fraction of hardware and manpower investment.
Condor allows reservation of a minimal number of slots and usage of all the farm resources during periods of reduced activity. There is also the possibility of a time-share: using the full power of the center for a period of time. This may be essential in the early days of running, where quick turnaround of data analysis will be essential. Analysis of the simulated and reconstructed data can be done at MIT in a very straightforward fashion. The part of the distributed disk space located on the HI computers will be divided into simulation and analysis areas. Analysis and simulation jobs running on any of the center's CPUs will be able to independently access their respective areas.

[8] These are the core numbers at 8.5 HS06/core. As discussed in Section 3, we expect the cost/HS06 unit to drop substantially over the five year period, resulting in fewer cores ultimately being purchased.
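Footnote [8] fixes the conversion at 8.5 HS06 per core, so the 664 cores quoted above correspond to roughly 5600 HS06. The sketch below illustrates the point made in that footnote, namely that a requirement fixed in HS06 maps onto fewer cores as per-core performance improves; the 6000 HS06 target and the improved per-core values in it are placeholder assumptions, not figures from the data model appendix.

# Illustration of the point in footnote [8]: a computing requirement is fixed
# in HS06, so the number of cores purchased depends on the HS06/core of the
# hardware available at purchase time. The 8.5 HS06/core figure is from the
# footnote; the 6000 HS06 target and the improved per-core values are
# placeholder assumptions.

import math

def cores_needed(target_hs06: float, hs06_per_core: float) -> int:
    """Cores required to provide target_hs06 of aggregate compute power."""
    return math.ceil(target_hs06 / hs06_per_core)

target = 6000.0                       # hypothetical aggregate requirement (HS06)
for per_core in (8.5, 12.0, 17.0):    # footnote value and assumed future values
    print(f"{per_core:5.1f} HS06/core -> {cores_needed(target, per_core)} cores")
# -> 706, 500, and 353 cores for the same 6000 HS06 requirement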

There will be one server dedicated to simple interactive activities by the users. The HI users will be able to run short interactive jobs on this dedicated machine. In addition they will be able to submit Condor jobs directly and, preferably, use the CMS CRAB job submission system. All of these facilities are available now for the use of CMS HI collaborators. The division of resources between T2-like simulation activities and T3-like analysis activities will be more logical than physical. MIT already supports a large user base from among the US CMS HI collaborators. There is extensive know-how within the MIT group on how to do analysis. The system is designed to smoothly and quickly accept new computing resources and continue to service the CMS HI program.

The MIT-Bates center has one dedicated server that hosts home directories for all the interactive users. In addition, four of the worker nodes are configured to allow occasional interactive activity. The interactive machines are all connected to a central network switch that allows 1 Gbps access to dcache storage on any other machine at MIT-Bates and access to machines outside of MIT. The center presently supports about 30 very active users, with 20 from the CMS HI community. Each user has access to about 2 GB of backed-up disk space and has direct access to the dcache storage shared with MC simulation results. There is also about 1 TB of dedicated user scratch space. In anticipation of the incoming HI data we plan to upgrade the interactive server to more modern hardware with more fast interactive disk space. We expect to be able to handle as many as 100 users if needed, although it is more likely that the usage will be dominated by about 10 heavy ion users.

3 Proposal Budget

The budget for the development of the new HI T2 center at Vanderbilt is shown in Table 1. These costs are for the CPU nodes and the disk storage pricing. The basis for this pricing is the major cluster node purchases that ACCRE has made over the past several years. The experience has been that acquisitions at ACCRE could double the performance approximately every two years for the same purchase price. In the early part of this period we were increasing processing per core. After this initial stage, however, the processing per core has not changed substantially but the number of cores per server node has been steadily increasing.

For the attached pricing, the ACCRE financial analysis has used prices based on current bids for CY2010, then cut those prices by 25% for CY2011, another 25% for CY2012 (so that prices for CY2012 are about half of CY2010), and continued that trend for CY2013 and CY2014. We used the same rough price forecasting for disk. Although these projections seem reasonably conservative, the outer 2-3 years are considered only a rough estimate.

There is one factor from the ACCRE experience that is contrary to the model in the above CMS-HI proposal budget. That factor is that total purchasing has not been decreasing as the unit pricing decreases. Rather, the total processing has increased at the same cost approximately every two years. This has occurred because many of the scientists who use ACCRE have increased their processing needs because they are able to take advantage of the reduced pricing to allow more complex science.
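To make the pricing rule explicit, the following minimal sketch applies the 25% annual reduction described above; the CY2010 base prices in it are placeholder values, not the actual ACCRE bids.

# Sketch of the price forecasting rule described above: start from a CY2010
# bid price and cut it by 25% in each subsequent year. The CY2010 base values
# below are placeholders, not the actual ACCRE bids.

base_prices = {"cost per core (USD)": 200.0, "disk cost per TB (USD)": 300.0}
years = ["CY2010", "CY2011", "CY2012", "CY2013", "CY2014"]

for item, cy2010_price in base_prices.items():
    forecast = [cy2010_price * 0.75 ** n for n in range(len(years))]
    print(item, [f"{p:.0f}" for p in forecast])

# Two successive 25% cuts give 0.75 * 0.75 = 0.5625 of the CY2010 price,
# i.e. roughly the "half of CY2010 by CY2012" quoted in the text.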
Based on the amount of dedicated CPU power and disk storage space, the ACCRE management has determined that 3 FTEs of annual support will be needed for the HI compute center at Vanderbilt. Vanderbilt proposes to contribute half the cost of that support each year during the proposal period. The staff at ACCRE have complete responsibility for maintaining the operations of all the hardware and will manage the external network connectivity. There will be a 24/7 trouble ticket system available at ACCRE to ensure a rapid fix of technical problems.

Table 1: Funding Profile for the Vanderbilt T2 Center

Compute/Disk Acquisitions
  New CPUs (cores)
  Total CPUs (cores)
  New Disk (TB)
  Total Disk (TB)

Hardware and Staffing Costs to the DOE
  Category               CY10      CY11      CY12      CY13      CY14      Total
  CPUs                   $130,550  $79,163   $208,075  $122,892  $0        $540,680
  Disk                   $139,500  $4,500    $36,750   $41,810   $0        $222,560
  Total Hardware         $270,050  $83,663   $244,825  $164,702  $0        $763,240
  Staffing (DOE Cost)    $181,950  $189,337  $198,175  $205,298  $214,000  $988,760
  Staff+Hardware Total   $452,000  $273,000  $443,000  $370,000  $214,000  $1,752,000

Staffing Support Decomposition (FTEs)
  By DOE
  By Vanderbilt
  Staffing (Vanderbilt Cost)  $181,629  $188,894  $197,717  $205,626  $213,851  $987,718

CPU, Disk, and FTE Cost Assumptions
  Cost/core with 3 GB
  Disk cost per TB
  Total cost per FTE     $121,086  $125,929  $131,812  $137,084  $142,567

The support of CMS-specific software at ACCRE will be shared by the HEP and the RHI groups at Vanderbilt. HI data processing operations at Vanderbilt will be the responsibility of members of the RHI group. Prof. Maguire will be set at 0.8 FTE, Prof. Velkovska will be set at 0.5 FTE, a research associate at Vanderbilt will be set at 0.5 FTE, and the set of four graduate students at Vanderbilt will be collectively assigned a net 0.25 FTE responsibility. In addition, a research associate currently at CERN will have a 0.5 FTE responsibility working with the DQM and data operations groups at the T0.

Table 2: Funding Profile for FNAL T1 Tape Archive
  Category          CY10     CY11      CY12     CY13     CY14     Total
  Tape Volume (PB)
  Cost to DOE       $94,000  $57,000*  $50,000  $85,000  $88,000  $374,000
  * Includes a $25,000 capital charge for the purchase of an LTO5 tape drive

The annual costs for supporting the HI tape archive component at the FNAL T1 are shown in Table 2. The projected tape volumes shown in this table correspond to the tape volume estimates given in the data model appendix Table 10. The incremental cost of hosting HI data at FNAL is estimated to be $110/tape slot, including overhead. This includes the media cost, the incremental tape library cost, and maintenance. The current technology being used by FNAL is LTO4 tapes at 800 GB/slot, filled to about 90%. From 2012 on we can anticipate the use of the LTO5 technology becoming available, which will double the per-slot capacity. In addition we foresee a dedicated tape drive for HI data transfers. We propose to put this item into the 2011 funding, at an estimated cost of $25,000, as only then would the LTO5 technology become available. We will continue to use the current LTO4 tape drives in the year before.
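The per-slot cost model just quoted can be turned into a quick estimate. In the sketch below, the $110 per slot, 800 GB per LTO4 slot, and 90% fill factor come from the text, while the 1 PB archive volume is a placeholder assumption rather than a value from Table 2 or the data model.

# Estimate of the incremental FNAL tape-archive cost using the per-slot model
# described above. The $110/slot, 800 GB/slot (LTO4), and 90% fill factor are
# quoted in the text; the 1.0 PB archive volume is a placeholder assumption.

def tape_archive_cost(volume_pb: float, gb_per_slot: float = 800.0,
                      fill_fraction: float = 0.9, usd_per_slot: float = 110.0):
    """Return (number of tape slots, cost in USD) to archive volume_pb."""
    usable_gb_per_slot = gb_per_slot * fill_fraction
    slots = volume_pb * 1e6 / usable_gb_per_slot      # PB -> GB (decimal)
    return slots, slots * usd_per_slot

slots, cost = tape_archive_cost(1.0)                   # hypothetical 1 PB year
print(f"{slots:.0f} LTO4 slots, about ${cost:,.0f}")   # -> ~1389 slots, ~$153k

# With LTO5 doubling the per-slot capacity from 2012 on, the same volume would
# need roughly half as many slots.
slots5, cost5 = tape_archive_cost(1.0, gb_per_slot=1600.0)
print(f"{slots5:.0f} LTO5 slots, about ${cost5:,.0f}")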

Table 3: Funding Profile for HI T2 Center at MIT

Hardware Acquisitions
  New CPUs (cores)
  Total CPUs (cores)
  New Disk (TB)
  Total Disk (TB)

Hardware Costs
  Category                 CY10      CY11     CY12     CY13     CY14     Total
  CPUs                     $42,000   $27,352  $15,400  $17,952  $19,008  $121,712
  Disk                     $30,000   $4,500   $6,000   $2,260   $1,500   $44,260
  Total Hardware           $72,000   $31,852  $21,400  $20,212  $20,508  $165,972
  Staff                    $38,000   $39,148  $40,600  $42,788  $43,492  $204,028
  Hardware+Staffing Total  $110,000  $71,000  $62,000  $63,000  $64,000  $370,000

The budget for the development of the HI T2 center at MIT is shown in Table 3. The required HS06 power, and the annual disk growth, for the MIT-Bates HI center are as detailed in the data model document. The same hardware unit cost assumptions are made for the MIT-Bates center as for the Vanderbilt center. It should be clear that although these two funding profile tables show the annual increase in compute power in terms of numbers of CPU cores, what is actually being done is to scale the compute purchases according to the required number of HS06 units, as described in the data model document appendix. The Moore's Law trend has been incorporated as per the previous discussion after Table 1, giving a four-fold decrease in compute cost per HS06 unit over the five year span of the proposal.

4 Summary

The updated proposal for CMS-HI computing in the US has been substantially revised from the initial version presented to the DOE-NP in May 2009. Most significantly, the new proposal for CMS-HI computing is far more integrated with the rest of CMS computing, and takes advantage of other CMS computing resources and infrastructure in the most coherent fashion. In particular, the CMS-HI computing model now incorporates initial raw data processing at the T0, and the rapid transfer of files from the T0 to the T1 center at FNAL, in a manner which is entirely consistent with how the pp data will be initially processed. This one change has accomplished two goals: the initial HI data processing becomes more robust, and the exposure of the DOE-NP to time-critical file transfer mandates becomes vastly reduced.


More information

Distributed Monte Carlo Production for

Distributed Monte Carlo Production for Distributed Monte Carlo Production for Joel Snow Langston University DOE Review March 2011 Outline Introduction FNAL SAM SAMGrid Interoperability with OSG and LCG Production System Production Results LUHEP

More information

Advancing the MRJ project

Advancing the MRJ project Advancing the MRJ project 2017.1.23 2017 MITSUBISHI HEAVY INDUSTRIES, LTD. All Rights Reserved. Overview The Mitsubishi Regional Jet (MRJ) delivery date is adjusted from mid-2018 to mid-2020 due to revisions

More information

Total Cost of Ownership: Benefits of ECM in the OpenText Cloud

Total Cost of Ownership: Benefits of ECM in the OpenText Cloud Total Cost of Ownership: Benefits of ECM in the OpenText Cloud OpenText Managed Services brings together the power of an enterprise cloud platform with the technical skills and business experience required

More information

High Throughput WAN Data Transfer with Hadoop-based Storage

High Throughput WAN Data Transfer with Hadoop-based Storage High Throughput WAN Data Transfer with Hadoop-based Storage A Amin 2, B Bockelman 4, J Letts 1, T Levshina 3, T Martin 1, H Pi 1, I Sfiligoi 1, M Thomas 2, F Wuerthwein 1 1 University of California, San

More information

System upgrade and future perspective for the operation of Tokyo Tier2 center. T. Nakamura, T. Mashimo, N. Matsui, H. Sakamoto and I.

System upgrade and future perspective for the operation of Tokyo Tier2 center. T. Nakamura, T. Mashimo, N. Matsui, H. Sakamoto and I. System upgrade and future perspective for the operation of Tokyo Tier2 center, T. Mashimo, N. Matsui, H. Sakamoto and I. Ueda International Center for Elementary Particle Physics, The University of Tokyo

More information

arxiv: v1 [physics.ins-det] 1 Oct 2009

arxiv: v1 [physics.ins-det] 1 Oct 2009 Proceedings of the DPF-2009 Conference, Detroit, MI, July 27-31, 2009 1 The CMS Computing System: Successes and Challenges Kenneth Bloom Department of Physics and Astronomy, University of Nebraska-Lincoln,

More information

REPORT 2015/149 INTERNAL AUDIT DIVISION

REPORT 2015/149 INTERNAL AUDIT DIVISION INTERNAL AUDIT DIVISION REPORT 2015/149 Audit of the information and communications technology operations in the Investment Management Division of the United Nations Joint Staff Pension Fund Overall results

More information

Reliability Engineering Analysis of ATLAS Data Reprocessing Campaigns

Reliability Engineering Analysis of ATLAS Data Reprocessing Campaigns Journal of Physics: Conference Series OPEN ACCESS Reliability Engineering Analysis of ATLAS Data Reprocessing Campaigns To cite this article: A Vaniachine et al 2014 J. Phys.: Conf. Ser. 513 032101 View

More information

Optimizing Parallel Access to the BaBar Database System Using CORBA Servers

Optimizing Parallel Access to the BaBar Database System Using CORBA Servers SLAC-PUB-9176 September 2001 Optimizing Parallel Access to the BaBar Database System Using CORBA Servers Jacek Becla 1, Igor Gaponenko 2 1 Stanford Linear Accelerator Center Stanford University, Stanford,

More information

The creation of a Tier-1 Data Center for the ALICE experiment in the UNAM. Lukas Nellen ICN-UNAM

The creation of a Tier-1 Data Center for the ALICE experiment in the UNAM. Lukas Nellen ICN-UNAM The creation of a Tier-1 Data Center for the ALICE experiment in the UNAM Lukas Nellen ICN-UNAM lukas@nucleares.unam.mx 3rd BigData BigNetworks Conference Puerto Vallarta April 23, 2015 Who Am I? ALICE

More information

L1 and Subsequent Triggers

L1 and Subsequent Triggers April 8, 2003 L1 and Subsequent Triggers Abstract During the last year the scope of the L1 trigger has changed rather drastically compared to the TP. This note aims at summarising the changes, both in

More information

SUBJECT: PRESTO operating agreement renewal update. Committee of the Whole. Transit Department. Recommendation: Purpose: Page 1 of Report TR-01-17

SUBJECT: PRESTO operating agreement renewal update. Committee of the Whole. Transit Department. Recommendation: Purpose: Page 1 of Report TR-01-17 Page 1 of Report TR-01-17 SUBJECT: PRESTO operating agreement renewal update TO: FROM: Committee of the Whole Transit Department Report Number: TR-01-17 Wards Affected: All File Numbers: 465-12, 770-11

More information

Governing Body 313th Session, Geneva, March 2012

Governing Body 313th Session, Geneva, March 2012 INTERNATIONAL LABOUR OFFICE Governing Body 313th Session, Geneva, 15 30 March 2012 Programme, Financial and Administrative Section PFA FOR INFORMATION Information and communications technology questions

More information

Compact Muon Solenoid: Cyberinfrastructure Solutions. Ken Bloom UNL Cyberinfrastructure Workshop -- August 15, 2005

Compact Muon Solenoid: Cyberinfrastructure Solutions. Ken Bloom UNL Cyberinfrastructure Workshop -- August 15, 2005 Compact Muon Solenoid: Cyberinfrastructure Solutions Ken Bloom UNL Cyberinfrastructure Workshop -- August 15, 2005 Computing Demands CMS must provide computing to handle huge data rates and sizes, and

More information

NC Education Cloud Feasibility Report

NC Education Cloud Feasibility Report 1 NC Education Cloud Feasibility Report 1. Problem Definition and rationale North Carolina districts are generally ill-equipped to manage production server infrastructure. Server infrastructure is most

More information

The evolving role of Tier2s in ATLAS with the new Computing and Data Distribution model

The evolving role of Tier2s in ATLAS with the new Computing and Data Distribution model Journal of Physics: Conference Series The evolving role of Tier2s in ATLAS with the new Computing and Data Distribution model To cite this article: S González de la Hoz 2012 J. Phys.: Conf. Ser. 396 032050

More information

Clustering and Reclustering HEP Data in Object Databases

Clustering and Reclustering HEP Data in Object Databases Clustering and Reclustering HEP Data in Object Databases Koen Holtman CERN EP division CH - Geneva 3, Switzerland We formulate principles for the clustering of data, applicable to both sequential HEP applications

More information

THE CUSTOMER SITUATION. The Customer Background

THE CUSTOMER SITUATION. The Customer Background CASE STUDY GLOBAL CONSUMER GOODS MANUFACTURER ACHIEVES SIGNIFICANT SAVINGS AND FLEXIBILITY THE CUSTOMER SITUATION Alliant Technologies is a Premier Service Provider for Red Forge Continuous Infrastructure

More information

DESY at the LHC. Klaus Mőnig. On behalf of the ATLAS, CMS and the Grid/Tier2 communities

DESY at the LHC. Klaus Mőnig. On behalf of the ATLAS, CMS and the Grid/Tier2 communities DESY at the LHC Klaus Mőnig On behalf of the ATLAS, CMS and the Grid/Tier2 communities A bit of History In Spring 2005 DESY decided to participate in the LHC experimental program During summer 2005 a group

More information

GRIDS INTRODUCTION TO GRID INFRASTRUCTURES. Fabrizio Gagliardi

GRIDS INTRODUCTION TO GRID INFRASTRUCTURES. Fabrizio Gagliardi GRIDS INTRODUCTION TO GRID INFRASTRUCTURES Fabrizio Gagliardi Dr. Fabrizio Gagliardi is the leader of the EU DataGrid project and designated director of the proposed EGEE (Enabling Grids for E-science

More information

Bill Boroski LQCD-ext II Contractor Project Manager

Bill Boroski LQCD-ext II Contractor Project Manager Bill Boroski LQCD-ext II Contractor Project Manager boroski@fnal.gov Robert D. Kennedy LQCD-ext II Assoc. Contractor Project Manager kennedy@fnal.gov USQCD All-Hands Meeting Jefferson Lab April 28-29,

More information

Organization/Office: Secretariat of the United Nations System Chief Executives Board for Coordination (CEB)

Organization/Office: Secretariat of the United Nations System Chief Executives Board for Coordination (CEB) United Nations Associate Experts Programme TERMS OF REFERENCE Associate Expert (JPO) INT-021-14-P014-01-V I. General Information Title: Associate Expert in Interagency Coordination / Special to the Director

More information

Data Transfers Between LHC Grid Sites Dorian Kcira

Data Transfers Between LHC Grid Sites Dorian Kcira Data Transfers Between LHC Grid Sites Dorian Kcira dkcira@caltech.edu Caltech High Energy Physics Group hep.caltech.edu/cms CERN Site: LHC and the Experiments Large Hadron Collider 27 km circumference

More information

1. Introduction. Outline

1. Introduction. Outline Outline 1. Introduction ALICE computing in Run-1 and Run-2 2. ALICE computing in Run-3 and Run-4 (2021-) 3. Current ALICE O 2 project status 4. T2 site(s) in Japan and network 5. Summary 2 Quark- Gluon

More information

New strategies of the LHC experiments to meet the computing requirements of the HL-LHC era

New strategies of the LHC experiments to meet the computing requirements of the HL-LHC era to meet the computing requirements of the HL-LHC era NPI AS CR Prague/Rez E-mail: adamova@ujf.cas.cz Maarten Litmaath CERN E-mail: Maarten.Litmaath@cern.ch The performance of the Large Hadron Collider

More information

Data preservation for the HERA experiments at DESY using dcache technology

Data preservation for the HERA experiments at DESY using dcache technology Journal of Physics: Conference Series PAPER OPEN ACCESS Data preservation for the HERA experiments at DESY using dcache technology To cite this article: Dirk Krücker et al 2015 J. Phys.: Conf. Ser. 66

More information

Machine Learning in Data Quality Monitoring

Machine Learning in Data Quality Monitoring CERN openlab workshop on Machine Learning and Data Analytics April 27 th, 2017 Machine Learning in Data Quality Monitoring a point of view Goal Maximize the best Quality Data for physics analysis Data

More information

ONLINE NON-DESTRUCTIVE EXAMINATION QUALIFICATION FOR MARINE SURVEYORS

ONLINE NON-DESTRUCTIVE EXAMINATION QUALIFICATION FOR MARINE SURVEYORS ONLINE NON-DESTRUCTIVE EXAMINATION QUALIFICATION FOR MARINE SURVEYORS Chris Cheetham (TWI Ltd) Andrew MacDonald (Lloyd s Register) David Howarth (Lloyd s Register) SYNOPSIS Lloyd's Register carries out

More information

CASE STUDY GLOBAL CONSUMER GOODS MANUFACTURER ACHIEVES SIGNIFICANT SAVINGS AND FLEXIBILITY THE CUSTOMER THE CHALLENGE

CASE STUDY GLOBAL CONSUMER GOODS MANUFACTURER ACHIEVES SIGNIFICANT SAVINGS AND FLEXIBILITY THE CUSTOMER THE CHALLENGE CASE STUDY GLOBAL CONSUMER GOODS MANUFACTURER ACHIEVES SIGNIFICANT SAVINGS AND FLEXIBILITY TenFour is a Premier Service Provider for Red Forge Continuous Infrastructure Service (CIS ). This case study

More information

HP s VLS9000 and D2D4112 deduplication systems

HP s VLS9000 and D2D4112 deduplication systems Silverton Consulting StorInt Briefing Introduction Particularly in today s economy, costs and return on investment (ROI) often dominate product selection decisions. However, gathering the appropriate information

More information

DR and EE Standards for SCE Buildings

DR and EE Standards for SCE Buildings Design & Engineering Services DR and EE Standards for SCE Buildings Prepared by: Design & Engineering Services Customer Service Business Unit Southern California Edison December 2011 Acknowledgements Southern

More information

NORTH CAROLINA NC MRITE. Nominating Category: Enterprise IT Management Initiatives

NORTH CAROLINA NC MRITE. Nominating Category: Enterprise IT Management Initiatives NORTH CAROLINA MANAGING RISK IN THE INFORMATION TECHNOLOGY ENTERPRISE NC MRITE Nominating Category: Nominator: Ann V. Garrett Chief Security and Risk Officer State of North Carolina Office of Information

More information

Future trends in distributed infrastructures the Nordic Tier-1 example

Future trends in distributed infrastructures the Nordic Tier-1 example Future trends in distributed infrastructures the Nordic Tier-1 example O. G. Smirnova 1,2 1 Lund University, 1, Professorsgatan, Lund, 22100, Sweden 2 NeIC, 25, Stensberggata, Oslo, NO-0170, Norway E-mail:

More information

Report on Collaborative Research for Hurricane Hardening

Report on Collaborative Research for Hurricane Hardening Report on Collaborative Research for Hurricane Hardening Provided by The Public Utility Research Center University of Florida To the Utility Sponsor Steering Committee January 2010 I. Introduction The

More information

UAE s National Integrated Planning for nuclear power infrastructure development

UAE s National Integrated Planning for nuclear power infrastructure development UAE s National Integrated Planning for nuclear power infrastructure development Technical Meeting on Topical Issues in the Development of Nuclear Power Infrastructure 2 February 2017 Linda Eid National

More information

Analytics-as-a-Service Firm Chooses Cisco Hyperconverged Infrastructure as a More Cost-Effective Agile Development Platform Compared with Public Cloud

Analytics-as-a-Service Firm Chooses Cisco Hyperconverged Infrastructure as a More Cost-Effective Agile Development Platform Compared with Public Cloud IDC ExpertROI SPOTLIGHT Analytics-as-a-Service Firm Chooses Cisco Hyperconverged Infrastructure as a More Cost-Effective Agile Development Platform Compared with Public Cloud Sponsored by: Cisco Matthew

More information

OCM ACADEMIC SERVICES PROJECT INITIATION DOCUMENT. Project Title: Online Coursework Management

OCM ACADEMIC SERVICES PROJECT INITIATION DOCUMENT. Project Title: Online Coursework Management OCM-12-025 ACADEMIC SERVICES PROJECT INITIATION DOCUMENT Project Title: Online Coursework Management Change Record Date Author Version Change Reference March 2012 Sue Milward v1 Initial draft April 2012

More information

ACCI Recommendations on Long Term Cyberinfrastructure Issues: Building Future Development

ACCI Recommendations on Long Term Cyberinfrastructure Issues: Building Future Development ACCI Recommendations on Long Term Cyberinfrastructure Issues: Building Future Development Jeremy Fischer Indiana University 9 September 2014 Citation: Fischer, J.L. 2014. ACCI Recommendations on Long Term

More information

Cellular Phone Usage and Administration

Cellular Phone Usage and Administration Program Evaluation and Audit Cellular Phone Usage and Administration May 13, 2008 INTRODUCTION Background Many areas of the Metropolitan Council use cellular telephones to enhance and improve critical

More information

UNCLASSIFIED. R-1 ITEM NOMENCLATURE PE D8Z: Data to Decisions Advanced Technology FY 2012 OCO

UNCLASSIFIED. R-1 ITEM NOMENCLATURE PE D8Z: Data to Decisions Advanced Technology FY 2012 OCO Exhibit R-2, RDT&E Budget Item Justification: PB 2012 Office of Secretary Of Defense DATE: February 2011 BA 3: Advanced Development (ATD) COST ($ in Millions) FY 2010 FY 2011 Base OCO Total FY 2013 FY

More information

Resolution: Advancing the National Preparedness for Cyber Security

Resolution: Advancing the National Preparedness for Cyber Security Government Resolution No. 2444 of February 15, 2015 33 rd Government of Israel Benjamin Netanyahu Resolution: Advancing the National Preparedness for Cyber Security It is hereby resolved: Further to Government

More information

The Mission of the Abu Dhabi Smart Solutions and Services Authority. Leading ADSSSA. By Michael J. Keegan

The Mission of the Abu Dhabi Smart Solutions and Services Authority. Leading ADSSSA. By Michael J. Keegan Perspective on Digital Transformation in Government with Her Excellency Dr. Rauda Al Saadi, Director General, Abu Dhabi Smart Solutions and Services Authority By Michael J. Keegan Today s digital economy

More information

The Computation and Data Needs of Canadian Astronomy

The Computation and Data Needs of Canadian Astronomy Summary The Computation and Data Needs of Canadian Astronomy The Computation and Data Committee In this white paper, we review the role of computing in astronomy and astrophysics and present the Computation

More information

A L I C E Computing Model

A L I C E Computing Model CERN-LHCC-2004-038/G-086 04 February 2005 A L I C E Computing Model Computing Project Leader Offline Coordinator F. Carminati Y. Schutz (Editors on behalf of the ALICE Collaboration) i Foreword This document

More information

University of Wyoming Mobile Communication Device Policy Effective January 1, 2013

University of Wyoming Mobile Communication Device Policy Effective January 1, 2013 University of Wyoming Mobile Communication Device Policy Effective January 1, 2013 Introduction and Purpose This policy allows the University to meet Internal Revenue Service (IRS) regulations and its

More information

The Global Research Council

The Global Research Council The Global Research Council Preamble The worldwide growth of support for research has presented an opportunity for countries large and small to work in concert across national borders. Cooperation and

More information

DRS Policy Guide. Management of DRS operations is the responsibility of staff in Library Technology Services (LTS).

DRS Policy Guide. Management of DRS operations is the responsibility of staff in Library Technology Services (LTS). Harvard University Library Office for Information Systems DRS Policy Guide This Guide defines the policies associated with the Harvard Library Digital Repository Service (DRS) and is intended for Harvard

More information

The INFN Tier1. 1. INFN-CNAF, Italy

The INFN Tier1. 1. INFN-CNAF, Italy IV WORKSHOP ITALIANO SULLA FISICA DI ATLAS E CMS BOLOGNA, 23-25/11/2006 The INFN Tier1 L. dell Agnello 1), D. Bonacorsi 1), A. Chierici 1), M. Donatelli 1), A. Italiano 1), G. Lo Re 1), B. Martelli 1),

More information

Overview of ATLAS PanDA Workload Management

Overview of ATLAS PanDA Workload Management Overview of ATLAS PanDA Workload Management T. Maeno 1, K. De 2, T. Wenaus 1, P. Nilsson 2, G. A. Stewart 3, R. Walker 4, A. Stradling 2, J. Caballero 1, M. Potekhin 1, D. Smith 5, for The ATLAS Collaboration

More information

Convergence of BCM and Information Security at Direct Energy

Convergence of BCM and Information Security at Direct Energy Convergence of BCM and Information Security at Direct Energy Karen Kemp Direct Energy Session ID: GRC-403 Session Classification: Advanced About Direct Energy Direct Energy was acquired by Centrica Plc

More information

Users and utilization of CERIT-SC infrastructure

Users and utilization of CERIT-SC infrastructure Users and utilization of CERIT-SC infrastructure Equipment CERIT-SC is an integral part of the national e-infrastructure operated by CESNET, and it leverages many of its services (e.g. management of user

More information

Global Infrastructure Connectivity Alliance Initiative

Global Infrastructure Connectivity Alliance Initiative Global Infrastructure Connectivity Alliance Initiative 1. Background on Global Infrastructure Connectivity Global Infrastructure Connectivity refers to the linkages of communities, economies and nations

More information

The LHC Computing Grid

The LHC Computing Grid The LHC Computing Grid Visit of Finnish IT Centre for Science CSC Board Members Finland Tuesday 19 th May 2009 Frédéric Hemmer IT Department Head The LHC and Detectors Outline Computing Challenges Current

More information

WHO Secretariat Dr Oleg Chestnov Assistant Director-General Noncommunicable Diseases and Mental Health

WHO Secretariat Dr Oleg Chestnov Assistant Director-General Noncommunicable Diseases and Mental Health WHO Secretariat Dr Oleg Chestnov Assistant Director-General Noncommunicable Diseases and Mental Health WHO Secretariat Dr Douglas Bettcher Director Department for Prevention of NCDs UN General Assembly

More information

Data oriented job submission scheme for the PHENIX user analysis in CCJ

Data oriented job submission scheme for the PHENIX user analysis in CCJ Journal of Physics: Conference Series Data oriented job submission scheme for the PHENIX user analysis in CCJ To cite this article: T Nakamura et al 2011 J. Phys.: Conf. Ser. 331 072025 Related content

More information

Management s Response to the Auditor General s Review of Management and Oversight of the Integrated Business Management System (IBMS)

Management s Response to the Auditor General s Review of Management and Oversight of the Integrated Business Management System (IBMS) APPENDI 2 ommendation () () 1. The City Manager in consultation with the Chief Information Officer give consideration to the establishment of an IBMS governance model which provides for senior management

More information

Version v November 2015

Version v November 2015 Service Description HPE Quality Center Enterprise on Software-as-a-Service Version v2.0 26 November 2015 This Service Description describes the components and services included in HPE Quality Center Enterprise

More information

The JINR Tier1 Site Simulation for Research and Development Purposes

The JINR Tier1 Site Simulation for Research and Development Purposes EPJ Web of Conferences 108, 02033 (2016) DOI: 10.1051/ epjconf/ 201610802033 C Owned by the authors, published by EDP Sciences, 2016 The JINR Tier1 Site Simulation for Research and Development Purposes

More information

IBM Corporation. Global Energy Management System Implementation: Case Study. Global

IBM Corporation. Global Energy Management System Implementation: Case Study. Global Energy Management System Implementation: Case Study IBM Corporation ISO 50001 Registration: Results and Benefits It takes a global team to drive real success. Business case for energy management IBM is

More information

Student Union Social Programming Board Constitution

Student Union Social Programming Board Constitution Student Union Social Programming Board Constitution Preamble The Social Programming Board (SPB) is an Executive Entity of the Student Union at Washington University in Saint Louis, charged with providing

More information

University of Hawaii Hosted Website Service

University of Hawaii Hosted Website Service University of Hawaii Hosted Website Service Table of Contents Website Practices Guide About These Practices 3 Overview 3 Intended Audience 3 Website Lifecycle 3 Phase 3 Begins 3 Ends 3 Description 3 Request

More information

A Generic Multi-node State Monitoring Subsystem

A Generic Multi-node State Monitoring Subsystem A Generic Multi-node State Monitoring Subsystem James A. Hamilton SLAC, Stanford, CA 94025, USA Gregory P. Dubois-Felsmann California Institute of Technology, CA 91125, USA Rainer Bartoldus SLAC, Stanford,

More information

Russ Housley 21 June 2015

Russ Housley 21 June 2015 Introduction to the Internet Engineering Task Force Russ Housley 21 June 2015 Internet Engineering Task Force We make the net work The mission of the IETF is to produce high quality, relevant technical

More information

RESEARCH DATA DEPOT AT PURDUE UNIVERSITY

RESEARCH DATA DEPOT AT PURDUE UNIVERSITY Preston Smith Director of Research Services RESEARCH DATA DEPOT AT PURDUE UNIVERSITY May 18, 2016 HTCONDOR WEEK 2016 Ran into Miron at a workshop recently.. Talked about data and the challenges of providing

More information

The IDN Variant TLD Program: Updated Program Plan 23 August 2012

The IDN Variant TLD Program: Updated Program Plan 23 August 2012 The IDN Variant TLD Program: Updated Program Plan 23 August 2012 Table of Contents Project Background... 2 The IDN Variant TLD Program... 2 Revised Program Plan, Projects and Timeline:... 3 Communication

More information

Tech Data s Acquisition of Avnet Technology Solutions

Tech Data s Acquisition of Avnet Technology Solutions Tech Data s Acquisition of Avnet Technology Solutions Creating a Premier Global IT Distributor: From the Data Center to the Living Room September 19, 2016 techdata.com 1 Forward-Looking Statements Safe

More information

CONCLUSIONS AND RECOMMENDATIONS

CONCLUSIONS AND RECOMMENDATIONS Chapter 4 CONCLUSIONS AND RECOMMENDATIONS UNDP and the Special Unit have considerable experience in South-South cooperation and are well positioned to play a more active and effective role in supporting

More information

TIER Program Funding Memorandum of Understanding For UCLA School of

TIER Program Funding Memorandum of Understanding For UCLA School of TIER Program Funding Memorandum of Understanding For UCLA School of This Memorandum of Understanding is made between the Office of Information Technology (OIT) and the School of ( Department ) with reference

More information

Continuing Professional Education Policy

Continuing Professional Education Policy Continuing Professional Education Policy March 1, 2017 TABLE OF CONTENTS Introduction 3 CPE Policy Background 4 CPE Policy Statement 4 The Credit System 5 The Policy Explained: Questions & Answers 6 Appendix

More information

Best practices in IT security co-management

Best practices in IT security co-management Best practices in IT security co-management How to leverage a meaningful security partnership to advance business goals Whitepaper Make Security Possible Table of Contents The rise of co-management...3

More information

Academic Program Review at Illinois State University PROGRAM REVIEW OVERVIEW

Academic Program Review at Illinois State University PROGRAM REVIEW OVERVIEW Academic Program Review at Illinois State University PROGRAM REVIEW OVERVIEW For Research and Service Centers Submitting Self-Study Reports Fall 2017 INTRODUCTION Primary responsibility for maintaining

More information

IUPUI eportfolio Grants Request for Proposals for Deadline: March 1, 2018

IUPUI eportfolio Grants Request for Proposals for Deadline: March 1, 2018 IUPUI eportfolio Grants Request for Proposals for 2018-2019 Deadline: March 1, 2018 Purpose IUPUI eportfolio Grants are intended to support the eportfolio Initiative s mission: The IUPUI eportfolio Initiative

More information

Computing / The DESY Grid Center

Computing / The DESY Grid Center Computing / The DESY Grid Center Developing software for HEP - dcache - ILC software development The DESY Grid Center - NAF, DESY-HH and DESY-ZN Grid overview - Usage and outcome Yves Kemp for DESY IT

More information

Status of KISTI Tier2 Center for ALICE

Status of KISTI Tier2 Center for ALICE APCTP 2009 LHC Physics Workshop at Korea Status of KISTI Tier2 Center for ALICE August 27, 2009 Soonwook Hwang KISTI e-science Division 1 Outline ALICE Computing Model KISTI ALICE Tier2 Center Future Plan

More information