The Project Management and Acquisition Plan for the CMS-HI Tier 2 Computing Facility at Vanderbilt University

Size: px
Start display at page:

Download "The Project Management and Acquisition Plan for the CMS-HI Tier 2 Computing Facility at Vanderbilt University"

Transcription

1 The Project Management and Acquisition Plan for the CMS-HI Tier 2 Computing Facility at Vanderbilt University R. Granier de Cassagnac 1, C. Kennedy 2, C. Maguire 2,4, G. Roland 3, P. Sheldon 2, A. Tackett 2, B. Wyslouch 1,3 November 1, Ecole Polytechnique/LLR, Paris, France 2 Vanderbilt University, Department of Physics and Astronomy, Nashville, TN Massachusetts Institute of Technology, Laboratory for Nuclear Science, Cambridge, MA Tier 2 Project Director 1

2 Contents 1 Executive Summary 1 2 Introduction Physics Motivation Computing Model Overview The Computing Management Structures for US CMS-HI CMS-HI Internal Computing Organization ACCRE Computing Organization Within Vanderbilt University Computing Specifications Projected Raw Data Growth Projected Raw Data Reconstruction Requirement Projected Data Analysis Computing Requirement Projected Simulation Computing Requirement Projected Integrated Analysis and Simulation Load Projected Disk and Tape Storage Requirements Procurement and Implementation Plan External Networking Computing Nodes Disk Storage and Internal Networking L-Store Storage System Basis Storage Connectivity Depot System Internal Networking CMSSW Installation and Support Implementation Milestones and Deliverables Cost Summary 19 7 Monitoring and Reporting Procedures Internal Review Systems External Review Systems Allocation of Opportunistic Cycles to Other DOE-NP Programs Glossary of Acronyms 21 i

3 List of Figures 1 Management Organization Chart for the Vanderbilt Tier 2 equipment project Management Organization Chart for the US CMS-HI experiment Organization chart for the CMS-HI computing tasks Management Organization for ACCRE at Vanderbilt University Hardware design planned for CMS-HI computing at the ACCRE facility List of Tables 1 ACCRE Staff Effort on the CMS-HI Tier 2 Project Projected integrated luminosity and event totals for the CMS heavy ion runs Projected Raw Data Reconstruction Computing Effort Projected Data Analysis Computing Load Projected Simulation Computing Load Integrated Tier 2 Computing Load Compared to Total Available Resources Disk and Tape Storage for Raw Data Processing Schedule of Milestones, Deliverables, and Reviews For the Vanderbilt Tier Projected Expenditures for the Vanderbilt Tier 2 Center Funding Profile for Fermilab Tier 1 Tape Archive ii

4 1 Executive Summary This Tier 2 Project Management and Acquisition Plan (PMAP) 1 describes the steps which will be taken to create and monitor the major new computing resource for the CMS-HI program in the US. This resource is to be located at the Advanced Computing Center for Research and Education, ACCRE [1], on the campus of Vanderbilt University. The plan first describes the scope, requirements, and goals of this new Tier 2, the roles and responsibilities of the institutions participating in its development, the schedule of acquisitions, and the review systems which will be installed to monitor its operational performance for the Office of Nuclear Physics within the DOE Office of Nuclear Science. The plan has been guided by the reports of the external review panels which scrutinized the initial proposals for this resource at two meetings conducted on May 11, 2009 at DOE headquarters in Germantown, and in a follow-up review on June 2, 2010 held at the ACCRE facility itself. This PMAP forms the factual basis of three Memoranda of Understanding between Vanderbilt University CMS-HI group, the ACCRE computer center at Vanderbilt, the Fermilab Tier 1 project, and the CMS experiment at the LHC. All signees to those MOUs are being provided a copy of this PMAP. 2 Introduction 2.1 Physics Motivation Heavy ion collisions at the Large Hadron Collider (LHC) will allow an unprecedented expansion of the study of QCD in systems with extremely high energy density. Results from the Relativistic Heavy Ion Collider (RHIC) suggest that in high energy heavy ion collisions an equilibrated, strongly-coupled partonic system is formed. There is convincing evidence that this dense partonic medium is highly interactive, perhaps best described as a quark-gluon liquid, and is also almost opaque to fast partons. In addition, many surprisingly simple empirical relationships describing the global characteristics of particle production have been found. An extrapolation to LHC energies suggests that the heavy ion program has significant potential for major discoveries. Heavy ion studies at the LHC will either confirm and extend the theoretical picture emerging from the RHIC studies, or else challenge and redirect our understanding of strongly interacting matter at extreme densities. This will be accomplished by extending existing studies over a dramatic increase in energy and through a broad range of novel probes accessible at the LHC. These probes include high p T jets and photons, Z 0 bosons, the Υ states, D and B mesons, and high-mass dileptons. An overview of the CMS heavy ion detector capabilities and physics program can be found in [2]. 2.2 Computing Model Overview The CMS experiment has developed a comprehensive computing model which is described thoroughly in the CMS Computing Project Technical Design Report [3]. As far as is practical, the CMS-HI compute model will follow this overall design, departing from that framework only to suit the special needs and circumstances of the Heavy Ion research program in CMS. The cornerstone of the CMS compute model 1 A summary of acronym definitions is contained in a glossary section at the end of this document. Acronyms which are generally familiar to the relativistic heavy ion scientific community such as MIT and QCD will not be spelled out in this document text, but are given in the glossary for completeness. 1

5 is the extensive use of the Worldwide LHC Computing Grid (WLCG) infrastructure which has been built to serve the needs of all the LHC experiments in the participating nations. The components of this grid network are described elsewhere [4, 5, 6, 7, 8]. The basic organization of CMS computing is according to a multi-tiered system. The primary central tier for each LHC experiment is the Tier 0 facility located at CERN itself. For proton proton (pp) data, the Tier 0 site will rapidly process the raw data files into reconstruction (RECO) and Event Summary Data (ESD) files, using calibrations derived previously from the raw data files. Various participating nations have a Tier 1 site linked to the Tier 0 site, and the Tier 1 sites will receive designated fractions of the raw data, and the corresponding RECO and ESD file sets. The CMS Tier 1 site in the U.S. is at Fermilab. The Tier 1 sites will process the RECO files into Analyzed Object Data (AOD) files. Associated with each nation s Tier 1 site are several Tier 2 sites which will receive the AOD files from the Tier 1 site. These AOD files are scanned by physics analysis groups ( PAG ) located at individual institution sites which form the Tier 3 layer in CMS. Like the proton raw data, the heavy ion raw data will remain on disk buffers at the Tier 0 center for a few days at most, during which time these data undergo a prompt reconstruction pass using initial alignment and calibration constants. All the HI raw data and the output of the prompt reconstruction pass will be transferred to a tape archive center at the Fermilab Tier 1 center. While these files are still resident on disk at Fermilab they will be copied to disk storage at a new, enhanced Tier 2 center located at Vanderbilt University. The prompt reconstruction pass production will be analyzed at this Tier 2 by the CMS-HI physics groups. New reconstruction passes will be made on the raw data at Vanderbilt using improved alignment and calibration constants and upgraded reconstruction software, on a similar re-reco cycle as for the rest of CMS data production. The re-reco output will be the basis for further analyses by the CMS-HI physics groups. It also is anticipated that the Vanderbilt Tier 2 center will deliver some amounts of reconstruction files to at least four non-us Tier 2 centers in CMS, as well as to the existing HI data analysis center located at MIT which is being significantly augmented for this purpose. These other sites will contribute importantly to the physics analysis production in CMS-HI. Finally, the HI raw data will also be archived to tape at the Tier 0 center, where that archive is not intended to be re-read but serves as a second back-up copy for emergency purposes. The following section will present the management structure for the Vanderbilt Tier 2 computing project and its relation to the rest of the US CMS-HI project. Next the technical specifications of the Vanderbilt Tier 2 computing project are described. After that, the procurement and implementation plan for meeting these specifications will be described. There then follows the anticipated cost and schedule summary. Finally, the monitoring and reporting procedures, as well as the external review systems, will be presented. 3 The Computing Management Structures for US CMS-HI The Vanderbilt Tier 2 project is an award of equipment funds by the U.S. Department of Energy, Office of Nuclear Science in support of the research mission of the US CMS-HI experiment. 
The management organization for the Vanderbilt Tier 2 project is shown in Fig. 1. Prof. Charles Maguire of Vanderbilt University is the designated Project Manager having ultimate responsibility for the disposition of the project funds. In this role he will be assisted by a Technical Advisory board chaired by Prof. Boleslaw 2

6 Technical Advisory Commi=ee Bolek Wyslouch, Chair HI Physics Convenors (2) Tier0 Data Ops Liaison FNAL Tier1 Liaison ACCRE Technical Manager ACCRE OperaIons Manager Financial Advisory Board Carie Kennedy, Chair Physics Finance Staff Office of Nuclear Physics Program Manager Gulshan Rai Vanderbilt Tier2 Project Manager Charles Maguire CMS- HI ProducGon Local Manager VU Post- Doc TBA Tier2 Review Board Ken Bloom, Chair VU HEP Technical Liaison Dan Engh Figure 1: Management Organization Chart for the Vanderbilt Tier 2 equipment project Wyslouch, who is the overall US CMS-HI Project Manager. The technical advisory board will consist of the two Heavy Ion physics conveners designated by CMS management, liaison persons from the CMS Tier 0 and FNAL Tier 1 data operations groups, and the technical and operations managers of the ACCRE organization. It will be the responsibility of this technical advisory board to provide input to the Tier 2 Project Manager on the decisions for annual equipment purchases and to advise on the operational policies and priorities for the Vanderbilt Tier 2 facility. It is expected that the advisory committee will meet on a monthly basis while the Tier 2 is being constructed. A second advisory committee consisting of the ACCRE financial manager and the financial staff of the Vanderbilt Department of Physics and Astronomy will monitor the expenditures on this project for both hardware and ACCRE technical staff. There is already in the US CMS organization a Tier 2 review board headed by Prof. Ken Bloom of the University of Nebraska. For the past several years this board has been monitoring the performance of the existing US Tier 2 facilities which are associated with the Fermilab Tier 1 facility. There are bi monthly phone conferences reporting on the US Tier 2 operations, and quarterly meetings at which more detailed presentations are given by representatives from each of the Tier 2 facilities on all aspects of their CMS operations. So it is completely appropriate that the Vanderbilt Tier 2 be part of this ongoing review process. In fact, the Vanderbilt site began participation in the bi-monthly phone calls as of September A Vanderbilt post-doc will be assigned as the local data production manager and will report to the Tier 2 Project Manager. The local data production manager will be responsible for the daily 3

7 operations of the Tier 2 in concert with counterparts at the Fermilab Tier 1 and at the CERN Tier 0 data operations group. The local data manager will in turn supervise the graduate students in the Vanderbilt CMS-HI group who will be assigned tasks associated with the running and monitoring of data production jobs. Finally, at Vanderbilt the CMS-HEP group has been responsible for many of the middle-ware software infrastructure and hardware present at ACCRE. This group, although not under the supervision of the Tier 2 Project Manager, will continue to support these activities at ACCRE. Research Assistant Professor Dan Engh is the current liaison with this HEP support group at Vanderbilt. The US CMS-HI experiment management organization structure is depicted in Fig. 2. As already stated Prof. Boleslaw Wyslouch is the overall Project Manager reporting to the US DOE-NP. Prof. David Hofman of UIC serves as the Deputy Project Manager. There are three major systems in the CMS-HI project: the High Level Trigger Farm supervised by Prof. Gunther Roland of MIT, the Zero Degree Calorimeter supervised by the Prof. Michael Murphy of the University of Kansas, and the Offline Computing System supervised by Prof. Charles Maguire of Vanderbilt University. The HLT and the ZDC are both located on-site at CERN. The major component of the CMS-HI Offline Computing System is the Tier-2 computing facility located at Vanderbilt University. Figure 2: Management Organization Chart for the US CMS-HI experiment 4

8 HLT Operations Christof Roland Coordinates with the CMS HLT group to design the DAQ bandwidth Coo for HI running DQM Operations Julia Velkovska Supervises the on-line and off-line data quality monitoring Coo during HI running T0 Operations Markus Klute Coordinates the HI data operations at the T0 for prompt reco Coo and off-site transfers Project Director Bolek Wyslouch VU T2 Operations Charles Maguire MIT T2 Operations Christoph Paus Manages the raw data reconstruction and the analysis passes Coo at the Vanderbilt T2 site Manages the overall functioning of the MIT T2 site for CMS Coo simulations and analysis Non-US T2 Operations Coordinates the activities at the non-us R. Granier de Cassagnac T2 sites (Russia, Coo France, Brazil, Turkey) Software Coordinator Edward Wenger Coordinates with the CMS software group for the supervision Coo of HI code releases Simulations Manager Wei Li Coordinates the HI simulation production requests with the Coo CMS data operations group Analysis Coordinator Gunther Roland Overseas the analysis work of the HI Physics Interest Coo Groups Figure 3: Organization chart for the CMS-HI computing tasks 3.1 CMS-HI Internal Computing Organization In order to carry out the various HI computing operations, the computing organization for CMS-HI is as shown in Figure 3. Nine major computing responsibilities are depicted in this figure, along with the names of the persons assigned each responsibility. These nine persons form the internal computing committee for CMS-HI. That computing committee is chaired by the CMS-HI-US Projector Director, Bolek Wyslouch, with deputy chair, Charles Maguire, who also serves as the Off-Line Computing Director. In this role Prof. Maguire reports on behalf of the HI group to the CMS Computing Review Board (CRB). The CRB sets computing policies for all of CMS and reviews their effectiveness at each computing site. Some particular aspects of the CMS-HI computing responsibilities are worth elaborating in order to illustrate the close cooperation which has already been attained between HI computing operations and the rest of CMS computing: HLT Operations: The algorithms needed for the HI trigger decisions were the subject of in-depth review in CMS early in 2009, as well as at the May 2009 DOE-NP review. These algorithms are judged ready for the tag-and-pass commissioning run in November

9 DQM Operations: Prof. Velkovska spent her sabbatical year at CERN learning the standard CMS DQM operations. A research associate from the Vanderbilt University RHI group, Pelin Kurt, has been stationed at CERN since December 2009 with a primary responsibility for interfacing with the CMS DQM operations group. Vanderbilt University itself funded in 2009 a 30 m 2, CMS data operations center in a prime location of its Physics Department building in support of the DQM responsibility taken on by Prof. Velkovska. This data center began operations in early March 2010, just after the pp collisions restarted at the LHC. The center also has consoles dedicated to the monitoring the off-line data reconstruction responsibility which the Vanderbilt group has assumed. Tier 0 Operations: Dr. Klute has worked in person with many of the CMS Tier 0 data operations group (Dirk Hufnagel, Josh Bendavid, Stephen Gowdy, and David Evans) and has responsibility for guiding them through the processing of the HI data at the Tier 0, and its export to the Tier 1 site at Fermilab. Naturally, the HLT, DQM, and Tier 0 operations responsibles in HI will be in constant contact during the HI running, as these three tasks are mutually inter-dependent. VU Tier 2 Operations: Prof. Maguire has been working with the ACCRE technical staff for the past two years in order to bring that facility into operations for CMS-HI data reconstruction and analysis. Prof. Maguire will also serve as the primary liaison with the Fermilab Tier 1 operations group while data are being transferred between Fermilab and Vanderbilt. MIT HI Analysis Center Operations: This center at MIT-Bates is funded as a separate project, but it will be closely linked and synchronized in operations with the main Vanderbilt Tier 2 center. The MIT HI Analysis center will be responsible for the following: 1) Generating and analyzing simulated data for heavy ion events. 2) Functioning as an additional analysis center for the CMS-HI Physics Analysis Groups The organization of the joint HE-HI facility at MIT-Bates will be overseen by Profs. Boleslaw Wyslouch and Christoph Paus. Maxim Goncharov is the overall manager. Wei Li of the MIT heavy ion group provides part time support for the grid operations and he is in change of the overall simulations for the CMS HI program. The long-term plan is for the HI and HE groups to support one post-doctoral associate each who will spend roughly half of their time on computer facility management. In addition, on average one to three graduate students from each of the groups will devote a fraction of their time to support the facility operations. The Computer Services Group of LNS will provide important system management expertise and personnel. Non-US Tier 2 Operations: Dr. Raphael Granier de Cassagnac will be coordinating the transfer of HI reconstruction production to, and the analysis production activities at, the non-us Tier 2 centers. Having managed the transfer of the muon raw data stream from RHIC and its subsequent reconstruction at the Computer Center France complex during several years for the PHENIX collaboration, Dr. Granier de Cassagnac brings valued experience to CMS-HI in this critical role. Software Manager: Dr. Wenger has been deeply involved with the CMS software release validation group to bring the HI data processing software to be consistent with the implementation of the pp data processing software. 
This had not been the case prior to Over the last year, many aspects of the heavy-ion software have become more closely integrated with the rest of the CMS 6

10 program. For the first time in 2009, all simulation and reconstruction code has been published in the current software release, with new features continually being added according to the CMS release schedule. This has included the designation of a heavy-ion scenario in the production scripts, allowing a host of small modifications to be picked up by the standard pp work-flows. Close collaboration with the coordinators of the generation and simulation groups has allowed the implementation of a powerful new event embedding tool, especially suited to the needs of the heavy-ion program. This tool has recently been incorporated into three new release validation work-flows, for verifying the performance of the photon, jet, muon and track reconstruction algorithms in the heavy-ion environment for each new software pre-release. Simulations Manager: Likewise, the simulation production for HI purposes has been made part of the standards simulation production cycles managed by the center CMS data operations group. Specific HI simulation requests are funneled through the HI simulations manager, Wei Li, for coordination with the work of the central data operations group. Analysis Coordinator: The Analysis Coordinator oversees the work of the five major Physics Interest Groups in CMS-HI. In turn the HI Analysis Coordinator reports to the CMS Analysis Coordinator on their progress. It should also be mentioned that there is a strong participation by HI persons in pre-existing CMS pp analysis groups such as the QCD and b-physics groups. ACCRE Advanced Compu.ng Center For Research and Educa.on h#p:// University Oversight Commi1ee Dennis Hall Vice Provost for Research Susan Wente Vice Chancellor for Research Co- Chairs Steering Commi1ee Paul Sheldon (VUA&S), Chair Dave Piston (VUMC) Ron Schrimf (VUSE) Faculty Advisory Group Robert Weller, Chair 8 members including Charles Maguire ACCRE Staff Management Team Technical Director: Alan TackeD Financial/User Management: Carie Kennedy 12 Technical Staff Members Figure 4: Management Organization for ACCRE at Vanderbilt University 3.2 ACCRE Computing Organization Within Vanderbilt University The CMS-HI Tier 2 at Vanderbilt will be a major component within the ACCRE computing facility at Vanderbilt, projected to represent about 30% of the total ACCRE computing capacity in the next 7

11 five years. ACCRE has always been a faculty/research driven organization. It was founded by faculty, reports to a faculty steering committee with financial oversight, and is directed on an operational basis by a faculty advisory board. The current organizational, management, and advisory structure for ACCRE is illustrated in Fig. 4. Ultimate budget and management direction for ACCRE is the responsibility of an oversight committee led by the two senior University administration officers responsible for research in the University Central and the University Medical Centers, respectively. The CMS senior faculty at Vanderbilt are also involved in the oversight of ACCRE. Charles Maguire is a member of the faculty advisory board and Paul Sheldon is the chair of the faculty steering committee. On a daily basis, operation of ACCRE is shared by Alan Tackett, technical director, and Carie Lee Kennedy, management/finance director, who have worked together as a team for over six years. The staff of ACCRE is very stable and there is combined over fifty years of experience at ACCRE and over one hundred years industry experience. ACCRE has been planning for the implementation of CMS-HI and hired a new employee in January 2010 on a short-term basis in anticipation of this project being funded. This advance hiring (at Vanderbilt s expense) allowed this employee to be trained so he would be ready to deploy on this project when it was funded. The other ACCRE staff who will work on this project have been employed by ACCRE for a number of years so bring strong technical skills and experience to the project. The funding by DOE and the matching staff funding by Vanderbilt for the first year will be allocated according to Table 1. Table 1: ACCRE Staff Effort on the CMS-HI Tier 2 Project Staff Person Area of Expertise FTE by DOE FTE by VU Total FTE M. Binkley security, firewall, storage admin B. Brown storage admin, networking K. Buterbaugh cluster file systems, infrastructure S. De Ledesma storage software implementation M. Heller storage admin, software implementation C. Johnson cluster scheduling, hardware J. Mora cluster networking, infrastructure New Hire storage software implementation Total Computing Specifications The major stages of the heavy ion data processing model are summarized as follows: 1) The raw data are first processed at the Tier 0, including the standard work-flows there for Alignment, Calibration and Data Quality Monitoring, followed by a prompt reconstruction pass. The raw data, AlCa, and RECO output files are archived at the Tier 0, and these files are all to be sent to the Fermilab Tier 1 site for secondary archiving to tape. Because there are still uncertainties about the performance of the zero-suppression software for HI events, data will be streamed in the so-called virgin-raw mode in the first 2010 running. The zero suppression will be done 8

12 at the Tier 0 on the virgin-raw data set. The total compute power of the Tier 0 far exceeds the power needed to do both zero suppression and prompt reconstruction of the 2010 minimum bias HI data. 2) While still on disk storage, the data files at Fermilab are subscribed to by a new Tier 2 site for HI to be located at Vanderbilt University, and whose construction is the main focus of the project management plan. This component requires the commissioning of a reliable, high-speed ( 3 Gbps average) network functionality between Vanderbilt and Fermilab especially during the times when the LHC is colliding heavy ion beams. As mentioned previously, the Vanderbilt Tier 2 site will conduct raw data reconstruction passes in addition to supporting the usual Tier 2 functions for data analysis and simulations. The automated workflows for performing the data transfers, and for doing the reconstruction re-passes, need to be commissioned at Vanderbilt. 3) The Vanderbilt Tier 2 site will export some reconstruction output to certain other Tier 2 sites, e.g. Russia, France, Brazil, Turkey, and MIT, which have groups who have expressed interest in carrying out heavy ion physics analyses. The network links to these remote sites, typically on the order of 1 Gbps average, need to be established and sustained. 4) Final production files from the various HI Tier 2 sites, including Vanderbilt, will be archived to tape at the Fermilab Tier 1 site in the same manner as files produced from the processing of pp data. This component will also use network links to Fermilab, operating at the 1 Gbps rate. The Tier 2 functional requirements are driven by the expected annual raw data volumes to be acquired by the CMS detector during the HI running periods at the LHC. Those data volumes in turn are expected to ramp up, and change in character, in the years The hardware acquisition plan must anticipate the data volume growth and its impact on data production (reconstruction) and analysis. The next several subsections detail the expected raw data growth and the consequent impacts on the annual data processing and data storage requirements. From these annual requirements an acquisition plan has been developed. 4.1 Projected Raw Data Growth The numerical estimates for the different data flow stages follow the projected luminosity and data volumes for heavy ion running at the LHC. These current projections are given in Table 2 for the period , where there is intrinsic physics uncertainty in event sizes and overall detector data acquisition efficiency which will not be resolved until the initial data are in hand. For resource planning purposes the HI data model assumes that the first year event number and raw data volume amounts as shown in Table 2 are at the top end of the quoted range. The first year running will be in minimum bias mode. The High Level Trigger (HLT) will be used in a tag-and-pass mode such that the proposed trigger algorithms can be evaluated and optimized. By the time of the FY 2014 heavy ion run the HI data model assumes that the running conditions will have reached nominal year, steady state values. Hence, its expected to have the Vanderbilt Tier 2 center reach its nominal size in time for processing these data. 9

13 Table 2: Projected integrated luminosity and event totals for the CMS heavy ion runs FY Integrated Luminosity Events recorded (10 6 ) Raw data [TByte] µb (Min Bias) µb 1 50 (HLT/mid central) nb 1 75 (HLT/central) nb 1 75 (HLT/central) 300 Note: The heavy ion running at the LHC typically occurs in November of the calendar year, making it occur in the start of the next sequential fiscal year. In calendar year 2012 the current plan is for the LHC to be shutdown while necessary dipole magnet upgrades are installed. Thus there will be no heavy ion run in FY Projected Raw Data Reconstruction Requirement The annual raw data reconstruction requirements after the Tier 0 are the principle driver for the size of the Vanderbilt Tier 2 facility. Roughly speaking, the Vanderbilt center will be spending about half its power doing reconstruction re-passes and the other half of its power serving as the major analysis facility for the CMS-HI program. The estimates for the required computing power, in terms of HS06-sec, were developed from simulated events reconstructed using CMSSW in November 2009, assuming an 80% net efficiency factor for CPU utilization. The results of that study lead to the data reconstruction mission for CMS-HI being met according to the schedule shown in Table 3. Table 3: Projected Raw Data Reconstruction Computing Effort FY Trigger Events Compute Load Tier 0 RECO Re-passes Re-pass Time (10 6 ) (10 10 HS06-Sec) (Days) (Days/Re-pass) 2011 Min Bias HLT No Beam Reprocess 8.5 None HLT HLT Table 3 shows that the prompt reconstruction pass for CMS-HI can be accomplished at the Tier 0 each year within the prescribed one-month of data taking. Similarly, a standard number of reconstruction repasses can be accomplished at the Vanderbilt Tier 2 site each year within acceptable time constraints. The time estimates shown in Table 3 assume that the annual CPU power totals of the Vanderbilt site will be the following in HS06 units: 3268, 8588, 17708, for the fiscal years years 2011, 2012, 2013, and 2014 respectively. 4.3 Projected Data Analysis Computing Requirement The computing required for data analysis cannot be as accurately estimated as it can be for data reconstruction. One must look to other experiments, such as at RHIC, to get a measure of the amount 10

14 of analysis power required compared to reconstruction power. Based on this experience, the CMS-HI group is assuming a rough parity between analysis and reconstruction needs. Under that assumption, and assuming that there will be additional Tier 2 resources from non-us groups, and further taking into account the enhancement of the MIT Heavy Ion analysis center, the scenario for the analysis requirement is given in Table 4. For this table the number of analysis passes each year is the initial analysis pass of the prompt reco production from the Tier 0 plus the number of RECO re-passes shown in Table 3. Table 4: Projected Data Analysis Computing Load FY Compute Load Per Analysis Pass Number of Analysis Passes Total Compute Load (10 10 HS06-Sec) (10 10 HS06-Sec) a a The number of analysis re-passes in FY2013 is 2.7, since there is no prompt RECO pass in that year. 4.4 Projected Simulation Computing Requirement As was the case for the reconstruction requirement, the simulation production requirement is derived from estimates based on the use of CMSSW 3 3 4, and taking into account projections based the levels of simulation computing experienced by the RHIC experiments. The annual integrated simulation power estimates are shown in Table 5. The Compute Load column includes both the simulated event generation time and the far smaller reconstruction time. Table 5: Projected Simulation Computing Load FY Event Type Number of Simulated Events HS06-Sec/Event Total Compute Load (10 6 ) (10 4 ) (10 10 HS06-Sec) 2011 Min Bias Central Central Central Central Projected Integrated Analysis and Simulation Load The Vanderbilt Tier 2 site is not expected to be a major simulation resource in CMS-HI since it will be the main host of the reconstructed data files. As such, it would be more likely for the Vanderbilt site to be a resource for analysis jobs on those reconstructed data files. However, the projected analysis 11

15 plus simulation load for CMS-HI must be met by the combination of all resources: Vanderbilt, the four non-us Tier 2 HI sites, and the MIT HI analysis center. Hence, it is simpler to integrate over all possible resources and determine if the total of the available resources is compatible with the projected needs. This comparison estimate is shown in the last column of Table 6. Table 6: Integrated Tier 2 Computing Load Compared to Total Available Resources FY Analysis + Simulation Need Vanderbilt Tier 2 Total Tier 2 Base Tier 2 Ratio (10 11 HS06-sec) (10 11 HS06-sec) (10 11 HS06-sec) (Total/Need) % % % % % This table shows that the anticipated analysis and simulation requirements of the CMS-HI program can be met by a combination of the Vanderbilt Tier 2 resource (after subtracting the re-reconstruction power) and the rest of the CMS-HI Tier 2 base. The rest of the CMS-HI Tier 2 base is assumed to contribute a constant 3000 HS06 units during this time, while the MIT HI analysis center is projected to grow from 1900 to 7800 HS06 units. 4.6 Projected Disk and Tape Storage Requirements The Fermilab Tier 1 center will serve as the tape archive resource in the HI data model. The annual raw data volumes listed in Table 3 will be written to tape as soon as they arrive from the Tier 0 center during the one month of HI running in the LHC. Similarly, the prompt RECO output from the Tier 0 will also be archived at the Fermilab Tier 1 center. The raw data sets will be subscribed to and stored on disk at the Vanderbilt center for subsequent reconstruction re-passes. The prompt RECO files will be subscribed as well for the initial AOD and PAT file production at Vanderbilt. There will be no scheduled re-reads of raw data or prompt reco files from the Fermilab Tier 1 center unless there is a disk hardware failure at the Vanderbilt center causing the loss of the data set. The annual total disk and tape volumes are estimated based on simulated events sizes, with the usual caveat that these estimates will have to be refined once the real physics data becomes apparent in November According to the calculated event sizes and the assumed numbers of events the projections for the annual disk storage and tape volume requirements, given in Table 7. Since the Vanderbilt site is required to maintain a complete copy of the raw data on disk, as well as provide space for various cycles of reconstructed data, this mandates the initially large requirement of 0.49 PBytes of disk. Similarly, the Fermilab Tier 1 site must write to tape the initial raw data volume, and some subsequent production output, with the exception of the no-beam year in Hence, the tape storage requirement at Fermilab generally tracks higher than the disk storage requirement. 12

16 Table 7: Disk and Tape Storage for Raw Data Processing FY Event Type Number of Events Disk Storage at Vanderbilt Center Tape Storage at Fermilab Tier 1 (10 6 ) (PBytes) (PBytes) 2011 Min Bias HLT HLT 50 (from 2011) HLT HLT Procurement and Implementation Plan The procurement and implementation plan is discussed separately for the three major hardware components in the following subsections, along with the plans for the installation and continued support of the CMS Software (CMSSW) framework. 5.1 External Networking In 2008 Vanderbilt University commissioned a 10 Gbps external connection via a dedicated fiber to its connector service company in Atlanta, Southern Crossroads (SoX [9]). Through SoX, Vanderbilt can connect to multiple networks at 10 Gbps, including Internet2, the DOE s ESNET, National Lambda Rail (NLR) and several others. In addition, SoX has its own dedicated 10 Gbps link to Chicago Starlight and Vanderbilt could purchase use of this link for periods of time if necessary. The 10 Gbps connection from SoX to Vanderbilt lands on a network switch at the campus edge where ACCRE has two 10 Gbps ports on the switch. Therefore ACCRE has a direct connection to the external 10 Gbps link without intervening firewalls or routers and switches. It will be important to demonstrate sustained, high speed (5 Gbps or more) transfer rates between the CERN Tier 0 center and the Vanderbilt disk storage systems. Based on conversations with the CMS data operations group at CERN, Vanderbilt has been designated as a selected Tier 2 site which is entitled to receive data files directly from the Fermilab Tier 1 site via the standard PhEDEx file transfer software system. 5.2 Computing Nodes ACCRE has been procuring and implementing new hardware annually since The process includes the following steps: 1) Monitoring hardware options and technology changes on an on-going basis; 2) Understanding new technologies as they are released and determining what technologies are both feasible and are a reasonable cost to implement; 3) Attending and scouting out new ideas at the annual supercomputing conference. 13

17 4) Testing new software and/or hardware by ACCRE, sometimes via remote login, sometimes by receiving evaluation hardware and sometimes by purchasing hardware when extended testing is necessary. The ACCRE cluster originally contained 240 cores in 2003, which then built up to about 1,500 cores from 2005 through Over the last eighteen months due to greater research productivity and associated funding the cluster has ranged from 2,800 to 3,200 cores. The new hardware added in the last eighteen months is intended to meet both a significant increase in the demand of researchers and to replace older hardware units as they become obsolete. Based on the expected growth of other researchers at Vanderbilt and the scope of the CMS-HI grant, the DOE-funded CMS-HI component to be installed in the next four years will be about 30 40% of the overall ACCRE cluster size. ACCRE has a track record of running a cluster with multiple architectures for the last six years. This process works very well. They normally purchase the same type of hardware for multiple purchases in the same year, which allows both stability and the ability to adapt and change to new technologies. As an example, here is how the process worked out for the last two years. After monitoring and testing hardware, the hardware selected for 2009 ACCRE was dual quad Shanghai nodes with 32 GB memory from Penguin Computing in a 1U standard format. Throughout the process, ACCRE staff continued to watch hardware options and beginning in 2010 ACCRE switched to purchasing (and is still purchasing) dual quad Nehalem nodes with 24 GB memory. They evaluated both 1U nodes with a single power source and a four-node 2U chassis with redundant power. The later was the best price/performance combination. They have been purchasing these quad chassis nodes for six months. These nodes are manufactured by Supermicro and are currently purchased through a partnership of Condre and Serious Systems. At this time, this hardware is still the best option for ACCRE. Subject to alternative new special pricing discounts recently announced as being available to the LHC experiment collaborations, the first round of CMS-HI compute nodes will be two dual quad Nehalem processors and 24 GB memory. The first round of CMS-HI hardware computing purchases for processing the FY 2011 data have been scrutinized by a detector readiness workshop held at CERN in June 2010, just prior to the start of this project. This scale of this set of purchases is driven largely by the need to accommodate in advance the largest reasonable amount of data taken by the detector in the first HI run. That amount of data will be certainly large enough to achieve the specified physics goals of the first HI run. The second round of CMS-HI hardware computing purchase is likely to be in the third quarter of FY 2011 and the hardware to be purchased will be re-evaluated before that time. Nehalem and Westmere processors will be considered again as will the form factor to determine whether single, dual or quad nodes best meet the needs of CMS-HI. A full suite of alternate vendors will be reviewed. Again, the hardware selected will be continued for a period of time, probably one-year in extent. The scale of the second and future rounds of CMS-HI hardware computing purchases will be set according to what is learned from the FY2011 data. The presently projected scales of these purchases are given in Section 5, according to the specification requirements presented in Section 2. 
These projections will be reviewed annually, and modified appropriately, by an advisory committee as discussed in Section 6 on Project Management. 14

18 5.3 Disk Storage and Internal Networking L-Store Storage System Basis All data will be stored using L-Store [10]. L-Store is being actively developed by ACCRE staff and used for several campus related projects in addition to external collaborations. L-Store implements a complete virtual file system using Logistical Networking s Internet Backplane Protocol (IBP) as the underlying abstraction of distributed storage and the Apache Cassandra project as a scalable mechanism for managing metadata. L-Store provides rich functionalities in the form of role based authentication, automated resource discovery, RAID-like mirroring and striping of data for performance and fault tolerance, policy based data management, and transparent peer-to-peer interoperability of storage media. L-Store is designed to provide virtually unlimited scalability in both raw storage and associated file system metadata with scalable performance in both raw data movement and metadata queries. The system will not require a downtime to add either metadata or disk storage. These can be added on the fly with the space becoming immediately available Storage Connectivity The disk storage or depots are designed to be accessed from anywhere but the highest performance will be from compute resources using ACCREs internal network. Each depot and L-Server is connected to both the public internet and ACCREs own private network. The depots are connected to both networks at 10 Gbs and the L-Server s using 1 Gbs connections. The architectural overview is shown in Figure Depot System The first generation of storage depot will consist of dual quad-core processors and 36 2TB disk drives with dual 10 Gbs network connections. The large compute capability on the depots will be used to provide data integrity throughout the transfer process by performing block level checksums on both the disk and network transfers. The disk checksums are used for both read and write operations allowing for the detection of silent data corruption. Both checksum phases can be enabled or disabled independently of one another in order to maximize data integrity or throughput. The disk checksum is enabled or disabled on the initial file creation Internal Networking As stated previously the storage will be accessible from both ACCREs private network as well as the public internet. The backbone of each of these networks is comprised of a collection of Extreme Networks x650 switches. Each of these switches has Gbs ports with either SFP+ optics (x650-24a) or RJ45 connections (x650-24t) and dual 128Gb/s stacking ports to connect the switch into the network stack. 15

19 Figure 5: Hardware design planned for CMS-HI computing at the ACCRE facility CMSSW Installation and Support The Vanderbilt ACCRE facility is already a high functioning Tier 3 site in CMS. It appears on the CMS facility monitoring system as passing all the tests necessary for a Tier 3 designation. A request has been made to the Facility Operations group in CMS to upgrade the Vanderbilt site from a Tier 3 to a Tier 2 in their system. This change in status was proceeded by upgrades to the compute and storage element (CE and SE) servers at Vanderbilt to meet CMS specifications. A designated Tier 1 production server machine will also be installed at Vanderbilt, similar to such machines at the Tier 0 and the Fermilab Tier 1 sites, such that the Vanderbilt site can begin to address the data production tasks that will be done during raw data re-reconstruction passes. The CMSSW updates at ACCRE have been performed routinely and on a timely bases by Dr. Bockjoo Kim of the University of Florida. The Vanderbilt Tier 3 at ACCRE is completely up-to-date with the latest CMSSW releases in the SL5 operating system. 16

20 5.4 Implementation Milestones and Deliverables The schedule of milestones, deliverables, and reviews is presented in Table 8. This schedule assumes a spending authorization date of November 1, 2010, coinciding with the start of heavy ion data taking in November The second heavy ion running is assumed to take place in November 2011, after which there is a long shutdown for LHC upgrades. The third heavy ion run is assumed to take place in 2013, with one month of heavy ion running to take place every year after The schedule of milestones explicitly includes annual reviews of the operational performance of the Tier 2, as well as separate reviews of the projected hardware requirements for the next year of data acquisition and analysis. The reviews of the US CMS Tier 2 operational performance are already routinely done on behalf of the US CMS Software and Computing Project (USCMSSC) based at Fermilab. The Vanderbilt Tier 2 will be naturally incorporated into this cycle of reviews. Annual reviews for the next year s hardware requirements will be first conducted by a committee of CMS-HI participants, including the CMS-HI Project Manager, the HI Physics Analysis conveners, and the HI Tier 2 Project Manager. The estimate of these requirements, as well as the projected requirements for the heavy ion program at the central Tier 0 installation and at the non-us Tier 2 sites will be considered by the main CMS computing group. It is standard practice in all of the LHC experiments to prepare yearly requirements documents, which themselves are subject to external review. In this manner, the annual requirements estimates for the CMS-HI program will be established with the same rigorous review process as the requirements for the pp program in CMS. 17

21 Table 8: Schedule of Milestones, Deliverables, and Reviews For the Vanderbilt Tier 2 Milestone or Deliverable Fiscal Quarter Review of CPU and disk cost/performance estimates FY11, Q1 Placement of initial core (3268 HS06) and disk orders (485 TB) FY11, Q1 Commissioning of data transfer links Tier 0 Tier 1 Tier 2 FY11, Q2 Receipt of FY HI 2011 prompt reconstruction data from Fermilab FY11, Q2 Certification of Vanderbilt Tier 2 readiness status FY11, Q2 Receipt of FY HI 2011 raw data from Fermilab FY11, Q2 Certification of Vanderbilt re-reconstruction functionality FY11, Q3 Review of initial Vanderbilt center performance FY11, Q4 Review of hardware and tape archive requirements for FY 2012 HI running FY11, Q4 Placement of hardware and tape archive orders for FY 2012 HI running FY11, Q4 Installation of hardware for FY 2012 HI running FY11, Q4 Readiness review for immediate receipt of FY 2012 HI running FY11, Q4 Commissioning of operations for FY 2012 HI data running FY12, Q1 Review of Vanderbilt center performance for FY 2012 HI data FY12, Q4 Review of hardware requirements for 2012 FY12, Q4 Placement of hardware orders for 2012 re-analysis period FY12, Q4 Installation of hardware for 2012 re-analysis period FY12, Q4 Routine re-reconstruction and re-analysis operations for FY 2012 HI data FY13, Q2 Review of Vanderbilt center performance for 2012 operations FY13, Q2 Review of hardware and tape archive requirements for FY 2014 HI running FY13, Q4 Placement of hardware and tape archive orders for FY 2014 HI running FY13, Q4 Installation of hardware for FY 2014 HI running FY13, Q4 Readiness review for immediate receipt of FY 2014 HI running FY13, Q4 Commissioning of operations for FY 2014 HI data running FY14, Q1 Routine Tier 2 operations for FY 2014 HI data FY14, Q2 Routine re-reconstruction operations for FY 2014 HI data FY14, Q2 Review of Vanderbilt center performance for FY 2014 HI data FY14, Q4 18

The Project Management and Acquisition Plan for the CMS-HI Tier 2 Computing Facility at Vanderbilt University

The Project Management and Acquisition Plan for the CMS-HI Tier 2 Computing Facility at Vanderbilt University The Project Management and Acquisition Plan for the CMS-HI Tier 2 Computing Facility at Vanderbilt University R. Granier de Cassagnac 1, C. Kennedy 2, C. Maguire 2,4, G. Roland 3, P. Sheldon 2, A. Tackett

More information

Continuation Report/Proposal: Vanderbilt CMS Tier 2 Computing Project Reporting Period: November 1, 2010 to March 31, 2012

Continuation Report/Proposal: Vanderbilt CMS Tier 2 Computing Project Reporting Period: November 1, 2010 to March 31, 2012 Continuation Report/Proposal for DOE Grant Number: DE SC0005220 1 May 15, 2012 Continuation Report/Proposal: Vanderbilt CMS Tier 2 Computing Project Reporting Period: November 1, 2010 to March 31, 2012

More information

5) DOE/Office of Science Program Office: Nuclear Physics, Heavy Ion. 7) Collaborating Institution: Massachusetts Institute of Technology

5) DOE/Office of Science Program Office: Nuclear Physics, Heavy Ion. 7) Collaborating Institution: Massachusetts Institute of Technology Version February 18, 2010 1) Applicant Institution: Vanderbilt University 2) Institutional Address: Department of Physics and Astronomy Box 1807 Station B Vanderbilt University Nashville, TN 37235 3) Co-PIs

More information

CMS-HI-US Computing Proposal Update to the U.S.D.O.E 1

CMS-HI-US Computing Proposal Update to the U.S.D.O.E 1 CMS-HI-US Computing Proposal Update to the U.S.D.O.E 1 Version V1: December 16 at 14:45 CST 1) Applicant Institution: Vanderbilt University 2) Institutional Address: Department of Physics and Astronomy

More information

August 31, 2009 Bologna Workshop Rehearsal I

August 31, 2009 Bologna Workshop Rehearsal I August 31, 2009 Bologna Workshop Rehearsal I 1 The CMS-HI Research Plan Major goals Outline Assumptions about the Heavy Ion beam schedule CMS-HI Compute Model Guiding principles Actual implementation Computing

More information

Review of the Compact Muon Solenoid (CMS) Collaboration Heavy Ion Computing Proposal

Review of the Compact Muon Solenoid (CMS) Collaboration Heavy Ion Computing Proposal Office of Nuclear Physics Report Review of the Compact Muon Solenoid (CMS) Collaboration Heavy Ion Computing Proposal May 11, 2009 Evaluation Summary Report The Department of Energy (DOE), Office of Nuclear

More information

Data Processing and Analysis Requirements for CMS-HI Computing

Data Processing and Analysis Requirements for CMS-HI Computing CMS-HI Computing Specifications 1 Data Processing and Analysis Requirements for CMS-HI Computing Charles F. Maguire, Version August 21 Executive Summary The annual bandwidth, CPU power, data storage, and

More information

Conference The Data Challenges of the LHC. Reda Tafirout, TRIUMF

Conference The Data Challenges of the LHC. Reda Tafirout, TRIUMF Conference 2017 The Data Challenges of the LHC Reda Tafirout, TRIUMF Outline LHC Science goals, tools and data Worldwide LHC Computing Grid Collaboration & Scale Key challenges Networking ATLAS experiment

More information

The CMS Computing Model

The CMS Computing Model The CMS Computing Model Dorian Kcira California Institute of Technology SuperComputing 2009 November 14-20 2009, Portland, OR CERN s Large Hadron Collider 5000+ Physicists/Engineers 300+ Institutes 70+

More information

Scientific data processing at global scale The LHC Computing Grid. fabio hernandez

Scientific data processing at global scale The LHC Computing Grid. fabio hernandez Scientific data processing at global scale The LHC Computing Grid Chengdu (China), July 5th 2011 Who I am 2 Computing science background Working in the field of computing for high-energy physics since

More information

ACCI Recommendations on Long Term Cyberinfrastructure Issues: Building Future Development

ACCI Recommendations on Long Term Cyberinfrastructure Issues: Building Future Development ACCI Recommendations on Long Term Cyberinfrastructure Issues: Building Future Development Jeremy Fischer Indiana University 9 September 2014 Citation: Fischer, J.L. 2014. ACCI Recommendations on Long Term

More information

IEPSAS-Kosice: experiences in running LCG site

IEPSAS-Kosice: experiences in running LCG site IEPSAS-Kosice: experiences in running LCG site Marian Babik 1, Dusan Bruncko 2, Tomas Daranyi 1, Ladislav Hluchy 1 and Pavol Strizenec 2 1 Department of Parallel and Distributed Computing, Institute of

More information

New strategies of the LHC experiments to meet the computing requirements of the HL-LHC era
