The Project Management and Acquisition Plan for the CMS-HI Tier 2 Computing Facility at Vanderbilt University


R. Granier de Cassagnac (1), C. Kennedy (2), C. Maguire (2,4), G. Roland (3), P. Sheldon (2), A. Tackett (2), B. Wyslouch (1,3)

September 20, 2010

(1) Ecole Polytechnique/LLR, Paris, France
(2) Vanderbilt University, Department of Physics and Astronomy, Nashville, TN
(3) Massachusetts Institute of Technology, Laboratory for Nuclear Science, Cambridge, MA
(4) Tier 2 Project Director

Abstract

This Project Management and Acquisition Plan (PMAP) documents the descriptions and controls that the Compact Muon Solenoid heavy ion group in the US (CMS-HI-US) will follow to meet the technical, cost, and schedule goals for the development of a new Tier 2 computing facility at Vanderbilt University designed to meet the needs of the CMS-HI research program.

Contents

1 Executive Summary
2 Introduction
  2.1 Physics Motivation
  2.2 Computing Model Overview
3 Computing Specifications
  3.1 Projected Raw Data Growth
  3.2 Projected Raw Data Reconstruction Requirement
  3.3 Projected Data Analysis Computing Requirement
  3.4 Projected Simulation Computing Requirement
  3.5 Projected Integrated Analysis and Simulation Load
  3.6 Projected Disk and Tape Storage Requirements
4 Procurement and Implementation Plan
  4.1 External Networking
  4.2 Computing Nodes
  4.3 Disk Storage and Internal Networking
    4.3.1 L-Store Storage System Basis
    4.3.2 Storage Connectivity
    4.3.3 Depot System
    4.3.4 Internal Networking
    4.3.5 CMSSW Installation and Support
  4.4 Implementation Milestones and Deliverables
5 Cost Summary
6 Computing Management Organization
  6.1 ACCRE Internal Computing Organization
  6.2 CMS-HI Internal Computing Organization
  Liaison Between ACCRE and CMS-HI
Monitoring and Reporting Procedures
  Internal Review Systems
  External Review Systems
Allocation of Opportunistic Cycles to Other DOE-NP Programs

List of Figures

1 Hardware design planned for CMS-HI computing at the ACCRE facility
2 Organization chart for the CMS-HI computing tasks

List of Tables

1 Projected integrated luminosity and event totals for the CMS heavy ion runs
2 Projected Raw Data Reconstruction Computing Effort
3 Projected Data Analysis Computing Load
4 Projected Simulation Computing Load
5 Integrated Tier 2 Computing Load Compared to Total Available Resources
6 Disk and Tape Storage for Raw Data Processing
7 Schedule of Milestones, Reviews, and Deliverables for the Vanderbilt Tier 2
8 Funding Profile for the Vanderbilt Tier 2 Center
9 Funding Profile for FNAL Tier 1 Tape Archive
10 ACCRE Staff Effort on the CMS-HI Tier 2 Project

1 Executive Summary

This Tier 2 Project Management and Acquisition Plan (PMAP) describes the steps which will be taken to create and monitor the major new computing resource for the CMS-HI program in the US. This resource is to be located at the ACCRE [1] computing center on the campus of Vanderbilt University. The plan first describes the scope, requirements, and goals of this new Tier 2 facility, the roles and responsibilities of the institutions participating in its development, the schedule of acquisitions, and the review systems which will be put in place to monitor its operational performance for the Office of Nuclear Physics within the DOE Office of Science. The plan has been guided by the reports of the external review panels which scrutinized the initial proposals for this resource at two meetings: an initial review conducted on May 11, 2009 at DOE headquarters in Germantown, and a follow-up review on June 2, 2010 held at the ACCRE facility itself. This PMAP forms the factual basis of three Memoranda of Understanding between the Vanderbilt University CMS-HI group and, respectively, the ACCRE computing center at Vanderbilt, the Fermilab Tier 1 project, and the CMS experiment at the LHC. All signees to those MOUs are being provided a copy of this PMAP.

2 Introduction

2.1 Physics Motivation

Heavy ion collisions at the Large Hadron Collider (LHC) will allow an unprecedented expansion of the study of QCD in systems with extremely high energy density. Results from the Relativistic Heavy Ion Collider (RHIC) suggest that in high energy heavy ion collisions an equilibrated, strongly-coupled partonic system is formed. There is convincing evidence that this dense partonic medium is highly interactive, perhaps best described as a quark-gluon liquid, and is also almost opaque to fast partons. In addition, many surprisingly simple empirical relationships describing the global characteristics of particle production have been found. An extrapolation to LHC energies suggests that the heavy ion program has significant potential for major discoveries. Heavy ion studies at the LHC will either confirm and extend the theoretical picture emerging from the RHIC studies, or else challenge and redirect our understanding of strongly interacting matter at extreme densities. This will be accomplished by extending existing studies over a dramatic increase in energy and through a broad range of novel probes accessible at the LHC. These probes include high-pT jets and photons, Z0 bosons, the Υ states, D and B mesons, and high-mass dileptons. The CMS detector provides unique capabilities for focused measurements that exploit the new opportunities at the LHC.

The CMS potential for heavy ion studies was recognized by the CMS collaboration, which has included a heavy ion group since its inception. The apparatus provides unprecedented coverage for tracking and both electromagnetic and hadronic calorimetry, combined with high precision muon identification. The detector is read out by a very fast data acquisition system and allows for sophisticated triggering. In particular, the trigger system is crucial in accessing the rare probes expected to yield the most direct insights into the properties of high density strongly interacting matter. These measurements will directly address the fundamental science questions in the field of high density QCD. An overview of the CMS heavy ion detector capabilities and physics program can be found in [2]. The U.S. CMS-HI group has played a lead role in expanding and developing the physics program and has now assumed many of the key responsibilities in preparing for CMS heavy ion data taking.

The U.S. groups first developed the tracking algorithms allowing the use of the CMS silicon tracker in heavy ion collisions, formulated the minimum bias and high level trigger (HLT) strategies for Pb+Pb running, and have led the computing and physics simulation effort, in particular in the areas of tracking and of jet and photon detection.

2.2 Computing Model Overview

The CMS experiment has developed a comprehensive computing model which is described thoroughly in the CMS Computing Project Technical Design Report [3]. As far as is practical, the CMS-HI computing model will follow this overall design, departing from that framework only to suit the special needs and circumstances of the heavy ion research program in CMS. The cornerstone of the CMS computing model is the extensive use of the Worldwide LHC Computing Grid (WLCG) infrastructure which has been built to serve the needs of all the LHC experiments in the participating nations. The components of this grid network are described elsewhere [4, 5, 6, 7, 8].

The basic organization of CMS computing is a multi-tiered system. The primary central tier for each LHC experiment is the Tier 0 facility located at CERN itself. For the p+p data, the Tier 0 site will rapidly process the raw data files into reconstruction (RECO) and Event Summary Data (ESD) files, using calibrations derived previously from the raw data files. Various participating nations have a Tier 1 site linked to the Tier 0 site, and the Tier 1 sites will receive designated fractions of the raw data and the corresponding RECO and ESD file sets. The CMS Tier 1 site in the U.S. is at the Fermi National Accelerator Laboratory. The Tier 1 sites will process the RECO files into Analyzed Object Data (AOD) files. These AOD files are then scanned by physics analysis groups (PAGs) located at individual institution sites, which form the Tier 3 layer in CMS; associated with each nation's Tier 1 site are several Tier 2 sites which receive the AOD files from the Tier 1 site.

Like the proton raw data, the HI raw data will remain on disk buffers at the Tier 0 center for a few days at most, during which time these data undergo a prompt reconstruction pass using initial alignment and calibration constants. All the HI raw data and the output of the prompt reconstruction pass will be transferred to a tape archive at the Fermilab Tier 1 center. While these files are still resident on disk at FNAL they will be copied to disk storage at a new, enhanced Tier 2 center located at Vanderbilt University. The prompt reconstruction production will be analyzed at this Tier 2 by the CMS-HI physics groups. New reconstruction passes will be made on the raw data at Vanderbilt using improved alignment and calibration constants and upgraded reconstruction software, on a re-reco cycle similar to that used for the rest of CMS data production. The re-reco output will be the basis for further analyses by the CMS-HI physics groups. It is also anticipated that the Vanderbilt Tier 2 center will deliver some reconstruction files to at least four non-US Tier 2 centers in CMS, as well as to the existing HI data analysis center located at MIT, which is being significantly augmented for this purpose. These other sites will contribute importantly to the physics analysis production in CMS-HI. Finally, the HI raw data will also be archived to tape at the Tier 0 center; that archive is not intended to be re-read but serves as a second back-up copy for emergency purposes.

The following sections present the technical specifications related to the Vanderbilt Tier 2 computing project. After that, the procurement and implementation plan for meeting these specifications is described. Next, the management structure in place to oversee the development of the computing center is detailed, including how that management must dynamically adjust to changes in the computing specifications brought about by better knowledge of the underlying physics content.

There then follow the anticipated cost and schedule summaries. Finally, the monitoring and reporting procedures, as well as the external review systems, are presented.

3 Computing Specifications

The major stages of the heavy ion data processing model are summarized as follows:

1) The raw data are first processed at the Tier 0, including the standard work-flows there for Alignment, Calibration (AlCa) and Data Quality Monitoring, followed by a prompt reconstruction pass. The raw data, AlCa, and Reco output files are archived at the Tier 0, and these files are all to be sent to the FNAL Tier 1 site for secondary archiving to tape. Because there are still uncertainties about the performance of the zero-suppression software for HI events, data will be streamed in the so-called virgin-raw mode in the first 2010 running. The zero suppression will be done at the Tier 0 on the virgin-raw data set. We have determined that the total compute power of the Tier 0 far exceeds the power needed to do both zero suppression and prompt reconstruction of the 2010 minimum bias HI data.

2) While still on disk storage, the data files at FNAL are subscribed to by a new Tier 2 site for HI to be located at Vanderbilt University, whose construction is the main focus of this project management plan. This component requires the commissioning of a reliable, high-speed (3 Gbps average) network connection between Vanderbilt and FNAL, especially during the times when the LHC is colliding heavy ion beams (a rough estimate of the bandwidth implied by the annual raw data volume is sketched below, after this list). As mentioned previously, the Vanderbilt Tier 2 site will conduct raw data reconstruction passes in addition to supporting the usual Tier 2 functions for data analysis and simulations. The automated workflows for performing the data transfers, and for doing the reconstruction re-passes, need to be commissioned at Vanderbilt.

3) The Vanderbilt Tier 2 site will export some reconstruction output to certain other Tier 2 sites, e.g. in Russia, France, Brazil, and Turkey, and to MIT, which have groups who have expressed interest in carrying out heavy ion physics analyses. The network links to these remote sites, typically on the order of 1 Gbps average, need to be established and sustained.

4) Final production files from the various HI Tier 2 sites, including Vanderbilt, will be archived to tape at the FNAL Tier 1 site in the same manner as files produced from the processing of pp data. This component will also use network links to FNAL, operating at the 1 Gbps rate.

The Tier 2 functional requirements are driven by the expected annual raw data volumes to be acquired by the CMS detector during the HI running periods at the LHC. Those data volumes in turn are expected to ramp up, and change in character, over the years covered by this plan. The hardware acquisition plan must anticipate the data volume growth and its impact on data production (reconstruction) and analysis. The next several subsections detail the expected raw data growth and the consequent impacts on the annual data processing and data storage requirements. From these annual requirements an acquisition plan has been developed.
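The 3 Gbps figure quoted in item 2 can be cross-checked with simple arithmetic: one run's raw data volume spread over the transfer window gives the bare average rate, and a headroom factor covers bursts, retries, and competing traffic. The Python sketch below illustrates this; the 300 TB volume, 30-day window, and factor-of-three headroom are illustrative assumptions (300 TByte is the nominal-run figure quoted in Table 1), not parameters of the plan.

```python
# Rough bandwidth estimate for shipping one HI run's raw data to Vanderbilt.
# The 300 TB volume, 30-day window, and 3x headroom are illustrative
# assumptions, not plan parameters.

def required_gbps(volume_tb: float, window_days: float, headroom: float = 3.0) -> float:
    """Average link rate in Gbps needed to move volume_tb within window_days."""
    bits = volume_tb * 1e12 * 8                  # terabytes -> bits
    seconds = window_days * 24 * 3600            # transfer window in seconds
    return headroom * bits / seconds / 1e9       # Gbps, with headroom applied

if __name__ == "__main__":
    bare = required_gbps(300.0, 30.0, headroom=1.0)
    padded = required_gbps(300.0, 30.0, headroom=3.0)
    print(f"bare average rate: {bare:.2f} Gbps")    # ~0.93 Gbps
    print(f"with 3x headroom:  {padded:.2f} Gbps")  # ~2.8 Gbps, consistent with ~3 Gbps
```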

3.1 Projected Raw Data Growth

The numerical estimates for the different data flow stages follow the projected luminosity and data volumes for heavy ion running at the LHC. The current projections are given in Table 1; there is intrinsic physics uncertainty in event sizes and in the overall detector data acquisition efficiency which will not be resolved until the initial data are in hand.

Table 1: Projected integrated luminosity and event totals for the CMS heavy ion runs

Year | Integrated Luminosity | Events recorded (10^6) | Raw data [TByte]
2010 | (µb^-1)               |    (Min Bias)          |
2011 | (µb^-1)               | 50 (HLT/mid-central)   |
2013 | (nb^-1)               | 75 (HLT/central)       |
2014 | (nb^-1)               | 75 (HLT/central)       | 300

For resource planning purposes the HI data model assumes that the first-year event number and raw data volume shown in Table 1 are at the top end of the quoted range. The first year running will be in minimum bias mode. The High Level Trigger (HLT) will be used in a tag-and-pass mode so that the proposed trigger algorithms can be evaluated and optimized. Based on the expected luminosity and live-time growth after 2010, all subsequent HI running requires the use of the HLT. It is also anticipated that these later runs will have an effective zero suppression scheme implemented, with the net result that both the event number and the raw data volume in 2011 will be less than they were in the minimum bias running conditions of 2010. The actual physics content of the raw data on average, however, will be more complex by virtue of the selections made by the HLT. By the time of the 2013 run, when the LHC resumes operations, the HI data model assumes that the running conditions will have reached nominal, steady state values. Hence, we expect the Vanderbilt Tier 2 center to reach its nominal size in time for processing the 2013 data.

3.2 Projected Raw Data Reconstruction Requirement

The annual raw data reconstruction requirements after the Tier 0 are the principal driver for the size of the Vanderbilt Tier 2 facility. Roughly speaking, the Vanderbilt center will spend about half its power doing reconstruction re-passes and the other half serving as the major analysis facility for the CMS-HI program. The estimates for the required computing power, in terms of HS06-sec, were developed from simulated events reconstructed using CMSSW in November. The estimates include an 80% net efficiency factor for CPU utilization. The results of that study lead to the data reconstruction mission for CMS-HI being met according to the schedule shown in Table 2. Table 2 shows that the prompt reconstruction pass for CMS-HI can be accomplished at the Tier 0 each year within the prescribed one month of data taking. Similarly, a standard number of reconstruction re-passes can be accomplished at the Vanderbilt Tier 2 site each year, with each re-pass taking a reasonably short period of time. This latter calculation, shown in the last column of Table 2, assumes that the annual CPU power totals of the Vanderbilt site will be the following in HS06 units: 3268, 8588, 17708, for the years 2010, 2011, 2012, and 2013 respectively.
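As an illustration of how the last column of Table 2 (shown next) is obtained, the wall-clock time for one re-pass is the annual compute load divided by the site capacity de-rated by the 80% utilization factor. In the sketch below the 3268 HS06 capacity and the 80% efficiency are taken from the text above, while the compute-load value is a placeholder assumption, not a Table 2 entry.

```python
# Convert an annual reconstruction compute load into days per re-pass.
# capacity_hs06 = installed CPU power of the site (HS06 units)
# load_hs06_sec = compute load of one full re-pass (HS06-seconds)
# efficiency    = net CPU utilization factor (the plan quotes 80%)

def repass_days(load_hs06_sec: float, capacity_hs06: float, efficiency: float = 0.8) -> float:
    """Wall-clock days needed for one reconstruction re-pass."""
    usable_hs06 = capacity_hs06 * efficiency     # effective sustained power
    seconds = load_hs06_sec / usable_hs06        # wall-clock seconds
    return seconds / 86400.0                     # convert to days

if __name__ == "__main__":
    # Placeholder load of 2.0e10 HS06-sec on the 2010 capacity of 3268 HS06.
    print(f"{repass_days(2.0e10, 3268):.1f} days per re-pass")
```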

Table 2: Projected Raw Data Reconstruction Computing Effort

Year | Trigger            | Events (10^6) | Compute Load (10^10 HS06-Sec) | Tier 0 Reco (Days) | Re-passes | Re-pass Time (Days/Re-pass)
2010 | Min Bias           |               |                               |                    |           |
2011 | HLT                |               |                               |                    |           |
2012 | No Beam, Reprocess |               | 8.5                           | None               |           |
2013 | HLT                |               |                               |                    |           |
2014 | HLT                |               |                               |                    |           |

3.3 Projected Data Analysis Computing Requirement

The projected data analysis requirement cannot be as rigidly estimated as the data reconstruction requirement. One must look to other experiments, such as those at RHIC, to get a measure of the amount of analysis power needed relative to reconstruction power. Based on this experience, the CMS-HI group is assuming a rough parity between analysis and reconstruction needs. Under that assumption, and assuming that there will be additional Tier 2 resources from non-US groups, and further taking into account the enhancement of the MIT heavy ion analysis center, the scenario for the analysis requirement is given in Table 3. For this table the number of analysis passes each year is the initial analysis pass of the prompt reco production from the Tier 0 plus the number of reco re-passes shown in Table 2.

Table 3: Projected Data Analysis Computing Load

Year | Compute Load Per Analysis Pass (10^10 HS06-Sec) | Number of Analysis Passes | Total Compute Load (10^10 HS06-Sec)
2010 |  |  |
2011 |  |  |
2012 |  | (a) |
2013 |  |  |
2014 |  |  |

(a) The number of analysis re-passes in 2012 is 2.7, since there is no prompt reco pass in 2012.

3.4 Projected Simulation Computing Requirement

The simulation production model for the HI physics program differs significantly from that of the pp program. The HI program does not follow a "golden event" strategy wherein one looks for special event topologies such as the decay of the Higgs boson. Instead, the HI physics strategy is largely one of statistical analysis. One chooses a specific physics signal, for example jet suppression or elliptic flow as a function of particle transverse momentum. These signals are examined typically as a function of the event's centrality (impact parameter) class, which is highly relevant in HI collisions but not so in pp collisions. The signals may also be analyzed in pp collisions.

To the extent that particular signals are prominent in central HI collisions but not in peripheral collisions nor in appropriately scaled pp collisions, one takes this as evidence of an effect of the dense state of matter created in a central HI collision. The net result of these considerations is that simulation production for HI physics, as has been carried out in the past for the CERN SPS and RHIC programs, involves the generation of a small fraction (a few percent) of the real events, instead of the generation of a comparable number of events as might be the case in the HEP physics program. However, because of the innate complexity of the HI events, the simulation time for these events is much greater than the raw data reconstruction time; for example, simulating a minimum bias event takes far longer than the roughly 25 seconds needed to reconstruct it. As was the case for the reconstruction requirement, the simulation production requirement is derived from estimates based on the use of CMSSW. The annual integrated simulation power estimates are shown in Table 4. The Compute Load column includes both the simulated event generation time and the far smaller reconstruction time.

Table 4: Projected Simulation Computing Load

Year | Event Type | Number of Simulated Events (10^6) | HS06-Sec/Event (10^4) | Total Compute Load (10^10 HS06-Sec)
2010 | Min Bias   |  |  |
2011 | Central    |  |  |
2012 | Central    |  |  |
2013 | Central    |  |  |
2014 | Central    |  |  |

3.5 Projected Integrated Analysis and Simulation Load

The Vanderbilt Tier 2 site is not expected to be a major simulation resource in CMS-HI since it will be the main host of the reconstructed data files. As such, it is more likely that the Vanderbilt site will be a resource for analysis jobs on those reconstructed data files. However, the projected analysis plus simulation load for CMS-HI must be met by the combination of all resources: Vanderbilt, the four non-US Tier 2 HI sites, and the MIT HI analysis center. Hence, it is simpler to integrate over all possible resources and determine whether the total of the available resources is compatible with the projected needs. This comparison is shown in the last column of Table 5. The table shows that the anticipated analysis and simulation requirements of the CMS-HI program can be met by a combination of the Vanderbilt Tier 2 resource (after subtracting the re-reconstruction power) and the rest of the CMS-HI Tier 2 base. The rest of the CMS-HI Tier 2 base is assumed to contribute a constant 3000 HS06 units during this time, while the MIT HI analysis center is projected to grow from 1900 to 7800 HS06 units.

Table 5: Integrated Tier 2 Computing Load Compared to Total Available Resources

Year | Analysis + Simulation Need (10^11 HS06-sec) | Vanderbilt Tier 2 (10^11 HS06-sec) | Total Tier 2 Base (10^11 HS06-sec) | Tier 2 Ratio (Total/Need)
2010 |  |  |  | %
2011 |  |  |  | %
2012 |  |  |  | %
2013 |  |  |  | %
2014 |  |  |  | %

3.6 Projected Disk and Tape Storage Requirements

The FNAL Tier 1 center will serve as the tape archive resource in the HI data model.

The annual raw data volumes listed in Table 2 will be written to tape as soon as they arrive from the Tier 0 center during the one month of HI running at the LHC. Similarly, the prompt reco output from the Tier 0 will also be archived at the FNAL Tier 1 center. The raw data sets will be subscribed to and stored on disk at the Vanderbilt center for subsequent reconstruction re-passes. The prompt reco files will be subscribed to as well, for the initial AOD and PAT file production at Vanderbilt. There will be no scheduled re-reads of raw data or prompt reco files from the FNAL Tier 1 center unless there is a disk hardware failure at the Vanderbilt center causing the loss of the data set. It is anticipated that the annually acquired HI raw data sets written to disk at the Vanderbilt location will remain completely on disk for a one year period, during which multiple reconstruction re-passes may be performed. The exception will be during 2012, when some combination of the 2010 minimum bias data set and the 2011 HLT data set will remain on disk for two years, with extra re-passes possible since the LHC is not assumed to be running in 2012.

The annual total disk and tape volumes are estimated from simulated event sizes, with the usual caveat that these estimates will have to be refined once the real physics data become available in November 2010. With the calculated event sizes and the assumed numbers of events we arrive at the projections for the annual disk storage and tape volume requirements given in Table 6.

Table 6: Disk and Tape Storage for Raw Data Processing

Year | Event Type | Number of Events (10^6) | Disk Storage at Vanderbilt Center (PBytes) | Tape Storage at FNAL Tier 1 (PBytes)
2010 | Min Bias   |                |  |
2011 | HLT        |                |  |
2012 | HLT        | 50 (from 2011) |  |
2013 | HLT        |                |  |
2014 | HLT        |                |  |

Since the Vanderbilt site is required to maintain a complete copy of the raw data on disk, as well as provide space for various cycles of reconstructed data, this mandates the initially large requirement of 0.49 PBytes of disk. Similarly, the FNAL Tier 1 site must write to tape the initial raw data volume and some subsequent production output, with the exception of the no-beam year in 2012. Hence, the tape storage requirement at FNAL generally tracks higher than the disk storage requirement.
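The disk requirement in Table 6 scales with the number of events kept on disk, the per-event raw size, and the number of reconstruction cycles retained alongside the raw copy. The sketch below shows that arithmetic; the event count, per-event size, reconstructed-to-raw size ratio, and number of retained cycles used here are illustrative assumptions only, since the plan's 0.49 PByte figure is derived from its own simulated event sizes.

```python
# Rough disk-volume estimate for one year of HI data at the Tier 2.
# All numerical inputs below are illustrative assumptions, not plan values.

def disk_volume_pb(n_events: float, raw_mb_per_event: float,
                   reco_fraction: float, n_reco_copies: int) -> float:
    """Total disk in PB: one raw copy plus n_reco_copies of reconstructed
    output, where each reco copy is reco_fraction of the raw size."""
    raw_pb = n_events * raw_mb_per_event / 1e9        # MB -> PB
    reco_pb = raw_pb * reco_fraction * n_reco_copies
    return raw_pb + reco_pb

if __name__ == "__main__":
    # e.g. 150 million events at 2 MB/event raw, two reco cycles at 30% of raw size
    total = disk_volume_pb(150e6, 2.0, 0.30, 2)
    print(f"estimated disk need: {total:.2f} PB")
```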

4 Procurement and Implementation Plan

The procurement and implementation plan is discussed separately for the three major hardware components in the following subsections, along with the plans for the installation and continued support of the CMS Software (CMSSW) framework.

4.1 External Networking

In 2008 Vanderbilt University commissioned a 10 Gbps external connection via a dedicated fiber to its connector service company in Atlanta, Southern Crossroads (SoX [9]). Through SoX, Vanderbilt can connect to multiple networks at 10 Gbps, including Internet2, the DOE's ESNET, National Lambda Rail (NLR) and several others. In addition, SoX has its own dedicated 10 Gbps link to Chicago Starlight, and Vanderbilt could purchase use of this link for periods of time if necessary. The 10 Gbps connection from SoX to Vanderbilt lands on a network switch at the campus edge where ACCRE has two 10 Gbps ports on the switch. Therefore ACCRE has a direct connection to the external 10 Gbps link without intervening firewalls, routers, or switches. It will be important to demonstrate sustained, high speed (5 Gbps or more) transfer rates between the CERN Tier 0 center and the Vanderbilt disk storage systems. Based on our conversations with the CMS data operations group at CERN, Vanderbilt has been designated as a selected Tier 2 site which is entitled to receive data files directly from the FNAL Tier 1 site via the standard PhEDEx file transfer software system. In July 2010 we placed one of the CMS-HI graduate students, Benjamin Snook, at CERN to work with the CMS data operations group in learning the Tier 1 file transfer workflows. The first tests of transfer operation are scheduled for mid-summer 2010. Upon completion of his work at the CERN Tier 0 in the summer, Mr. Snook will continue to provide Tier 1 service support at FNAL itself.

4.2 Computing Nodes

ACCRE has been procuring and implementing new hardware annually since 2003. The process includes the following steps:

1) Monitoring hardware options and technology changes on an on-going basis;
2) Understanding new technologies as they are released and determining which are both feasible and reasonable in cost to implement;
3) Attending and scouting out new ideas at the annual supercomputing conference;
4) Testing new software and/or hardware at ACCRE, sometimes via remote login, sometimes by receiving evaluation hardware, and sometimes by purchasing hardware when extended testing is necessary.

The ACCRE cluster originally contained 240 cores in 2003, and grew to about 1,500 cores in the years after 2005. Over the last eighteen months, due to greater research productivity and associated funding, the cluster has expanded to over 3,200 cores.

The new hardware added in the last eighteen months is intended both to meet a significant increase in the demand of researchers and to replace older hardware units as they become obsolete. Based on the expected growth of other researchers at Vanderbilt and the scope of the CMS-HI grant, the DOE-funded CMS-HI component to be installed in the next four years will be about 30-40% of the overall ACCRE cluster size. ACCRE has a track record of running a cluster with multiple architectures for the last six years, and this process works very well. We normally purchase the same type of hardware for multiple purchases in the same year, which allows both stability and the ability to adapt and change to new technologies.

As an example, here is how the process worked out for the last two years. After monitoring and testing hardware, the hardware ACCRE selected for 2009 was dual quad-core Shanghai nodes with 32 GB memory from Penguin Computing in a 1U standard format. Throughout the process, ACCRE staff continued to watch hardware options, and beginning in 2010 ACCRE switched to purchasing (and is still purchasing) dual quad-core Nehalem nodes with 24 GB memory. We evaluated both 1U nodes with a single power source and a four-node chassis with redundant power. The latter was the best price/performance combination. We have been purchasing these quad chassis nodes for six months. These nodes are manufactured by Supermicro and are currently purchased through a partnership of Condre and Serious Systems. At this time, this hardware is still the best option for ACCRE. Unless there are significant market changes in the next few months, the first round of CMS-HI compute nodes will be these quad chassis with four nodes, each node with two quad-core Nehalem processors and 24 GB memory.

The first round of CMS-HI computing hardware purchases for 2010 was scrutinized by a detector readiness workshop held at CERN in June 2010, just prior to the start of this project. The scale of this set of purchases is driven largely by the need to accommodate in advance the largest reasonable amount of data taken by the detector in the first HI run. That amount of data will certainly be large enough to achieve the specified physics goals of the first HI run. The second round of CMS-HI computing hardware purchases is likely to be made in early 2011, and the precise hardware to be purchased at that time has not been determined. We will evaluate Nehalem and Westmere processors and again look at the form factor to determine whether single, dual or quad nodes best meet our needs. Again, the hardware selected will be continued for a period of time for all cluster purchases, both for DOE and for other grants. The scale of the second and future rounds of CMS-HI computing hardware purchases will be set according to what the physics of the 2010 data acquisitions tells us. The presently projected scales of these purchases are given in Section 5, according to the specification requirements presented in Section 3. These projections will be reviewed annually, and modified as appropriate, by a committee in CMS-HI consisting of the detector project director (Bolek Wyslouch), the physics conveners (currently Gunther Roland and Raphael Granier de Cassagnac), and the offline computing manager (Charles Maguire).

Implementation of new hardware is now routine. ACCRE has refreshed a large portion of the cluster in the last eighteen months, so we have been routinely removing old hardware and adding new hardware without requiring cluster downtimes. In general, we strive to limit cluster maintenance downtimes to once every twelve to eighteen months.
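The form-factor evaluation described above (1U single-supply nodes versus a four-node chassis with redundant power) ultimately reduces to comparing cost per delivered HS06 and compute density. The sketch below shows such a comparison in code; the prices, HS06 ratings, and rack-space figures are placeholder assumptions for illustration, not vendor quotes or benchmark results.

```python
# Illustrative cost/performance comparison of candidate node form factors.
# All prices and HS06 ratings below are placeholder assumptions, not vendor
# quotes or benchmark results.
from dataclasses import dataclass

@dataclass
class NodeOption:
    name: str
    price_usd: float      # assumed purchase price per unit
    hs06_per_unit: float  # assumed benchmark rating per unit
    rack_units: float     # physical space consumed per unit

    def dollars_per_hs06(self) -> float:
        return self.price_usd / self.hs06_per_unit

    def hs06_per_rack_unit(self) -> float:
        return self.hs06_per_unit / self.rack_units

if __name__ == "__main__":
    options = [
        NodeOption("1U dual quad-core, single PSU", 3800.0, 90.0, 1.0),
        NodeOption("2U four-node chassis, redundant PSU", 14000.0, 360.0, 2.0),
    ]
    for o in options:
        print(f"{o.name}: ${o.dollars_per_hs06():.1f}/HS06, "
              f"{o.hs06_per_rack_unit():.0f} HS06/U")
```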
The most recent downtime was in March 2010. It was necessitated by a major upgrade to the Vanderbilt data center that houses our hardware. Since our hardware was physically relocated within the data center, we used the opportunity to also update ACCRE's network configuration. The most recent phase of the data center remodel occurred in late July 2010, during which time the ACCRE cluster continued to operate while this work took place. Cluster downtimes are scheduled to work around the requirements of users. This phase of the data center remodel program was completed on August 2, 2010.

4.3 Disk Storage and Internal Networking

4.3.1 L-Store Storage System Basis

All data will be stored using L-Store [10]. L-Store is being actively developed by ACCRE staff and is used for several campus related projects in addition to external collaborations. L-Store implements a complete virtual file system using Logistical Networking's Internet Backplane Protocol (IBP) as the underlying abstraction of distributed storage and the Apache Cassandra project as a scalable mechanism for managing metadata. L-Store provides rich functionality in the form of role based authentication, automated resource discovery, RAID-like mirroring and striping of data for performance and fault tolerance, policy based data management, and transparent peer-to-peer interoperability of storage media. L-Store is designed to provide virtually unlimited scalability in both raw storage and associated file system metadata, with scalable performance in both raw data movement and metadata queries. The system will not require a downtime to add either metadata or disk storage; these can be added on the fly, with the space becoming immediately available. Likewise, L-Store supports the complete lifecycle management of hardware, with the ability to retire hardware resulting in the automatic migration of data off the retiring hardware. IBP supports heterogeneous disk sizes and storage hardware; as a result L-Store can grow based on demand using the latest technology.

L-Store comprises three functional building blocks: L-Server, metadata, and depot. L-Servers are stateless servers that enforce user and system policies. Example policies are automatic data replication, fault-tolerance encoding, and data migration to remote sites to support automated work-flows. All state information is stored in the metadata service. Clients never directly access the metadata service; instead they place requests to the L-Server. The client directly contacts the depots, where all file data is stored, for all disk I/O in order to increase performance. Since each L-Server is stateless, L-Servers can be added or removed without impact, and a client can contact any L-Server it chooses to perform an operation. In this way one can increase the number of L-Servers to meet client or policy demands.

4.3.2 Storage Connectivity

The disk storage depots are designed to be accessed from anywhere, but the highest performance will be achieved from compute resources using ACCRE's internal network. Each depot and L-Server is connected to both the public internet and ACCRE's own private network. The depots are connected to both networks at 10 Gb/s, and the L-Servers use 1 Gb/s connections. The architectural overview is shown in Figure 1.

4.3.3 Depot System

The first generation of storage depot will consist of dual quad-core processors and 36 2-TB disk drives with dual 10 Gb/s network connections. The large compute capability on the depots will be used to provide data integrity throughout the transfer process by performing block level checksums on both the disk and network transfers. The disk checksums are used for both read and write operations, allowing for the detection of silent data corruption. Both checksum phases can be enabled or disabled independently of one another in order to maximize data integrity or throughput. The disk checksum is enabled or disabled at the initial file creation.
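To make the block-level checksum mechanism concrete, the sketch below computes per-block digests at file creation and re-verifies them on a later read, which is the general way silent corruption on disk or in transit is detected. This is a minimal illustration using Python and SHA-256, not the actual L-Store/IBP implementation; the block size, hash choice, and file handling are arbitrary assumptions for the example.

```python
# Minimal illustration of block-level checksumming for silent-corruption
# detection. This is NOT the L-Store/IBP implementation; the block size and
# hash algorithm are arbitrary choices for the example.
import hashlib

BLOCK_SIZE = 4 * 1024 * 1024  # 4 MiB blocks (illustrative choice)

def block_checksums(path: str, block_size: int = BLOCK_SIZE) -> list[str]:
    """Return a SHA-256 digest for each fixed-size block of the file."""
    digests = []
    with open(path, "rb") as f:
        while True:
            block = f.read(block_size)
            if not block:
                break
            digests.append(hashlib.sha256(block).hexdigest())
    return digests

def verify(path: str, expected: list[str]) -> list[int]:
    """Return the indices of blocks whose current digest no longer matches."""
    current = block_checksums(path)
    return [i for i, (a, b) in enumerate(zip(current, expected)) if a != b]

if __name__ == "__main__":
    # Demonstration on a small temporary file (stand-in for a raw data file).
    import os, tempfile
    with tempfile.NamedTemporaryFile(delete=False) as tmp:
        tmp.write(os.urandom(10 * 1024 * 1024))   # 10 MiB of dummy data
        path = tmp.name
    sums = block_checksums(path)      # record digests at "file creation"
    bad = verify(path, sums)          # re-verify on a later "read"
    print(f"{len(sums)} blocks checked, corrupted blocks: {bad}")
    os.remove(path)
```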

Figure 1: Hardware design planned for CMS-HI computing at the ACCRE facility

4.3.4 Internal Networking

As stated previously, the storage will be accessible from both ACCRE's private network and the public internet. The backbone of each of these networks is a collection of Extreme Networks x650 switches. Each of these switches has 24 10 Gb/s ports with either SFP+ optics (x650-24a) or RJ45 connections (x650-24t), and dual 128 Gb/s stacking ports to connect the switch into the network stack.

4.3.5 CMSSW Installation and Support

The Vanderbilt ACCRE facility is already a high functioning Tier 3 site in CMS. It appears on the CMS facility monitoring system as passing all the tests necessary for a Tier 3 designation. A request has been made to the Facility Operations group in CMS to upgrade the Vanderbilt site from a Tier 3 to a Tier 2 in their system. This change in status will be preceded by upgrades to the compute and storage element (CE and SE) servers at Vanderbilt to meet CMS specifications.

A designated Tier 1 production server machine will also be installed at Vanderbilt, similar to such machines at the Tier 0 and the FNAL Tier 1 sites, so that the Vanderbilt site can begin to address the data production tasks that will be done during raw data re-reconstruction passes. The CMSSW updates at ACCRE have been performed routinely and on a timely basis by Dr. Bockjoo Kim of the University of Florida. The Vanderbilt Tier 3 at ACCRE is completely up to date with the latest CMSSW releases under the SL5 operating system.

4.4 Implementation Milestones and Deliverables

The schedule of milestones and deliverables is presented in Table 7. This schedule assumes a spending authorization date of September 30, 2010 and a start of heavy ion data taking in November 2010. The second heavy ion run is assumed to take place in November 2011, after which there is a long shutdown for LHC upgrades. The third heavy ion run is assumed to take place in 2013, with one month of heavy ion running every year after 2013. The schedule of milestones explicitly includes annual reviews of the operational performance of the Tier 2, as well as separate reviews of the projected hardware requirements for the next year of data acquisition and analysis. Reviews of US CMS Tier 2 operational performance are already routinely done on behalf of the US CMS Software and Computing project (USCMSSC) based out of Fermilab. The Vanderbilt Tier 2 will be naturally incorporated into this cycle of reviews. The annual reviews of the next year's hardware requirements will first be conducted by a committee of CMS-HI participants, including the CMS-HI Project Manager, the Physics Analysis conveners, and the Tier 2 Project Director. The estimate of these requirements, as well as the projected requirements for the heavy ion program at the central Tier 0 installation and at the non-US Tier 2 sites, will then be considered by the main CMS computing group. It is standard practice in all of the LHC experiments to prepare such next-year requirements documents, which themselves are subject to external review. In this manner, the annual requirements estimates for the CMS-HI program will be established with the same rigorous review process as the requirements for the pp program in CMS.

Table 7: Schedule of Milestones, Reviews, and Deliverables for the Vanderbilt Tier 2

Milestone or Deliverable | Fiscal Quarter
Review of CPU and disk cost/performance estimates | FY11, Q1
Placement of initial core (3268 HS06) and disk orders (485 TB) | FY11, Q1
Commissioning of data transfer links Tier 0 to Tier 1 to Tier 2 | FY11, Q1
Review of tape archive requirements for 2010 heavy ion data | FY11, Q1
Installation of FNAL tape media for 2010 heavy ion running | FY11, Q1
Installation of initial core and disk storage | FY11, Q1
Operational testing of L-Store disk systems | FY11, Q1
Operational testing of Tier 2 analysis systems | FY11, Q1
Operational testing of Tier 1 re-reconstruction systems | FY11, Q2
Receipt of HI 2010 prompt reconstruction data from FNAL | FY11, Q2
Certification of Vanderbilt Tier 2 status | FY11, Q2
Commissioning of Vanderbilt Tier 2 operations for CMS-HI users | FY11, Q2
Certification of Vanderbilt Tier 1 re-reconstruction status | FY11, Q3
Receipt of HI 2010 raw data from FNAL | FY11, Q2
Commissioning of Vanderbilt Tier 1 re-reconstruction status | FY11, Q3
Review of initial Vanderbilt center performance | FY11, Q4
Review of hardware and tape archive requirements for 2011 HI running | FY11, Q4
Placement of hardware and tape archive orders for 2011 HI running | FY11, Q4
Installation of hardware for 2011 HI running | FY11, Q4
Readiness review for immediate receipt of 2011 HI running | FY11, Q4
Commissioning of operations for 2011 HI data running | FY12, Q1
Routine Tier 2 operations for 2011 HI data | FY12, Q2
Routine Tier 1 re-reconstruction operations for 2011 HI data | FY12, Q2
Review of Vanderbilt center performance for 2011 HI data | FY12, Q4
Review of hardware requirements for 2012 | FY12, Q4
Placement of hardware orders for 2012 re-analysis period | FY12, Q4
Installation of hardware for 2012 re-analysis period | FY12, Q4
Routine Tier 2 re-analysis operations for 2011 HI data | FY13, Q2
Routine Tier 1 re-reconstruction operations for 2011 HI data | FY13, Q2
Review of Vanderbilt center performance for 2012 operations | FY13, Q2
Review of hardware and tape archive requirements for 2013 HI running | FY13, Q4
Placement of hardware and tape archive orders for 2013 HI running | FY13, Q4
Installation of hardware for 2013 HI running | FY13, Q4
Readiness review for immediate receipt of 2013 HI running | FY13, Q4
Commissioning of operations for 2013 HI data running | FY14, Q1
Routine Tier 2 operations for 2013 HI data | FY14, Q2
Routine Tier 1 re-reconstruction operations for 2013 HI data | FY14, Q2
Review of Vanderbilt center performance for 2013 HI data | FY14, Q4

5 Cost Summary

In order to achieve the computing requirements and meet the milestone schedule shown previously, the annual budgets for the development of the new HI Tier 2 center at Vanderbilt are shown in Table 8.

Table 8: Funding Profile for the Vanderbilt Tier 2 Center

Category | CY10 | CY11 | CY12 | CY13 | CY14 | Total

Compute/Disk Acquisitions
New CPUs (cores)   |  |  |  |  |  |
New CPUs (HS06)    |  |  |  |  |  |
Total CPUs (cores) |  |  |  |  |  |
Total CPUs (HS06)  |  |  |  |  |  |
New Disk (TB)      |  |  |  |  |  |
Total Disk (TB)    |  |  |  |  |  |

Hardware and Staffing Costs to the DOE
CPUs                 | $137,600 | $196,000 | $264,000 | $112,000 | $0       | $709,600
Disk                 | $121,250 | $56,000  | $45,000  | $15,255  | $0       | $237,505
Total Hardware       | $258,850 | $252,000 | $309,000 | $127,255 | $0       | $947,105
Staffing (DOE Cost)  | $180,476 | $188,285 | $195,816 | $203,649 | $211,795 | $980,021
Staff+Hardware Total | $439,326 | $440,285 | $504,816 | $330,904 | $211,795 | $1,927,126

Staffing Support Decomposition (FTEs)
By DOE                      | 1.5 | 1.5 | 1.5 | 1.5 | 1.5 | 7.5
By Vanderbilt               | 1.5 | 1.5 | 1.5 | 1.5 | 1.5 | 7.5
Staffing (Vanderbilt Cost)  | $180,476 | $188,285 | $195,816 | $203,648 | $211,795 | $980,021

CPU, Disk, and FTE Cost Assumptions
Cost/core (with 3 GB memory) |  |  |  |  |  |
Disk cost per TB             |  |  |  |  |  |
Total cost per FTE           | $120,317 | $125,523 | $130,544 | $135,766 | $141,197 |

These costs cover the CPU nodes, the disk storage, and operating staff support. The specific operating staff assignments are shown in Table 10. The annual costs for supporting the HI tape archive component at the FNAL Tier 1 are shown in Table 9. The projected tape volumes follow the anticipated raw data volumes to be produced at the LHC. The incremental cost of hosting HI data at FNAL is estimated to be $110 per tape slot, including overhead. This includes the media cost, the incremental tape library cost, and maintenance. The current technology being used by FNAL is LTO4 tape at 800 GB per slot, filled to about 90%. From 2012 on we anticipate that LTO5 technology will become available, which will double the per-slot capacity. In addition, we foresee a dedicated tape drive for HI data transfers. We propose to put this item into the 2011 funding, at an estimated cost of $25,000, since only then would the LTO5 technology become available; we will continue to use the current LTO4 tape drives in the year before.

Prof. Charles F. Maguire is the designated manager for the Tier 2 Project at Vanderbilt University.

He will have the ultimate single responsibility for authorizing spending for items in this project at Vanderbilt and at Fermilab. The spending on this project will be reported to the CMS-HI project manager and the DOE-NP operations office on a regular basis by the sponsored research accounting office of Vanderbilt University.

Table 9: Funding Profile for FNAL Tier 1 Tape Archive

Category         | CY10    | CY11        | CY12    | CY13     | CY14     | Total
Tape Volume (PB) |         |             |         |          |          |
Cost to DOE      | $94,000 | $103,000 (a)| $40,000 | $116,000 | $120,000 | $473,000

(a) Includes a $25,000 capital charge for the purchase of an LTO5 tape drive.

6 Computing Management Organization

6.1 ACCRE Internal Computing Organization

ACCRE has always been a faculty/research driven organization. It was founded by faculty, reports to a faculty steering committee with financial oversight, and is directed on an operational basis by a faculty advisory board. The CMS faculty are also involved in the oversight of ACCRE: Charles Maguire is a member of the faculty advisory board and Paul Sheldon is the chair of the faculty steering committee. On a daily basis, operation of ACCRE is shared by Alan Tackett, technical director, and Carie Lee Kennedy, management/finance director, who have worked together as a team for over six years. The ACCRE staff is very stable, with a combined total of over fifty years of experience at ACCRE and over one hundred years of industry experience.

ACCRE has been planning for the implementation of CMS-HI and hired a new employee in January 2010 on a short-term basis in anticipation of this project being funded. This advance hiring (at Vanderbilt's expense) allowed us to train this employee so that he would be ready to deploy on this project when it was funded. This individual will transition by August 15 to work 80% on this project. The other ACCRE staff who will work on this project have been employed by ACCRE for a number of years and so bring strong technical skills and experience to the project. The funding by DOE and the matching staff funding by Vanderbilt for the first year will be allocated according to Table 10.

Table 10: ACCRE Staff Effort on the CMS-HI Tier 2 Project

Staff Person  | Area of Expertise                       | FTE by DOE | FTE by VU | Total FTE
M. Binkley    | security, firewall, storage admin       |  |  |
B. Brown      | storage admin, networking               |  |  |
K. Buterbaugh | cluster file systems, infrastructure    |  |  |
S. De Ledesma | storage software implementation         |  |  |
M. Heller     | storage admin, software implementation  |  |  |
C. Johnson    | cluster scheduling, hardware            |  |  |
J. Mora       | cluster networking, infrastructure      |  |  |
New Hire      | storage software implementation         |  |  |
Total         |                                         |  |  |

6.2 CMS-HI Internal Computing Organization

In order to carry out the various HI computing operations, the computing organization for CMS-HI is as shown in Figure 2. Nine major computing responsibilities are depicted in this figure, along with the names of the persons assigned to each responsibility. These nine persons form the internal computing committee for CMS-HI. That committee is chaired by the CMS-HI-US Project Director, Bolek Wyslouch, with deputy chair Charles Maguire, who also serves as the Off-Line Computing Director. In this role Prof. Maguire reports on behalf of the HI group to the CMS Computing Review Board (CRB). The CRB sets computing policies for all of CMS and reviews their effectiveness at each computing site.

Some particular aspects of the CMS-HI computing responsibilities are worth elaborating in order to illustrate the close cooperation which has already been attained between HI computing operations and

the rest of CMS computing:

HLT Operations: The algorithms needed for the HI trigger decisions were the subject of an in-depth review in CMS early in 2009, as well as at the May 2009 DOE-NP review. These algorithms are judged ready for the tag-and-pass commissioning run in November 2010.

DQM Operations: Prof. Velkovska spent her sabbatical year at CERN learning the standard CMS DQM operations. A research associate from the Vanderbilt University RHI group, Pelin Kurt, has been stationed at CERN since December 2009 with a primary responsibility for interfacing with the CMS DQM operations group. Vanderbilt University itself is funding a 30 m^2 CMS data operations center in a prime location of its Physics Department building in support of the DQM responsibility taken on by Prof. Velkovska. (1) This data center began operations in early March 2010, just after pp collisions restarted at the LHC. The center also has consoles dedicated to monitoring the off-line data reconstruction responsibility which the Vanderbilt group has assumed.

Tier 0 Operations: Dr. Klute has worked in person with many members of the CMS Tier 0 data operations group (Dirk Hufnagel, Josh Bendavid, Stephen Gowdy, and David Evans) and has responsibility for guiding the processing of the HI data at the Tier 0 and its export to the Tier 1 site at FNAL. Naturally, the HLT, DQM, and Tier 0 operations responsibles in HI will be in constant contact during the HI running, as these three tasks are mutually inter-dependent.

VU Tier 2 Operations: Prof. Maguire has been working with the ACCRE technical staff for the past two years in order to bring that facility into operation for CMS-HI data reconstruction and analysis. Prof. Maguire will also serve as the primary liaison with the FNAL Tier 1 operations group while data are being transferred between FNAL and Vanderbilt.

(1) Construction of this CMS data center began in mid-December 2009 and finished three months later; occupancy began on February 22, 2010, and furniture and computer equipment were then installed. This center was fully operational when pp collisions resumed at the LHC in mid-March, and it participated in the March 30 world-wide media event at CERN celebrating the successful commissioning of the LHC for the physics research program. During pp data taking this center will be staffed by members of the local RHI and HEP groups. The total cost of $110,000, including equipment, is being borne by Vanderbilt University.

MIT HI Analysis Center Operations: This center at MIT-Bates is funded as a separate project, but it will be closely linked and synchronized in operations with the main Vanderbilt Tier 2 center. The MIT HI Analysis Center will be responsible for the following:

1) Generating and analyzing simulated data for heavy ion events.
2) Functioning as an additional analysis center for the CMS-HI Physics Analysis Groups.

The organization of the joint HE-HI facility at MIT-Bates will be overseen by Profs. Boleslaw Wyslouch and Christoph Paus. Maxim Goncharov is the overall manager. Wei Li of the MIT heavy ion group provides part time support for the grid operations and is in charge of the overall simulations for the CMS HI program. The long-term plan is for the HI and HE groups each to support one post-doctoral associate who will spend roughly half of their time on computer facility management. In addition, on average one to three graduate students from each of the groups will devote a fraction of their time to supporting the facility operations. The Computer Services Group of LNS will provide important system management expertise and personnel.

Figure 2: Organization chart for the CMS-HI computing tasks. The chart shows the Project Director, Bolek Wyslouch, and the nine computing responsibilities:

HLT Operations: Christof Roland. Coordinates with the CMS HLT group to design the DAQ bandwidth for HI running.
DQM Operations: Julia Velkovska. Supervises the on-line and off-line data quality monitoring during HI running.
T0 Operations: Markus Klute. Coordinates the HI data operations at the T0 for prompt reco and off-site transfers.
VU T2 Operations: Charles Maguire. Manages the raw data reconstruction and the analysis passes at the Vanderbilt T2 site.
MIT T2 Operations: Christoph Paus. Manages the overall functioning of the MIT T2 site for CMS simulations and analysis.
Non-US T2 Operations: R. Granier de Cassagnac. Coordinates the activities at the non-US T2 sites (Russia, France, Brazil, Turkey).
Software Coordinator: Edward Wenger. Coordinates with the CMS software group for the supervision of HI code releases.
Simulations Manager: Wei Li. Coordinates the HI simulation production requests with the CMS data operations group.
Analysis Coordinator: Gunther Roland. Oversees the analysis work of the HI Physics Interest Groups.


Connecticut Department of Department of Administrative Services and the Broadband Technology Opportunity Program (BTOP) 8/20/2012 1 Connecticut Department of Department of Administrative Services and the Broadband Technology Opportunity Program (BTOP) 8/20/2012 1 Presentation Overview What is BTOP? Making BTOP work for our state What

More information

Compact Muon Solenoid: Cyberinfrastructure Solutions. Ken Bloom UNL Cyberinfrastructure Workshop -- August 15, 2005

Compact Muon Solenoid: Cyberinfrastructure Solutions. Ken Bloom UNL Cyberinfrastructure Workshop -- August 15, 2005 Compact Muon Solenoid: Cyberinfrastructure Solutions Ken Bloom UNL Cyberinfrastructure Workshop -- August 15, 2005 Computing Demands CMS must provide computing to handle huge data rates and sizes, and

More information

Prompt data reconstruction at the ATLAS experiment

Prompt data reconstruction at the ATLAS experiment Prompt data reconstruction at the ATLAS experiment Graeme Andrew Stewart 1, Jamie Boyd 1, João Firmino da Costa 2, Joseph Tuggle 3 and Guillaume Unal 1, on behalf of the ATLAS Collaboration 1 European

More information

CMS High Level Trigger Timing Measurements

CMS High Level Trigger Timing Measurements Journal of Physics: Conference Series PAPER OPEN ACCESS High Level Trigger Timing Measurements To cite this article: Clint Richardson 2015 J. Phys.: Conf. Ser. 664 082045 Related content - Recent Standard

More information

Optimizing Parallel Access to the BaBar Database System Using CORBA Servers

Optimizing Parallel Access to the BaBar Database System Using CORBA Servers SLAC-PUB-9176 September 2001 Optimizing Parallel Access to the BaBar Database System Using CORBA Servers Jacek Becla 1, Igor Gaponenko 2 1 Stanford Linear Accelerator Center Stanford University, Stanford,

More information

Big Computing and the Mitchell Institute for Fundamental Physics and Astronomy. David Toback

Big Computing and the Mitchell Institute for Fundamental Physics and Astronomy. David Toback Big Computing and the Mitchell Institute for Fundamental Physics and Astronomy Texas A&M Big Data Workshop October 2011 January 2015, Texas A&M University Research Topics Seminar 1 Outline Overview of

More information

Overview. About CERN 2 / 11

Overview. About CERN 2 / 11 Overview CERN wanted to upgrade the data monitoring system of one of its Large Hadron Collider experiments called ALICE (A La rge Ion Collider Experiment) to ensure the experiment s high efficiency. They

More information

Precision Timing in High Pile-Up and Time-Based Vertex Reconstruction

Precision Timing in High Pile-Up and Time-Based Vertex Reconstruction Precision Timing in High Pile-Up and Time-Based Vertex Reconstruction Cedric Flamant (CERN Summer Student) - Supervisor: Adi Bornheim Division of High Energy Physics, California Institute of Technology,

More information

Spanish Tier-2. Francisco Matorras (IFCA) Nicanor Colino (CIEMAT) F. Matorras N.Colino, Spain CMS T2,.6 March 2008"

Spanish Tier-2. Francisco Matorras (IFCA) Nicanor Colino (CIEMAT) F. Matorras N.Colino, Spain CMS T2,.6 March 2008 Spanish Tier-2 Francisco Matorras (IFCA) Nicanor Colino (CIEMAT) Introduction Report here the status of the federated T2 for CMS basically corresponding to the budget 2006-2007 concentrate on last year

More information

Batch Services at CERN: Status and Future Evolution

Batch Services at CERN: Status and Future Evolution Batch Services at CERN: Status and Future Evolution Helge Meinhard, CERN-IT Platform and Engineering Services Group Leader HTCondor Week 20 May 2015 20-May-2015 CERN batch status and evolution - Helge

More information

Data oriented job submission scheme for the PHENIX user analysis in CCJ

Data oriented job submission scheme for the PHENIX user analysis in CCJ Journal of Physics: Conference Series Data oriented job submission scheme for the PHENIX user analysis in CCJ To cite this article: T Nakamura et al 2011 J. Phys.: Conf. Ser. 331 072025 Related content

More information

Online data storage service strategy for the CERN computer Centre G. Cancio, D. Duellmann, M. Lamanna, A. Pace CERN, Geneva, Switzerland

Online data storage service strategy for the CERN computer Centre G. Cancio, D. Duellmann, M. Lamanna, A. Pace CERN, Geneva, Switzerland Online data storage service strategy for the CERN computer Centre G. Cancio, D. Duellmann, M. Lamanna, A. Pace CERN, Geneva, Switzerland Abstract. The Data and Storage Services group at CERN is conducting

More information

Bill Boroski LQCD-ext II Contractor Project Manager

Bill Boroski LQCD-ext II Contractor Project Manager Bill Boroski LQCD-ext II Contractor Project Manager boroski@fnal.gov Robert D. Kennedy LQCD-ext II Assoc. Contractor Project Manager kennedy@fnal.gov USQCD All-Hands Meeting Jefferson Lab April 28-29,

More information

System upgrade and future perspective for the operation of Tokyo Tier2 center. T. Nakamura, T. Mashimo, N. Matsui, H. Sakamoto and I.

System upgrade and future perspective for the operation of Tokyo Tier2 center. T. Nakamura, T. Mashimo, N. Matsui, H. Sakamoto and I. System upgrade and future perspective for the operation of Tokyo Tier2 center, T. Mashimo, N. Matsui, H. Sakamoto and I. Ueda International Center for Elementary Particle Physics, The University of Tokyo

More information

ATLAS NOTE. December 4, ATLAS offline reconstruction timing improvements for run-2. The ATLAS Collaboration. Abstract

ATLAS NOTE. December 4, ATLAS offline reconstruction timing improvements for run-2. The ATLAS Collaboration. Abstract ATLAS NOTE December 4, 2014 ATLAS offline reconstruction timing improvements for run-2 The ATLAS Collaboration Abstract ATL-SOFT-PUB-2014-004 04/12/2014 From 2013 to 2014 the LHC underwent an upgrade to

More information

LHC and LSST Use Cases

LHC and LSST Use Cases LHC and LSST Use Cases Depots Network 0 100 200 300 A B C Paul Sheldon & Alan Tackett Vanderbilt University LHC Data Movement and Placement n Model must evolve n Was: Hierarchical, strategic pre- placement

More information

The GAP project: GPU applications for High Level Trigger and Medical Imaging

The GAP project: GPU applications for High Level Trigger and Medical Imaging The GAP project: GPU applications for High Level Trigger and Medical Imaging Matteo Bauce 1,2, Andrea Messina 1,2,3, Marco Rescigno 3, Stefano Giagu 1,3, Gianluca Lamanna 4,6, Massimiliano Fiorini 5 1

More information

The evolving role of Tier2s in ATLAS with the new Computing and Data Distribution model

The evolving role of Tier2s in ATLAS with the new Computing and Data Distribution model Journal of Physics: Conference Series The evolving role of Tier2s in ATLAS with the new Computing and Data Distribution model To cite this article: S González de la Hoz 2012 J. Phys.: Conf. Ser. 396 032050

More information

NC Education Cloud Feasibility Report

NC Education Cloud Feasibility Report 1 NC Education Cloud Feasibility Report 1. Problem Definition and rationale North Carolina districts are generally ill-equipped to manage production server infrastructure. Server infrastructure is most

More information

1. Introduction. Outline

1. Introduction. Outline Outline 1. Introduction ALICE computing in Run-1 and Run-2 2. ALICE computing in Run-3 and Run-4 (2021-) 3. Current ALICE O 2 project status 4. T2 site(s) in Japan and network 5. Summary 2 Quark- Gluon

More information

STRATEGIC PLAN. USF Emergency Management

STRATEGIC PLAN. USF Emergency Management 2016-2020 STRATEGIC PLAN USF Emergency Management This page intentionally left blank. Organization Overview The Department of Emergency Management (EM) is a USF System-wide function based out of the Tampa

More information

Performance of the ATLAS Inner Detector at the LHC

Performance of the ATLAS Inner Detector at the LHC Performance of the ALAS Inner Detector at the LHC hijs Cornelissen for the ALAS Collaboration Bergische Universität Wuppertal, Gaußstraße 2, 4297 Wuppertal, Germany E-mail: thijs.cornelissen@cern.ch Abstract.

More information

Improving Network Infrastructure to Enable Large Scale Scientific Data Flows and Collaboration (Award # ) Klara Jelinkova Joseph Ghobrial

Improving Network Infrastructure to Enable Large Scale Scientific Data Flows and Collaboration (Award # ) Klara Jelinkova Joseph Ghobrial Improving Network Infrastructure to Enable Large Scale Scientific Data Flows and Collaboration (Award # 1659348) Klara Jelinkova Joseph Ghobrial NSF Campus Cyberinfrastructure PI and Cybersecurity Innovation

More information

The Transition to Networked Storage

The Transition to Networked Storage The Transition to Networked Storage Jim Metzler Ashton, Metzler & Associates Table of Contents 1.0 Executive Summary... 3 2.0 The Emergence of the Storage Area Network... 3 3.0 The Link Between Business

More information

Architecting the High Performance Storage Network

Architecting the High Performance Storage Network Architecting the High Performance Storage Network Jim Metzler Ashton, Metzler & Associates Table of Contents 1.0 Executive Summary...3 3.0 SAN Architectural Principals...5 4.0 The Current Best Practices

More information

ACCRE High Performance Compute Cluster

ACCRE High Performance Compute Cluster 6 중 1 2010-05-16 오후 1:44 Enabling Researcher-Driven Innovation and Exploration Mission / Services Research Publications User Support Education / Outreach A - Z Index Our Mission History Governance Services

More information

DESY at the LHC. Klaus Mőnig. On behalf of the ATLAS, CMS and the Grid/Tier2 communities

DESY at the LHC. Klaus Mőnig. On behalf of the ATLAS, CMS and the Grid/Tier2 communities DESY at the LHC Klaus Mőnig On behalf of the ATLAS, CMS and the Grid/Tier2 communities A bit of History In Spring 2005 DESY decided to participate in the LHC experimental program During summer 2005 a group

More information

High Throughput WAN Data Transfer with Hadoop-based Storage

High Throughput WAN Data Transfer with Hadoop-based Storage High Throughput WAN Data Transfer with Hadoop-based Storage A Amin 2, B Bockelman 4, J Letts 1, T Levshina 3, T Martin 1, H Pi 1, I Sfiligoi 1, M Thomas 2, F Wuerthwein 1 1 University of California, San

More information

Virtualizing a Batch. University Grid Center

Virtualizing a Batch. University Grid Center Virtualizing a Batch Queuing System at a University Grid Center Volker Büge (1,2), Yves Kemp (1), Günter Quast (1), Oliver Oberst (1), Marcel Kunze (2) (1) University of Karlsruhe (2) Forschungszentrum

More information

M E M O R A N D U M. To: California State Lottery Commission Date: October 16, Item 9(c): Approval to Hire Project Management Consultant

M E M O R A N D U M. To: California State Lottery Commission Date: October 16, Item 9(c): Approval to Hire Project Management Consultant M E M O R A N D U M To: California State Lottery Commission Date: From: Joan M. Borucki Director Prepared By: Linh Nguyen Chief Deputy Director Subject: Item 9(c): Approval to Hire Project Management Consultant

More information

CouchDB-based system for data management in a Grid environment Implementation and Experience

CouchDB-based system for data management in a Grid environment Implementation and Experience CouchDB-based system for data management in a Grid environment Implementation and Experience Hassen Riahi IT/SDC, CERN Outline Context Problematic and strategy System architecture Integration and deployment

More information

PoS(EPS-HEP2017)523. The CMS trigger in Run 2. Mia Tosi CERN

PoS(EPS-HEP2017)523. The CMS trigger in Run 2. Mia Tosi CERN CERN E-mail: mia.tosi@cern.ch During its second period of operation (Run 2) which started in 2015, the LHC will reach a peak instantaneous luminosity of approximately 2 10 34 cm 2 s 1 with an average pile-up

More information

Community Development Commission of the County of Los Angeles

Community Development Commission of the County of Los Angeles VDI (Virtual Desktop Infrastructure) Implementation 2018 NACo Achievement Awards 1. Abstract of the Program. (~200 words or less). Summarize the program include the program description, the purpose, and

More information

Advancing the MRJ project

Advancing the MRJ project Advancing the MRJ project 2017.1.23 2017 MITSUBISHI HEAVY INDUSTRIES, LTD. All Rights Reserved. Overview The Mitsubishi Regional Jet (MRJ) delivery date is adjusted from mid-2018 to mid-2020 due to revisions

More information

Evaluation of the computing resources required for a Nordic research exploitation of the LHC

Evaluation of the computing resources required for a Nordic research exploitation of the LHC PROCEEDINGS Evaluation of the computing resources required for a Nordic research exploitation of the LHC and Sverker Almehed, Chafik Driouichi, Paula Eerola, Ulf Mjörnmark, Oxana Smirnova,TorstenÅkesson

More information

Microsoft SQL Server on Stratus ftserver Systems

Microsoft SQL Server on Stratus ftserver Systems W H I T E P A P E R Microsoft SQL Server on Stratus ftserver Systems Security, scalability and reliability at its best Uptime that approaches six nines Significant cost savings for your business Only from

More information

Next Generation Backup: Better ways to deal with rapid data growth and aging tape infrastructures

Next Generation Backup: Better ways to deal with rapid data growth and aging tape infrastructures Next Generation Backup: Better ways to deal with rapid data growth and aging tape infrastructures Next 1 What we see happening today. The amount of data businesses must cope with on a daily basis is getting

More information

Tracking and flavour tagging selection in the ATLAS High Level Trigger

Tracking and flavour tagging selection in the ATLAS High Level Trigger Tracking and flavour tagging selection in the ATLAS High Level Trigger University of Pisa and INFN E-mail: milene.calvetti@cern.ch In high-energy physics experiments, track based selection in the online

More information

LHCb Computing Resource usage in 2017

LHCb Computing Resource usage in 2017 LHCb Computing Resource usage in 2017 LHCb-PUB-2018-002 07/03/2018 LHCb Public Note Issue: First version Revision: 0 Reference: LHCb-PUB-2018-002 Created: 1 st February 2018 Last modified: 12 th April

More information

How Cisco IT Improved Development Processes with a New Operating Model

How Cisco IT Improved Development Processes with a New Operating Model How Cisco IT Improved Development Processes with a New Operating Model New way to manage IT investments supports innovation, improved architecture, and stronger process standards for Cisco IT By Patrick

More information

Direct photon measurements in ALICE. Alexis Mas for the ALICE collaboration

Direct photon measurements in ALICE. Alexis Mas for the ALICE collaboration Direct photon measurements in ALICE Alexis Mas for the ALICE collaboration 1 Outline I - Physics motivations for direct photon measurements II Direct photon measurements in ALICE i - Conversion method

More information

Muon Reconstruction and Identification in CMS

Muon Reconstruction and Identification in CMS Muon Reconstruction and Identification in CMS Marcin Konecki Institute of Experimental Physics, University of Warsaw, Poland E-mail: marcin.konecki@gmail.com An event reconstruction at LHC is a challenging

More information

NORTH CAROLINA NC MRITE. Nominating Category: Enterprise IT Management Initiatives

NORTH CAROLINA NC MRITE. Nominating Category: Enterprise IT Management Initiatives NORTH CAROLINA MANAGING RISK IN THE INFORMATION TECHNOLOGY ENTERPRISE NC MRITE Nominating Category: Nominator: Ann V. Garrett Chief Security and Risk Officer State of North Carolina Office of Information

More information

Solution Brief: Archiving with Harmonic Media Application Server and ProXplore

Solution Brief: Archiving with Harmonic Media Application Server and ProXplore Solution Brief: Archiving with Harmonic Media Application Server and ProXplore Summary Harmonic Media Application Server (MAS) provides management of content across the Harmonic server and storage infrastructure.

More information

How Cisco IT Deployed Cisco Firewall Services Modules at Scientific Atlanta

How Cisco IT Deployed Cisco Firewall Services Modules at Scientific Atlanta How Cisco IT Deployed Cisco Firewall Services Modules at Scientific Atlanta Deployment increases network security, reliability, and availability. Cisco IT Case Study / Security / Firewall Services Module:

More information

Database Services at CERN with Oracle 10g RAC and ASM on Commodity HW

Database Services at CERN with Oracle 10g RAC and ASM on Commodity HW Database Services at CERN with Oracle 10g RAC and ASM on Commodity HW UKOUG RAC SIG Meeting London, October 24 th, 2006 Luca Canali, CERN IT CH-1211 LCGenève 23 Outline Oracle at CERN Architecture of CERN

More information

CMS Conference Report

CMS Conference Report Available on CMS information server CMS CR 2005/021 CMS Conference Report 29 Septemebr 2005 Track and Vertex Reconstruction with the CMS Detector at LHC S. Cucciarelli CERN, Geneva, Switzerland Abstract

More information

EMC Business Continuity for Microsoft Applications

EMC Business Continuity for Microsoft Applications EMC Business Continuity for Microsoft Applications Enabled by EMC Celerra, EMC MirrorView/A, EMC Celerra Replicator, VMware Site Recovery Manager, and VMware vsphere 4 Copyright 2009 EMC Corporation. All

More information

Xcellis Technical Overview: A deep dive into the latest hardware designed for StorNext 5

Xcellis Technical Overview: A deep dive into the latest hardware designed for StorNext 5 TECHNOLOGY BRIEF Xcellis Technical Overview: A deep dive into the latest hardware designed for StorNext 5 ABSTRACT Xcellis represents the culmination of over 15 years of file system and data management

More information

Big Data Analytics and the LHC

Big Data Analytics and the LHC Big Data Analytics and the LHC Maria Girone CERN openlab CTO Computing Frontiers 2016, Como, May 2016 DOI: 10.5281/zenodo.45449, CC-BY-SA, images courtesy of CERN 2 3 xx 4 Big bang in the laboratory We

More information

GRIDS INTRODUCTION TO GRID INFRASTRUCTURES. Fabrizio Gagliardi

GRIDS INTRODUCTION TO GRID INFRASTRUCTURES. Fabrizio Gagliardi GRIDS INTRODUCTION TO GRID INFRASTRUCTURES Fabrizio Gagliardi Dr. Fabrizio Gagliardi is the leader of the EU DataGrid project and designated director of the proposed EGEE (Enabling Grids for E-science

More information

Summary of Data Management Principles

Summary of Data Management Principles Large Synoptic Survey Telescope (LSST) Summary of Data Management Principles Steven M. Kahn LPM-151 Latest Revision: June 30, 2015 Change Record Version Date Description Owner name 1 6/30/2015 Initial

More information

L1 and Subsequent Triggers

L1 and Subsequent Triggers April 8, 2003 L1 and Subsequent Triggers Abstract During the last year the scope of the L1 trigger has changed rather drastically compared to the TP. This note aims at summarising the changes, both in

More information

Technology Insight Series

Technology Insight Series IBM ProtecTIER Deduplication for z/os John Webster March 04, 2010 Technology Insight Series Evaluator Group Copyright 2010 Evaluator Group, Inc. All rights reserved. Announcement Summary The many data

More information

Clustering and Reclustering HEP Data in Object Databases

Clustering and Reclustering HEP Data in Object Databases Clustering and Reclustering HEP Data in Object Databases Koen Holtman CERN EP division CH - Geneva 3, Switzerland We formulate principles for the clustering of data, applicable to both sequential HEP applications

More information

Data Quality Monitoring at CMS with Machine Learning

Data Quality Monitoring at CMS with Machine Learning Data Quality Monitoring at CMS with Machine Learning July-August 2016 Author: Aytaj Aghabayli Supervisors: Jean-Roch Vlimant Maurizio Pierini CERN openlab Summer Student Report 2016 Abstract The Data Quality

More information

FOUR WAYS TO LOWER THE COST OF REPLICATION

FOUR WAYS TO LOWER THE COST OF REPLICATION WHITE PAPER I JANUARY 2010 FOUR WAYS TO LOWER THE COST OF REPLICATION How an Ultra-Efficient, Virtualized Storage Platform Brings Disaster Recovery within Reach for Any Organization FOUR WAYS TO LOWER

More information

Large scale commissioning and operational experience with tier-2 to tier-2 data transfer links in CMS

Large scale commissioning and operational experience with tier-2 to tier-2 data transfer links in CMS Journal of Physics: Conference Series Large scale commissioning and operational experience with tier-2 to tier-2 data transfer links in CMS To cite this article: J Letts and N Magini 2011 J. Phys.: Conf.

More information

Storage Optimization with Oracle Database 11g

Storage Optimization with Oracle Database 11g Storage Optimization with Oracle Database 11g Terabytes of Data Reduce Storage Costs by Factor of 10x Data Growth Continues to Outpace Budget Growth Rate of Database Growth 1000 800 600 400 200 1998 2000

More information

July 13, Via to RE: International Internet Policy Priorities [Docket No ]

July 13, Via  to RE: International Internet Policy Priorities [Docket No ] July 13, 2018 Honorable David J. Redl Assistant Secretary for Communications and Information and Administrator, National Telecommunications and Information Administration U.S. Department of Commerce Washington,

More information

Frontline Interoperability Test Team Case Studies

Frontline Interoperability Test Team Case Studies Frontline Interoperability Test Team Case Studies Frontline IOT Means Maximum Device Compatibility Case Summary A large Bluetooth developer (Customer X) created a new Bluetooth-enabled phone for commercial

More information

Total Cost of Ownership: Benefits of ECM in the OpenText Cloud

Total Cost of Ownership: Benefits of ECM in the OpenText Cloud Total Cost of Ownership: Benefits of ECM in the OpenText Cloud OpenText Managed Services brings together the power of an enterprise cloud platform with the technical skills and business experience required

More information

Total Cost of Ownership: Benefits of the OpenText Cloud

Total Cost of Ownership: Benefits of the OpenText Cloud Total Cost of Ownership: Benefits of the OpenText Cloud OpenText Managed Services in the Cloud delivers on the promise of a digital-first world for businesses of all sizes. This paper examines how organizations

More information

Opportunities A Realistic Study of Costs Associated

Opportunities A Realistic Study of Costs Associated e-fiscal Summer Workshop Opportunities A Realistic Study of Costs Associated X to Datacenter Installation and Operation in a Research Institute can we do EVEN better? Samos, 3rd July 2012 Jesús Marco de

More information

Information Technology Services. Informational Report for the Board of Trustees October 11, 2017 Prepared effective August 31, 2017

Information Technology Services. Informational Report for the Board of Trustees October 11, 2017 Prepared effective August 31, 2017 Information Technology Services Informational Report for the Board of Trustees October 11, 2017 Prepared effective August 31, 2017 Information Technology Services TABLE OF CONTENTS UPDATE ON PROJECTS &

More information

Long Term Data Preservation for CDF at INFN-CNAF

Long Term Data Preservation for CDF at INFN-CNAF Long Term Data Preservation for CDF at INFN-CNAF S. Amerio 1, L. Chiarelli 2, L. dell Agnello 3, D. De Girolamo 3, D. Gregori 3, M. Pezzi 3, A. Prosperini 3, P. Ricci 3, F. Rosso 3, and S. Zani 3 1 University

More information

Users and utilization of CERIT-SC infrastructure

Users and utilization of CERIT-SC infrastructure Users and utilization of CERIT-SC infrastructure Equipment CERIT-SC is an integral part of the national e-infrastructure operated by CESNET, and it leverages many of its services (e.g. management of user

More information

SPARC 2 Consultations January-February 2016

SPARC 2 Consultations January-February 2016 SPARC 2 Consultations January-February 2016 1 Outline Introduction to Compute Canada SPARC 2 Consultation Context Capital Deployment Plan Services Plan Access and Allocation Policies (RAC, etc.) Discussion

More information

Storage and I/O requirements of the LHC experiments

Storage and I/O requirements of the LHC experiments Storage and I/O requirements of the LHC experiments Sverre Jarp CERN openlab, IT Dept where the Web was born 22 June 2006 OpenFabrics Workshop, Paris 1 Briefly about CERN 22 June 2006 OpenFabrics Workshop,

More information

Computing. DOE Program Review SLAC. Rainer Bartoldus. Breakout Session 3 June BaBar Deputy Computing Coordinator

Computing. DOE Program Review SLAC. Rainer Bartoldus. Breakout Session 3 June BaBar Deputy Computing Coordinator Computing DOE Program Review SLAC Breakout Session 3 June 2004 Rainer Bartoldus BaBar Deputy Computing Coordinator 1 Outline The New Computing Model (CM2) New Kanga/ROOT event store, new Analysis Model,

More information

Towards Network Awareness in LHC Computing

Towards Network Awareness in LHC Computing Towards Network Awareness in LHC Computing CMS ALICE CERN Atlas LHCb LHC Run1: Discovery of a New Boson LHC Run2: Beyond the Standard Model Gateway to a New Era Artur Barczyk / Caltech Internet2 Technology

More information

First results from the LHCb Vertex Locator

First results from the LHCb Vertex Locator First results from the LHCb Vertex Locator Act 1: LHCb Intro. Act 2: Velo Design Dec. 2009 Act 3: Initial Performance Chris Parkes for LHCb VELO group Vienna Conference 2010 2 Introducing LHCb LHCb is

More information

Monitoring system for geographically distributed datacenters based on Openstack. Gioacchino Vino

Monitoring system for geographically distributed datacenters based on Openstack. Gioacchino Vino Monitoring system for geographically distributed datacenters based on Openstack Gioacchino Vino Tutor: Dott. Domenico Elia Tutor: Dott. Giacinto Donvito Borsa di studio GARR Orio Carlini 2016-2017 INFN

More information