Review of the Compact Muon Solenoid (CMS) Collaboration Heavy Ion Computing Proposal
Office of Nuclear Physics Report
Review of the Compact Muon Solenoid (CMS) Collaboration Heavy Ion Computing Proposal
May 11, 2009
Evaluation Summary Report

The Department of Energy (DOE), Office of Nuclear Physics (NP) completed its review on May 11, 2009 of the proposal received from the U.S. Compact Muon Solenoid (CMS) Heavy Ion (HI) Collaboration (CMS HI) for funding of substantial computing resources to be located at the Advanced Computing Center for Research and Education (ACCRE) at Vanderbilt University, and at the Bates Computing Facility at the Massachusetts Institute of Technology (MIT).

The Large Hadron Collider (LHC) at the European Organization for Nuclear Research (CERN) will accelerate heavy ion beams for a period of one month each year. The proposal under review described the computing capabilities that will be needed to store, process, and analyze the heavy ion collision data from the CMS experiment. Computing for the LHC program is organized in a hierarchy of computing facilities, starting with the single Tier-0 facility at CERN, which receives data directly from the experiments, down to numerous Tier-3 end-user analysis stations. After initial processing at CERN, data are distributed to large computer centers (Tier-1) around the world that have sufficient storage capacity for a large fraction of the data and that support the computing grid. Tier-1 centers make the data available to Tier-2 centers for performing specific analysis tasks.

The CMS HI collaboration presented a plan in which the CERN Tier-0 facility will allocate sufficient processing power to perform the real-time detector calibrations, transfer the raw (RAW) data to a dedicated Tier-1 facility at Vanderbilt University (VU), and archive a backup of the RAW data. The RAW data will be processed at VU and converted to reconstructed (RECO) data. The RAW and RECO components, stored together as FEVT data, will form the primary archive. Vanderbilt University will also serve as a Tier-2 facility, where users (Tier-3) will be able to access the processed data for analysis.
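The proposed data flow described above can be sketched schematically. The sketch below is purely illustrative: the function names and stand-in processing are hypothetical and are not taken from the proposal or from CMS software; it only mirrors the described roles (Tier-0 calibrates and keeps a RAW backup, the VU Tier-1 reconstructs RAW into RECO and archives both together as FEVT).

```python
# Illustrative sketch of the proposed CMS HI data flow; all names are
# hypothetical stand-ins, not real CMS software components.

def tier0_process(raw_events):
    """CERN Tier-0: calibrate, keep a backup archive of RAW, forward RAW."""
    backup_archive = list(raw_events)   # backup copy of the RAW data at CERN
    return raw_events, backup_archive

def tier1_reconstruct(raw_events):
    """VU Tier-1: convert RAW to RECO; store RAW + RECO together as FEVT."""
    fevt = [{"raw": ev, "reco": f"reco({ev})"} for ev in raw_events]
    return fevt                          # FEVT forms the primary archive

raw = ["ev1", "ev2", "ev3"]              # toy stand-in for one run's RAW data
raw_out, cern_backup = tier0_process(raw)
fevt = tier1_reconstruct(raw_out)        # Tier-2/Tier-3 users analyze this
```

The essential point of the model is that the primary custodial copy (FEVT) lives at the Tier-1/2 site, with only the RAW backup remaining at CERN.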
This computing model appears to depart from the CERN specifications and requirements described in the CMS Computing Technical Design Report (TDR) referenced in the CMS HI proposal.

In addition to the reconstruction of the RAW data, resources were requested for the production and analysis of Monte Carlo (MC) simulation data. Presently, these activities are performed, on a smaller scale, at the High Energy Physics (HEP) CMS Tier-2 computing center at MIT. The proposal discussed two options for satisfying the simulation requirements. The first and preferred option provided a dedicated small computing cluster sited at the MIT Bates Computing Facility. In the second option, the simulation work is consolidated with the Tier-1/2 computing facility at VU but managed by the MIT collaborators. The cost differential between these two alternatives was shown to be minimal.

The panel recognized the significance and merit of the proposal, in that a dedicated computing resource, roughly of the scope proposed, is essential for the success of the U.S. CMS HI effort.
The strong institutional interest of Vanderbilt University is clearly expressed by the availability of infrastructure at the ACCRE facility and the contributed support of 8.25 FTEs over a 5-year period. The CMS HI Monte Carlo simulation facility in the first option would be housed in a new computing facility at the MIT Bates center. Both institutions are clearly willing to contribute significant resources toward the construction and operation of the respective facilities. The panel noted that most of the simulation studies requested in the 2006 DOE Science Review report were performed at the MIT center and that MIT has experience hosting a CMS Tier-2 center for the high energy physics (HEP) group.

However, the panel believes the CMS HI proposal requires further development and elaboration in several crucial areas. Significant comments were made concerning the integration of the VU computing component within the computing framework of the CMS collaboration and ACCRE, performance and technical specifications, management and operations, workforce requirements, and formal agreements. The main issues that need to be addressed include the following.

Justification of the formal performance requirements of the proposed CMS HI computing center(s) that integrates the CERN Tier-0 obligations, the service-level requirements including end users, and the resources of non-DOE participants and other countries.

The analysis and storage model articulated in the CMS HI Collaboration proposal differs from the one described in the CMS Computing Technical Design Report (TDR [1]). In the TDR, heavy ion data would be partially processed at the CERN Tier-0 center during the heavy ion run. The remainder would be processed at the CERN Tier-0 center during the 4-6 month LHC downtime and possibly at one or more dedicated Tier-2 centers.
It appears that the CERN facility is the primary custodian of the first copy of the RAW and RECO data, since no distinction is drawn by CERN between proton and heavy ion data. On computing capacity, the TDR specifies that the combined capacity of the Tier-0 and Tier-1 centers will be sized such that three complete re-processing passes of the RAW data can be completed each year. At the review presentation, the VU computing center was sized to allow for 1.5 full reconstruction passes per year. A consistent computing strategy, and a plan that specifies and integrates services relating to CERN, needs to be developed.

The relationship between DOE NP supported grid resources (VU and MIT) for heavy ion research and the grid resources available to the larger CMS HEP collaboration needs to be clarified. Also, the formal arrangements with respect to NP-pledged resources to the CERN Worldwide LHC Computing Grid (WLCG) need to be defined.

[1] Document referenced in the proposal as CERN-LHCC
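The gap between the TDR's three annual reprocessing passes and the proposed sizing of 1.5 passes can be made concrete with a back-of-the-envelope estimate. Every number in the sketch below is an illustrative assumption chosen for the example, not a figure taken from the proposal or the TDR; the point it demonstrates is only that required CPU capacity scales linearly with the number of passes.

```python
# Back-of-the-envelope CPU sizing for reprocessing passes.
# ALL numbers below are illustrative assumptions, not proposal figures.
events_per_year = 5.0e8    # assumed HI events recorded in one-month run
sec_per_event   = 100.0    # assumed CPU-seconds to reconstruct one event
window_days     = 180      # assumed processing window (~6-month downtime)
efficiency      = 0.7      # assumed average CPU utilisation

def cores_needed(passes):
    """Cores required to complete `passes` full reprocessings in the window."""
    cpu_seconds  = events_per_year * sec_per_event * passes
    wall_seconds = window_days * 86400 * efficiency
    return cpu_seconds / wall_seconds

# The need scales linearly with passes: meeting the TDR's 3 passes per
# year takes exactly twice the capacity of a farm sized for 1.5 passes.
ratio = cores_needed(3.0) / cores_needed(1.5)
```

Under these assumptions the ratio is exactly 2, i.e. the proposed sizing provides half the reprocessing throughput the TDR specifies, whatever the absolute event rates turn out to be.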
The US CMS HI computing resources should be driven by technical specifications that are independent of specific hardware choices. Well-established performance metrics should be used to quantify the needs in a way that can be mapped to any processor technology the market may be offering over the course of the next few years. Expected improvements in the CPU cost/performance ratio suggest that the requested budget for CPU hardware could be high.

The management, interaction, and coordination model between the Tier-2 center(s) and Tier-3 clients is not well formulated. It will be important to document that user institutions will have sufficient resources to access the VU computing center and how the use of the facility by multiple U.S. and international clients will be managed.

ACCRE should articulate its plans and the facility investments needed to support the development of a custom infrastructure suited to the needs of the US CMS HI collaboration and, conversely, to what extent the US CMS HI collaboration will need to adapt to the existing ACCRE infrastructure (e.g., the use of L-Store at Vanderbilt, when other CMS computing centers use dCache).

The CMS HI proposal provided insufficient information to allow the panel to make a clear recommendation regarding the one- and two-site computing center options. A case should be made beyond stating that a two-center solution might result in more access to opportunistic compute cycles and that the simulation expertise existing in the MIT CMS HI group should not be lost. Based on the (near) cost equality of the two solutions, there was no strong argument for the one-site solution. As MIT is the lead institution for the CMS HI simulation effort, it might be appropriate to divide responsibilities between two sites, provided a well-defined, fully integrated computing plan is presented.

A detailed plan for external oversight of the responsiveness and quality of operation of the computing center(s) should be developed.
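One established way to express hardware-independent CPU specifications is a benchmark unit such as HEP-SPEC06 (HS06), which WLCG sites use to state capacity pledges: the requirement is fixed in benchmark units and then mapped onto whatever processors the market offers at purchase time. The sketch below illustrates the idea; the total requirement and the per-core ratings are purely assumed numbers, and the CPU model names are hypothetical.

```python
# Hardware-independent sizing sketch: fix the requirement in benchmark
# units (e.g. HS06), then map to candidate CPUs at procurement time.
# The requirement and per-core scores below are assumptions for
# illustration, not measured values.
requirement_hs06 = 20000.0          # assumed total CPU requirement in HS06

per_core_hs06 = {                   # assumed per-core benchmark ratings
    "cpu_gen_2009": 8.0,            # hypothetical current-generation core
    "cpu_gen_2011": 12.0,           # hypothetical later, faster core
}

def cores_for(cpu_model):
    """Cores of a given model needed to meet the fixed HS06 requirement."""
    return requirement_hs06 / per_core_hs06[cpu_model]
```

Because the specification is stated in benchmark units rather than core counts, the same requirement automatically translates into fewer (and cheaper) cores as cost/performance improves, which is exactly the budget sensitivity the panel notes.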
Draft Memoranda of Understanding (MoUs) should be prepared between the appropriate parties, separately for the VU and Bates Computing Facilities, that clearly define management, operations, and service-level responsibilities.

CMS HI should further develop its computing operations model, indicating how it will handle allocations, priority allocation of temporarily available CPU capacity to DOE NP supported programs, grid support, etc. If assumptions are being made regarding CMS HEP resources (e.g., grid support), these should be clarified.

The size of the workforce associated with data transport, production, production re-passes, calibration, and Monte Carlo simulation efforts, and the general challenges of running in a grid environment, should be carefully examined and documented.
The Tier-1 quality-of-service (QoS) requirements were specifically developed for the HEP LHC experiments, but they might be relaxed for the CMS HI effort in view of its spooled approach to the grid. CMS HI and the WLCG are encouraged to examine the possibility of tailoring the Tier-1 requirements to reflect the needs of the U.S. heavy ion research community.

DOE Recommendations

The US CMS HI Collaboration is requested to resubmit its computing proposal, together with separate documents if necessary, that responds to the concerns expressed in the reviewers' reports for further evaluation. The resubmission is due to the DOE Office of Nuclear Physics by December 31,
Appendix A: Charge Memorandum

Dear Professor/Dr.:

Thank you for agreeing to participate as a member of a committee to review the proposal received from the U.S. Compact Muon Solenoid Heavy Ion (US CMS HI) Collaboration, requesting support from the Office of Nuclear Physics for substantial computing resources. This review will take place at the Department of Energy (DOE) Headquarters in Germantown on May 11, 2009.

To maintain the best possible program in nuclear physics, it is essential for us to obtain the most highly qualified technical opinions on this proposal. Your contribution is important in this regard, and we welcome your critical evaluation of the US CMS HI computing proposal. In particular, we are interested in your considerations of:

a) The significance and merits of the proposed computing plan for the US CMS HI collaboration;
b) The completeness and feasibility of the US CMS HI computing plans, considering such factors as networking and infrastructure support;
c) The cost effectiveness of the proposed plans and the appropriateness of the size of the requested budget;
d) A critical evaluation of the two alternative solutions; and
e) The resources and interest of the institution(s) at which the computing center(s) might be located, and the contributions of non-DOE supported US CMS HI institutions.

In your evaluations, you should address whether the funding requests are explicitly tied to the accomplishment of annual and long-term facility performance goals. Does the proposed plan incorporate independent evaluations of sufficient scope and quality, conducted on a regular basis, to ensure optimal utilization of the US CMS HI computing center(s)? Has the proposal conducted a credible analysis of alternatives that includes trade-offs between cost, schedule, and performance goals? Please feel free to make comparisons with existing computing facilities or similar proposals with which you are familiar.
The results of this review should establish the scientific need for the computing capabilities and, in turn, clearly defined deliverables, tasks, and capability/performance facility parameters necessary to assure that the science can be accomplished during the first five years of Large Hadron Collider Heavy Ion operations.

This computing review will form the second session of a one-day review, comprising presentations followed by executive discussions, report writing, and a brief close-out. The review will be chaired by Dr. Gulshan Rai, Program Manager for Heavy Ion Nuclear Physics, assisted by Dr. Helmut Marsiske, Program Manager for Instrumentation. You will be asked to write individual letter reports on your evaluation. Your letter report will be held in strictest confidence, so please be candid in your written remarks. Your letter reports will be due to Dr. Rai one week after the conclusion of the review.

An agenda and background material will be sent to you in a later correspondence. If you have any questions about the review, please contact Dr. Rai at (301) or Gulshan.Rai@science.doe.gov. For logistics questions, please contact Brenda May at (301) or Brenda.May@science.doe.gov.

I greatly appreciate your efforts in preparing for this review. It is an important process that allows our office to understand the scientific need for the computing resources. I look forward to a very informative and stimulating review.

Sincerely,

Eugene A. Henry
Acting Associate Director of the Office of Science for Nuclear Physics

Enclosure

cc: Boleslaw Wyslouch, MIT
Richard G. Milner, MIT
Appendix B: Agenda and List of Reviewers

May 11, 2009
Department of Energy, Germantown Headquarters, Maryland
Room E301 Conference Room

8:30 am - Executive Session
9:00 am - Heavy Ion Physics with CMS: Physics Plans and Preparations, G. Roland (<40 min)
10:00 am - Trigger and Data Acquisition, C. Roland (<30 min)
10:45 am - US Participation in CMS Heavy Ion Program, B. Wyslouch (<20 min)
11:30 am - Break + Executive Session + Lunch
1:30 pm - Computing Facility for CMS Heavy Ions in the US, C. Maguire (<40 min)
2:30 pm - Vanderbilt Computing Center ACCRE, A. Tackett (<20 min)
3:20 pm - MIT Computing Center at Bates, B. Wyslouch (<20 min)
3:20 pm - Break
3:30 pm - Q&A, homework questions
3:50 pm - Executive Session + Report Writing
6:30 pm - End of Review
CMS Heavy Ion Review, May 11, 2009

Review Panel Members

Dr. Michael Ernst, Brookhaven National Laboratory, Building 510M, Upton, NY, (631)
Dr. Anthony Frawley, Department of Physics, Florida State University, MC: 4350, Tallahassee, FL, (850)
Dr. Timothy Hallman, Physics Department, Brookhaven National Laboratory, Building 510A, Upton, NY, (631)
Prof. Itzhak Tserruya, Department of Particle Physics, Weizmann Institute of Science, Rehovot, Israel
Dr. William (Chip) Watson, Thomas Jefferson National Accelerator Facility, Jefferson Avenue, Mail Stop 16A, Newport News, VA, (757)

DOE Participants

Dr. Eugene A. Henry, Office of Nuclear Physics, U.S. Department of Energy, SC-26/Germantown Building, 1000 Independence Avenue, Washington, DC, (301)
Dr. Helmut Marsiske, Office of Nuclear Physics, U.S. Department of Energy, SC-26.2/Germantown Building, 1000 Independence Avenue, Washington, DC, (301)
Dr. Gulshan Rai, Office of Nuclear Physics, U.S. Department of Energy, SC-26.1/Germantown Building, 1000 Independence Avenue, Washington, DC, (301)
Dr. Hubert van Hecke, Office of Nuclear Physics, U.S. Department of Energy, SC-26.1/Germantown Building, 1000 Independence Avenue, Washington, DC, (301)
System upgrade and future perspective for the operation of Tokyo Tier2 center, T. Mashimo, N. Matsui, H. Sakamoto and I. Ueda International Center for Elementary Particle Physics, The University of Tokyo
More informationComputing / The DESY Grid Center
Computing / The DESY Grid Center Developing software for HEP - dcache - ILC software development The DESY Grid Center - NAF, DESY-HH and DESY-ZN Grid overview - Usage and outcome Yves Kemp for DESY IT
More informationDIAL: Distributed Interactive Analysis of Large Datasets
DIAL: Distributed Interactive Analysis of Large Datasets D. L. Adams Brookhaven National Laboratory, Upton NY 11973, USA DIAL will enable users to analyze very large, event-based datasets using an application
More informationInvitation to Participate
Invitation to Participate 2009 For the lead agency participation in New York State Quality Rating and Improvement System -- QUALITYstarsNY field test Page 1 of 7 101 West 31 st Street, 7th Floor New York,
More informationTHE WHITE HOUSE. Office of the Press Secretary. For Immediate Release September 23, 2014 EXECUTIVE ORDER
THE WHITE HOUSE Office of the Press Secretary For Immediate Release September 23, 2014 EXECUTIVE ORDER - - - - - - - CLIMATE-RESILIENT INTERNATIONAL DEVELOPMENT By the authority vested in me as President
More informationComputing Resources Scrutiny Group
CERN RRB 17 056 April 17 Computing Resources Scrutiny Group C Allton (UK), V Breton (France), G Cancio Melia (CERN), A Connolly(USA), M Delfino (Spain), F Gaede (Germany), J Kleist (Nordic countries),
More informationA distributed tier-1. International Conference on Computing in High Energy and Nuclear Physics (CHEP 07) IOP Publishing. c 2008 IOP Publishing Ltd 1
A distributed tier-1 L Fischer 1, M Grønager 1, J Kleist 2 and O Smirnova 3 1 NDGF - Nordic DataGrid Facilty, Kastruplundgade 22(1), DK-2770 Kastrup 2 NDGF and Aalborg University, Department of Computer
More information26 February Office of the Secretary Public Company Accounting Oversight Board 1666 K Street, NW Washington, DC
3701 Algonquin Road, Suite 1010 Telephone: 847.253.1545 Rolling Meadows, Illinois 60008, USA Facsimile: 847.253.1443 Web Sites: www.isaca.org and www.itgi.org 26 February 2007 Office of the Secretary Public
More informationStatus of KISTI Tier2 Center for ALICE
APCTP 2009 LHC Physics Workshop at Korea Status of KISTI Tier2 Center for ALICE August 27, 2009 Soonwook Hwang KISTI e-science Division 1 Outline ALICE Computing Model KISTI ALICE Tier2 Center Future Plan
More informationData Management for the World s Largest Machine
Data Management for the World s Largest Machine Sigve Haug 1, Farid Ould-Saada 2, Katarina Pajchel 2, and Alexander L. Read 2 1 Laboratory for High Energy Physics, University of Bern, Sidlerstrasse 5,
More informationPROJECT FINAL REPORT. Tel: Fax:
PROJECT FINAL REPORT Grant Agreement number: 262023 Project acronym: EURO-BIOIMAGING Project title: Euro- BioImaging - Research infrastructure for imaging technologies in biological and biomedical sciences
More information1- ASECAP participation
EETS POSITION PAPER ASECAP is the European Association of Operators of Toll Road Infrastructures, whose members networks today span more than 50,266 km of motorways, bridges and tunnels across 22 countries.
More informationTracking and flavour tagging selection in the ATLAS High Level Trigger
Tracking and flavour tagging selection in the ATLAS High Level Trigger University of Pisa and INFN E-mail: milene.calvetti@cern.ch In high-energy physics experiments, track based selection in the online
More informationMonitoring of Computing Resource Use of Active Software Releases at ATLAS
1 2 3 4 5 6 Monitoring of Computing Resource Use of Active Software Releases at ATLAS Antonio Limosani on behalf of the ATLAS Collaboration CERN CH-1211 Geneva 23 Switzerland and University of Sydney,
More informationFrom raw data to new fundamental particles: The data management lifecycle at the Large Hadron Collider
From raw data to new fundamental particles: The data management lifecycle at the Large Hadron Collider Andrew Washbrook School of Physics and Astronomy University of Edinburgh Dealing with Data Conference
More informationRequest for Proposals. The National Center for State Courts (NCSC) requests proposals for the. Graphic Design of Trends in State Courts 2018
Request for Proposals The National Center for State Courts (NCSC) requests proposals for the Graphic Design of Trends in State Courts 2018 A six-month contract beginning January 15, 2018 Date of RFP Release:
More informationPROOF-Condor integration for ATLAS
PROOF-Condor integration for ATLAS G. Ganis,, J. Iwaszkiewicz, F. Rademakers CERN / PH-SFT M. Livny, B. Mellado, Neng Xu,, Sau Lan Wu University Of Wisconsin Condor Week, Madison, 29 Apr 2 May 2008 Outline
More informationCan We Reliably Benchmark HTA Organizations? Michael Drummond Centre for Health Economics University of York
Can We Reliably Benchmark HTA Organizations? Michael Drummond Centre for Health Economics University of York Outline of Presentation Some background Methods Results Discussion Some Background In recent
More informationMEETING AGENDA March Final Draft
MEETING AGENDA 5 th IPHE Steering Committee Meeting 28 29 March 2006 Final Draft Morris J. Wosk Centre for Dialogue & Delta Vancouver Suites 550 West Hastings Street, Vancouver, BC V6B 1L6 Tel: 604.689.8188
More informationTravelling securely on the Grid to the origin of the Universe
1 Travelling securely on the Grid to the origin of the Universe F-Secure SPECIES 2007 conference Wolfgang von Rüden 1 Head, IT Department, CERN, Geneva 24 January 2007 2 CERN stands for over 50 years of
More informationThe Computation and Data Needs of Canadian Astronomy
Summary The Computation and Data Needs of Canadian Astronomy The Computation and Data Committee In this white paper, we review the role of computing in astronomy and astrophysics and present the Computation
More informationPOSTGRADUATE CERTIFICATE IN LEARNING & TEACHING - REGULATIONS
POSTGRADUATE CERTIFICATE IN LEARNING & TEACHING - REGULATIONS 1. The Postgraduate Certificate in Learning and Teaching (CILT) henceforth the Course - comprises two modules: an Introductory Certificate
More informationALICE Run3/Run4 Computing Model simulation software
ALICE Run3/Run4 Computing Model simulation software Armenuhi.Abramyan, Narine.Manukyan Alikhanyan National Science Laboratory (Yerevan Physics Institute) @cern.ch Outline O2 Computing System upgrade program
More informationImplementing Online Calibration Feed Back Loops in the Alice High Level Trigger
Implementing Online Calibration Feed Back Loops in the Alice High Level Trigger Oliver Berroteran 2016-08-23 Supervisor: Markus Fasel, CERN Abstract The High Level Trigger (HLT) is a computing farm consisting
More informationMuon Collider Input ideas and proposals for the European Strategy Update
Padova July 3, 2018 Muon Collider Input ideas and proposals for the European Strategy Update Nadia Pastrone Enrico Fermi - American Physical Society, NY, Jan. 29th 1954 What can we learn with High Energy
More informationReprocessing DØ data with SAMGrid
Reprocessing DØ data with SAMGrid Frédéric Villeneuve-Séguier Imperial College, London, UK On behalf of the DØ collaboration and the SAM-Grid team. Abstract The DØ experiment studies proton-antiproton
More informationChallenges of the LHC Computing Grid by the CMS experiment
2007 German e-science Available online at http://www.ges2007.de This document is under the terms of the CC-BY-NC-ND Creative Commons Attribution Challenges of the LHC Computing Grid by the CMS experiment
More informationOrganization/Office: Secretariat of the United Nations System Chief Executives Board for Coordination (CEB)
United Nations Associate Experts Programme TERMS OF REFERENCE Associate Expert (JPO) INT-021-14-P014-01-V I. General Information Title: Associate Expert in Interagency Coordination / Special to the Director
More informationReport on Collaborative Research for Hurricane Hardening
Report on Collaborative Research for Hurricane Hardening Provided by The Public Utility Research Center University of Florida To the Utility Sponsor Steering Committee January 2010 I. Introduction The
More informationTIER Program Funding Memorandum of Understanding For UCLA School of
TIER Program Funding Memorandum of Understanding For UCLA School of This Memorandum of Understanding is made between the Office of Information Technology (OIT) and the School of ( Department ) with reference
More informationToronto Hydro Response to December 2013 Ice Storm Independent Review Panel Report
Toronto Hydro Response to December 2013 Ice Storm Independent Review Panel Report Media Briefing Toronto, ON June 18, 2014 Part 1: Introduction David McFadden Chair, Independent Review Panel 3 Independent
More informationThe CMS data quality monitoring software: experience and future prospects
The CMS data quality monitoring software: experience and future prospects Federico De Guio on behalf of the CMS Collaboration CERN, Geneva, Switzerland E-mail: federico.de.guio@cern.ch Abstract. The Data
More informationADC 329 Use of Borrowed and Migration Codes in DLMS Supplements
ADC 329 Use of Borrowed and Migration Codes in DLMS Supplements 1. ORIGINATING SERVICE/AGENCY AND POC INFORMATION: a. System POC: Department of Defense (DoD) Defense Automatic Addressing System Center
More informationHigh Throughput WAN Data Transfer with Hadoop-based Storage
High Throughput WAN Data Transfer with Hadoop-based Storage A Amin 2, B Bockelman 4, J Letts 1, T Levshina 3, T Martin 1, H Pi 1, I Sfiligoi 1, M Thomas 2, F Wuerthwein 1 1 University of California, San
More information