CMS Computing Model with Focus on German Tier1 Activities
1 CMS Computing Model with Focus on German Tier1 Activities. Seminar on Data Processing in High Energy Physics (Seminar über Datenverarbeitung in der Hochenergiephysik), DESY Hamburg.
2 Overview: The Large Hadron Collider; The Compact Muon Solenoid; CMS Computing Model; CMS in Germany; CMS Workflow & Data Transfers; CMS Service Challenges; Monitoring; HappyFace Project.
3 Large Hadron Collider: ALICE, CMS, ATLAS, LHCb. Proton-proton collider; circumference: 27 km; beam energy: 7 TeV; depth below surface: 100 m; magnet temperature: -271 °C; energy use: 1 TWh/a. Four large experiments: CMS (general-purpose), ATLAS (general-purpose), LHCb (b-quark physics), ALICE (lead-ion collisions); two smaller experiments: LHCf and TOTEM.
4 Compact Muon Solenoid: muon chambers, hadronic calorimetry, magnetic return yoke, silicon tracker, superconducting solenoid, electromagnetic calorimetry, forward calorimetry. Technical details: total weight 12,500 t; diameter 15 m; total length 21.5 m; magnetic field 4 Tesla in the solenoid, 2 Tesla in the return yoke. Readout channels, e.g. tracker: 10 million. Collision rate: 40 MHz.
5 Pictures of CMS
6 CMS Collaboration: 38 nations, 182 institutions, > 2000 scientists & engineers.
7 Physics Motivation: test of the Standard Model (at the TeV energy scale); search for the Higgs boson; physics beyond the SM (e.g. SUSY, extra dimensions, ...). In each proton-proton collision, more than 1000 particles are created; the decay products allow conclusions to be drawn about the underlying physics processes of the collision.
8 Trigger and Event Rate: collision rate 40 MHz with an event size of 1.5 MB, corresponding to 60 TB/sec off the detector. The Level 1 trigger (hardware reduction with ASICs) cuts this to 150 GB/sec; the High Level Trigger (software data reduction) selects 150 recorded events per second, i.e. 225 MB/sec written to tape & HDD storage for offline analysis.
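These numbers are consistent with simple rate × event-size arithmetic (a cross-check, not on the slide):

    40 MHz × 1.5 MB  = 4×10^7 /s × 1.5×10^6 B ≈ 60 TB/sec    (raw detector output)
    150 GB/sec ÷ 1.5 MB ≈ 10^5 events/sec                     (event rate after Level 1)
    150 /sec × 1.5 MB = 225 MB/sec                            (rate to mass storage)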
9 Expected Resources (plots: projected CPU [MSI2k], disk space [TB] and MSS space [TB] versus year).
10 CMS Computing Model. The LHC experiments have decided to use distributed computing and storage resources: the Worldwide LHC Computing Grid (WLCG). Grid services are based on the gLite middleware. Computing centres are arranged in a four-tiered hierarchical structure. Availability and resources are regulated by MoUs (e.g. downtime per year, response time, etc.).
11 CMS Tier Structure
12 WLCG Resources (screenshot taken from the WLCG Real Time Monitor).
13 Main Tier Responsibilities. Tier0: storage of RAW detector data; first reconstruction of physics objects after data-taking. Tier1: host one dedicated copy of RAW and reconstructed data outside the Tier0; re-processing of stored data; skimming (creation of small sub data samples). Tier2: Monte Carlo production/simulation; calibration activities; resources for physics group analyses. Tier3: individual user analyses; interactive logins (for development & debugging).
14 German CMS Members: Uni Hamburg, DESY Hamburg/Zeuthen, RWTH Aachen, Forschungszentrum Karlsruhe, Uni Karlsruhe; FSP-CMS (BMBF).
15 Tier1 GridKa: located at Forschungszentrum Karlsruhe (KIT, Campus Nord); part of and operated by the Steinbuch Centre for Computing (SCC, KIT). Multi-VO Tier centre supporting 8 experiments: 4 LHC experiments (CMS, ATLAS, ALICE, LHCb) and 4 non-LHC experiments (CDF, D0, BaBar, COMPASS).
16 GridKa Resources (2008). Network: 10 Gbit link CERN-GridKa; 3 × 10 Gbit links to 3 European Tier1s; 10 Gbit link to DFN/X-Win (e.g. Tier2 connections). CPU: CMS resources ~800 cores with 2 GB RAM per core; D-Grid resources ~500 cores with 2 GB RAM per core. Storage: based on dCache (developed at DESY and FNAL); CMS disk ~650 TB plus D-Grid ~40 TB; CMS tape ~900 TB.
17 Tier2/3 and NAF Resources. Besides the Tier1 resources, considerable CPU and disk capacities are available in Germany (August 2008). Tier2 (German CMS Tier2 federation): DESY 400 cores, 185 TB disk; RWTH Aachen 360 cores, 99 TB disk. Tier3: RWTH Aachen 850 cores, 165 TB disk; Uni Karlsruhe 500 cores, 150 TB disk; Uni Hamburg 15 TB disk. NAF (National Analysis Facility): DESY 245 cores, 32 TB disk. D-Grid resources, partially usable for FSP-CMS: RWTH Aachen 600 cores, 231 TB disk; GridKa 500 cores, 40 TB disk.
18 CMS Workflow (diagram: workflow stages 1, 2 and 3 connected by data transfers).
19 PhEDEx Data Transfers. Physics Experiment Data Export, the CMS data placement tool: various agents/daemons run at the tier sites (depending on the tier level): upload, download, MSS stage-in/out, DB, ... It provides automatic WAN data transfers based on subscriptions; automatic load-balancing; file transfer routing topology (FTS); automatic bookkeeping (database entries, logging); consistency checks (DB vs. filesystem); file integrity checks.
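To make the subscription model concrete, here is a minimal toy sketch of a site-local download agent (purely illustrative; all names are invented, and PhEDEx itself is a set of Perl agents backed by a central Oracle database):

    # Toy sketch of a subscription-driven download agent (not PhEDEx code).
    TMDB = {  # central bookkeeping: block -> files and subscribed sites
        "/Cosmics/RAW#block1": {"files": ["f1.root", "f2.root"], "subs": {"T1_DE_FZK"}},
        "/Cosmics/RAW#block2": {"files": ["f3.root"], "subs": {"T1_US_FNAL"}},
    }
    local_replicas = set()  # files already present on this site's storage

    def download_cycle(site):
        """One agent cycle: transfer every missing file of every block
        subscribed to this site, then record the replica (bookkeeping)."""
        for block, info in TMDB.items():
            if site not in info["subs"]:
                continue  # not our subscription
            for lfn in info["files"]:
                if lfn not in local_replicas:
                    print(f"transfer {lfn} ({block}) -> {site}")  # real agent: FTS/SRM copy
                    local_replicas.add(lfn)

    download_cycle("T1_DE_FZK")  # a daemon would repeat this periodically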
20 Other Involved Components: Monte Carlo Production Agent (ProdAgent); CMS Remote Analysis Builder (CRAB); Dataset Bookkeeping System (DBS); Data Location Service (DLS); Trivial File Catalog (TFC); Grid middleware (gLite): SE, CE, RB, UI, ... DBS: relates dataset/block and logical file name (LFN). DLS: relates dataset/block and site. TFC: relates LFN and local file name. A very complex system, which needs intense testing, debugging and optimisation.
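The TFC itself is just a small set of rules mapping logical file names to site-local physical ones. A minimal sketch of the idea in Python (the rules and paths are made up for this example, not GridKa's actual catalog):

    import re

    # Illustrative lfn-to-pfn rules in the spirit of the Trivial File Catalog:
    # (protocol, LFN pattern, PFN template). Hostnames/paths are placeholders.
    RULES = [
        ("srmv2", r"/store/(.*)", "srm://srm.example.de/pnfs/example.de/cms/store/{0}"),
        ("file",  r"/store/(.*)", "/storage/cms/store/{0}"),
    ]

    def lfn_to_pfn(lfn, protocol):
        """Apply the first matching rule for the requested access protocol."""
        for proto, pattern, template in RULES:
            m = re.match(pattern, lfn)
            if proto == protocol and m:
                return template.format(m.group(1))
        raise LookupError(f"no TFC rule for {lfn} with protocol {protocol}")

    print(lfn_to_pfn("/store/data/Run2008/Cosmics/RAW/f1.root", "file"))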
21 WLCG Job Workflow (diagram): a job travels from the User Interface via WMProxy into the gLite WMS (task queue, workload manager, match maker with information supermarket, job controller/CondorG) and on to a Computing Element for execution; the Information Service/Information System supplies resource status for matchmaking, and grid-enabled data transfer & access connects the running job to the Storage Element.
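For illustration, a job enters this chain as a gLite JDL description submitted from the User Interface (e.g. with glite-wms-job-submit); a minimal sketch, with all file names being placeholders:

    // analysis.jdl - minimal gLite job description (illustrative example)
    Executable    = "run_analysis.sh";
    Arguments     = "dataset.list";
    StdOutput     = "job.out";
    StdError      = "job.err";
    InputSandbox  = {"run_analysis.sh", "dataset.list"};
    OutputSandbox = {"job.out", "job.err"};
    Requirements  = other.GlueCEPolicyMaxWallClockTime > 720;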
22 CMS Service Challenges - 1. Computing, Software and Analysis (CSA) challenges test the readiness for data-taking: CSA06, CSA07 and CSA08 were performed to exercise the computing and network infrastructure and the computing resources at 25%, 50% and 100% of the level required for LHC start-up. The whole CMS workflow was tested: production/simulation; prompt reconstruction at the Tier0; re-reconstruction at the Tier1s; data distribution to Tier1s and Tier2s; data analyses.
23 CMS Service Challenges - 2. Cumulative data volume transferred during CSA08: > 2 PetaByte in 4 weeks. Transfer rate from the Tier0 to the Tier1s: < 600 MB/sec, so this goal was achieved only partially; the problems with the storage systems (CASTOR at CERN) were identified and fixed afterwards.
24 CMS Service Challenges - 3. CMS Grid-job statistics, May to August 2008: the German centres GridKa, DESY and Aachen performed extremely well compared to all other CMS tier sites.
25 Results of CMS Challenges:

    Service                      | Goal 2008              | Status 2008        | Goal 2007             | Status 2007        | Goal 2006   | Status 2006
    Tier-0 reco rate             | Hz                     | Achieved           | 100 Hz                | Only at bursts     | 50 Hz       | Achieved
    Tier-0 → Tier-1 transfer     | 600 MB/sec             | Achieved partially | 300 MB/sec            | Only at bursts     | 150 MB/sec  | Achieved
    Tier-1 → Tier-2 transfer     | MB/sec                 | Achieved           | MB/sec                | Achieved partially | MB/sec      | Achieved
    Tier-1 → Tier-1 transfer     | 100 MB/sec             | Achieved           | 50 MB/sec             | Achieved partially | N/A         | -
    Tier-1 job submission        | jobs/day               | Achieved           | jobs/day              | Achieved           | jobs/day    | jobs/day
    Tier-2 job submission        | jobs/day               | Achieved           | jobs/day              | jobs/day           | jobs/day    | Achieved
    Monte Carlo simulation       | 1.5 × 10^9 events/year | Achieved           | 50 × 10^6 events/month| Achieved           | N/A         | -
26 First Real CMS Data (plot: data imports to GridKa of cosmic-muon samples per day, at the 10-20 TB/day scale): stable running over large periods of time. First real beam event: LHC beam hitting collimators near CMS (Sept. 10th, 2008). GridKa was the first CMS tier centre outside CERN with real beam data.
27 Monitoring - Overview. Distributed Grid resources are complex systems; to provide a stable and reliable service, monitoring of the different services and resources is indispensable. Monitoring is used to discover failures before the actual services are affected. Different types of monitoring: central monitoring of all Tier centres (status of the high-level grid services and their interconnection; availability of the physical resources of the Tier centres); central monitoring of experiment-specific components; local monitoring at the different computing sites; local monitoring of experiment-specific components. Many different monitoring instances for different purposes!
28 Grid Monitoring. Central WLCG monitoring: Service Availability Monitoring (SAM). Standardised Grid jobs test the basic grid functionality of a Tier centre, covering all Grid services of a site; they are centrally sent to the Tiers (once per hour), and the output is available on a web page with history. Once a critical SAM test has failed on a site, no more user jobs will be sent to it.
29 Experiment Monitoring. Besides the basic grid functionality, experiment-specific components and functionalities are tested; the access point to the results is the CMS ARDA Dashboard. SAM: also tests dedicated functionalities of the Tier centres requested by CMS. Job summary: type of each job and its exit code; history per centre and activity. PhEDEx data transfers: transfer quality for CMS datasets; collects error messages in case of failures; gives an overview of the transferred data samples available at the different sites.
30 Local Site Monitoring. Each computing centre has its own monitoring infrastructure (Nagios, Ganglia, ...), offering monitoring for hardware, services and network connections, adapted to the local particularities, with automated notification in case of failures by e-mail, instant messages, SMS, ... Experiment-specific local monitoring: is the local software installation accessible? Are all required services properly configured?
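As an illustration of what such a site-local check looks like, a Nagios service definition of this kind might watch a dCache door (host and command names are invented for the example, not GridKa's actual configuration):

    # Illustrative Nagios object definition; host_name and port are placeholders.
    define service {
        use                  generic-service
        host_name            dcache-head01
        service_description  dCache SRM door
        check_command        check_tcp!8443
        check_interval       5
        notification_options w,c,r
    }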
31 USCHI: Ultra Sophisticated CMS Hardware Information System. It provides specific tests of the local infrastructure of a site, mainly driven by previously observed problems, and is designed in a modular way to ease the implementation of new tests. Currently, the following tests run locally at GridKa: availability of the CMS production software area; sufficient free inodes on the file system; basic grid functionality (e.g. user interface commands); FTS transfer channels (are they enabled, working, etc.); remaining space in the disk-only area; status of the dCache tape-write buffers; functionality of basic VOBox commands; PhEDEx grid proxy validity. The test output is provided in XML format (see the sketch below). Developed in Karlsruhe by A. Scheurer and A. Trunov.
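The XML schema is not given on the slide; a sketch of what a modular test report of this kind could look like (element and attribute names are hypothetical):

    <?xml version="1.0" encoding="UTF-8"?>
    <!-- Hypothetical USCHI-style report; all tag names are illustrative only. -->
    <uschi site="GridKa" timestamp="2008-11-17T12:00:00Z">
      <test name="cms_sw_area"   status="ok"/>
      <test name="free_inodes"   status="ok"      detail="12% used"/>
      <test name="fts_channels"  status="warning" detail="one channel disabled"/>
      <test name="phedex_proxy"  status="ok"      detail="valid for 41 h"/>
    </uschi>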
32 Multi-Monitoring Drawbacks. The current monitoring infrastructure is characterised by a multitude of different sources to query for information about one single site. Its limitations: information is not clearly arranged (too much information unrelated to the site at hand; different site designs and history plots / formats / technologies; nearly impossible to correlate information from different sources) and it is uncomfortable to use (many different websites must be visited to gather all needed information about one site; different web interfaces require parameters such as time range and site name; downloading / updating plots often takes a long time, > 1 minute!). This increases the administrative effort and slows down the identification of errors. A meta-monitoring system can help to improve this situation!
33 Ideas for Meta-Monitoring: only one website to access all relevant information for one site; a simple warning system (with symbols); a single point of access for all relevant information (and its history); caching of important external plots for fast access; a backend for the implementation of custom tests; easy implementation of new tests; self-defined rules for service status and the triggering of warning alerts; correlation of data from different sources. Fast, simple and up to date: all monitoring information renewed every 15 minutes; short access / loading times of the web page; easy to install and use, with no complex database; access to every test result in fewer than 3 mouse clicks; links to the raw information at the data source. The meta-monitoring HappyFace Project offers this functionality.
34 HappyFace Project
35 HappyFace Project - 2. Our HappyFace instance currently monitors the following quantities at GridKa: local CMS infrastructure (the USCHI test-set); incoming/outgoing PhEDEx transfers (overall transfer quality; classification of error types; classification of destination and source errors); status of the WLCG production environment (queries SAM test results and displays any failed tests, including a hyperlink to the error report); CMS Dashboard (searches the job summary service for failed jobs and prints statistics of the exit codes); fair share of CMS (displays the fraction of running CMS jobs); caching of all plots necessary for monitoring.
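A meta-monitoring module of this kind boils down to: fetch a source, apply a self-defined rating rule, and publish a status for the web page. A minimal sketch of that pattern (class and method names are illustrative, not the actual HappyFace API):

    import urllib.request

    class TransferQualityModule:
        """Toy meta-monitoring module: rate one site's transfer quality."""
        def __init__(self, source_url, warn=0.9, crit=0.7):
            self.source_url = source_url  # external monitoring source to cache
            self.warn, self.crit = warn, crit

        def fetch(self):
            # A real module would cache plots/CSV from e.g. the PhEDEx web interface.
            with urllib.request.urlopen(self.source_url) as resp:
                return float(resp.read())  # assume the source returns a quality ratio

        def rate(self, quality):
            # Self-defined rule mapping a measurement to a traffic-light status.
            if quality >= self.warn:
                return "happy"
            return "neutral" if quality >= self.crit else "unhappy"

    m = TransferQualityModule("http://example.org/quality")  # hypothetical source
    print(m.rate(0.95))  # -> "happy"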
36 HappyFace Project - 3. Instances of HappyFace are currently used by: Uni Karlsruhe, RWTH Aachen, Uni Hamburg, Uni Göttingen. Code and documentation available under: Developed in Karlsruhe by V. Buege, V. Mauch, A. Trunov. Future plans: a platform to share HappyFace modules; standardised XML output from all HappyFace instances. Contact: Viktor Mauch: mauch@ekp.uni-karlsruhe.de; Volker Buege: volker.buege@cern.ch
37 Summary. CMS uses Grid technology for data access and processing; multiple service challenges have proven the readiness for first data. PhEDEx is used for reliable large-scale data management. Experience has shown that monitoring is indispensable for the operation of such a heterogeneous and complex system; a multitude of monitoring services is available, but they are uncomfortable to use. The meta-monitoring HappyFace Project is very useful for the daily administration of services, e.g. at GridKa.
38 Backup
39 GridKa Resource Requests (table; *numbers for 2013 uncertain, **numbers for 2014 unofficial).