Distributed Monte Carlo Production for DØ
1 Distributed Monte Carlo Production for DØ. Joel Snow, Langston University. DOE Review, March 2011.
2 Outline: Introduction, FNAL SAM, SAMGrid, Interoperability with OSG and LCG, Production System, Production Results, LUHEP Computing, Summary.
3 Introduction: This talk covers my tenure as MC production coordinator. Simulation data (MC) is crucial to physics analysis. Tevatron luminosity, and hence the raw data volume, is at record levels, a challenge for both analysts and production. Personnel and computing resources are migrating to the LHC experiments. The DZero strategy: increase automation, and leverage outside resources and support.
4 Evolution: DZero is a mature experiment, but with a nimble history of adopting innovative technologies: distributed data handling (SAM) and early adoption of the grid for production (SAMGrid), with significant investment in both. Grid technology allows opportunistic usage, so DZero can mix traditional dedicated resources with opportunistic ones. Grid interoperability leverages outside resources and support, reducing the personnel needed per CPU hour.
5 Sequential data Access via Metadata (SAM): the Fermilab distributed data handling system, first used by DZero, which predates the grid. A set of servers works together to store and retrieve files and metadata, backed by permanent storage and local disk caches. A database tracks file locations, file metadata, and job processing history. SAM delivers files to jobs (using GridFTP over the WAN) and provides job submission capabilities.
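The delivery step can be pictured concretely. Below is a minimal Python sketch of the kind of GridFTP fetch SAM performs when populating a local disk cache, assuming the standard Globus client globus-url-copy is installed; the server URL and cache path are placeholders, not actual SAM configuration.

```python
import subprocess
from pathlib import Path

def fetch_to_cache(gridftp_url: str, cache_dir: str) -> Path:
    """Copy one file from permanent storage into a local disk cache
    over GridFTP, the transport SAM uses for WAN file delivery."""
    dest = Path(cache_dir) / Path(gridftp_url).name
    # globus-url-copy is the standard Globus GridFTP client; this is a
    # stand-in for the transfer a SAM station performs, not SAM code.
    subprocess.run(["globus-url-copy", gridftp_url, f"file://{dest}"],
                   check=True)
    return dest

# Placeholder URL and cache path, for illustration only.
local = fetch_to_cache("gsiftp://storage.example.gov/pnfs/d0/mc/sample.root",
                       "/var/sam/cache")
```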
6 SAMGrid: the Fermilab-developed grid, first used by DZero for global MC production in 2004. SAMGrid = SAM + the Job and Information Management (JIM) components. It provides the user with transparent remote job submission, data processing, and status monitoring. It is VDT based (Globus + Condor) and logically consists of multiple execution sites, a resource selector, multiple job submission (scheduler) sites, and multiple clients (user interfaces) to the submission sites.
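The resource selector's job is matchmaking: pick an execution site whose advertised capabilities satisfy the job. The toy Python model below illustrates that idea only; the site names, attributes, and selection rule are invented for illustration, not SAMGrid internals.

```python
from typing import Optional

# Invented site adverts, in the spirit of Condor ClassAd matchmaking.
SITES = [
    {"name": "siteA", "free_slots": 120, "sam_station": True},
    {"name": "siteB", "free_slots": 0,   "sam_station": True},
    {"name": "siteC", "free_slots": 300, "sam_station": False},
]

def select_site(min_slots: int, needs_station: bool) -> Optional[dict]:
    """Return the matching site with the most free slots, or None."""
    matches = [s for s in SITES
               if s["free_slots"] >= min_slots
               and (s["sam_station"] or not needs_station)]
    return max(matches, key=lambda s: s["free_slots"], default=None)

print(select_site(min_slots=50, needs_station=True))  # -> siteA
```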
7 SAMGrid Interoperability: as the Open Science Grid (OSG) and the LHC Computing Grid (LCG) became operational, it became desirable to leverage these resources for DZero, so FNAL and DZero developed and deployed SAMGrid interoperability with both LCG and OSG. An execution site acts as a forwarding node that packages SAMGrid jobs for OSG/LCG submission via Condor-G.
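Concretely, the forwarding node hands the packaged job to Condor-G as a grid-universe submit description. The sketch below shows that mechanism from Python, assuming condor_submit is on the path; the gatekeeper contact string and file names are placeholders, not DZero's actual configuration.

```python
import subprocess
import tempfile
import textwrap

# A minimal Condor-G (grid universe) submit description: the mechanism a
# forwarding node uses to re-submit a packaged SAMGrid job to an OSG/LCG
# gatekeeper.  All names below are placeholders.
submit = textwrap.dedent("""\
    universe      = grid
    grid_resource = gt2 osg-gw.example.edu/jobmanager-condor
    executable    = run_d0_mc.sh
    transfer_input_files = mc_request.tar.gz
    output        = mc.out
    error         = mc.err
    log           = mc.log
    queue
""")

with tempfile.NamedTemporaryFile("w", suffix=".sub", delete=False) as f:
    f.write(submit)
    subfile = f.name

subprocess.run(["condor_submit", subfile], check=True)
```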
8 Consolidation, Automation, Exploitation: SAMGrid sites require operational manpower and expert support, and both people and FNAL support are migrating to the LHC experiments. The response: increase automation (AutoMC), and reduce the number of SAMGrid sites while increasing use of OSG and LCG, which come with their own support and provide opportunistic job slots.
9 Production System: MC production gets work from the SAM Request System. Physics groups' MC requests are parametrized and prioritized as Python objects.
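The talk does not show the request class itself, so the sketch below is hypothetical: a guess at what a parametrized, prioritized request object could look like, with invented field names.

```python
# Hypothetical sketch of an MC request object; field names are invented.
class MCRequest:
    def __init__(self, request_id, process, generator,
                 events_requested, priority):
        self.request_id = request_id        # request-system ticket
        self.process = process              # physics process to simulate
        self.generator = generator          # e.g. "pythia" or "alpgen"
        self.events_requested = events_requested
        self.events_done = 0
        self.priority = priority            # higher priority runs first

    def remaining(self):
        return self.events_requested - self.events_done

# Production works requests off in priority order:
requests = [MCRequest(101, "ttbar", "pythia", 2_000_000, priority=5),
            MCRequest(102, "Z->mumu", "alpgen", 500_000, priority=8)]
queue = sorted(requests, key=lambda r: r.priority, reverse=True)
```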
10 Automatic Monte Carlo Request Processing: the AutoMC system, developed and in use at FNAL, handles official DZero MC production at all but 2 sites, carrying each approved request through to final data storage. It is easy to use and minimizes manpower needs. It is site independent, deployable at any grid site (SAMGrid, OSG, LCG), and capable of managing many sites. It handles recovery of common failures and integrates with the existing MC request priority protocol.
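"From approved request to final data storage" with automatic failure recovery suggests an unattended polling loop. The Python sketch below is only a cartoon of such a loop, with every function stubbed out so it runs; none of it is the actual AutoMC code.

```python
import random

MAX_RETRIES = 3

def fetch_approved_requests():
    # Stub: the real system pulls from the SAM Request System.
    return [{"id": 101, "events": 2_000_000}]

def submit(site, request):
    return {"site": site, "request": request, "retries": 0}

def poll(job):
    # Stub: the real system queries the grid job's status.
    return random.choice(["running", "failed", "done"])

def automc_cycle(sites, jobs_in_flight):
    for request in fetch_approved_requests():
        jobs_in_flight.append(submit(sites[0], request))  # any grid flavor
    for job in list(jobs_in_flight):
        status = poll(job)
        if status == "failed" and job["retries"] < MAX_RETRIES:
            job["retries"] += 1         # recover common failures by retrying
        elif status == "done":
            print("storing output of request", job["request"]["id"])
            jobs_in_flight.remove(job)  # final data storage would go here

jobs = []
automc_cycle(["osg-site", "lcg-site"], jobs)  # rerun periodically, unattended
```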
11 AutoMC Monitoring: running at FNAL and managing production at 39 sites.
12 Production System Resources: MC production uses a variety of dedicated and opportunistic resources on 4 continents. A non-grid site at CCIN2P3 Lyon (FR) is very productive and flexible. Native SAMGrid sites: FZU (CZ), GridKa (DE), LUHEP (US), USTC (CN). LCG resources: CEs, SEs, and SAMGrid-LCG infrastructure in FR, UK, and NL. OSG resources: CEs, SEs, and SAMGrid-OSG infrastructure in the US.
13 MC Production Results: over the last 30 days, production averaged 5.8M events per day and totaled 172.8M events.
14 MC Production Results: over the last year (2010/02/15 to 2011/02/14; plot cumulative since September), production averaged 49M events per week and totaled 2.6B events.
15 MC Production Results, last year by production segment. 52-week averages per week (2010/02/15 to 2011/02/14): Non-grid 19.8M, OSG 11.4M, SAMGrid 12.6M, LCG 4.9M.
16 MC Production Results, last year by production segment (chart: Production Last Year By Segment, cumulative since September 2005). 52-week totals (2010/02/15 to 2011/02/14): Non-grid 1041M (40.8%), OSG 596M (23.3%), SAMGrid 658M (25.8%), LCG 257M (10.1%).
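The segment totals and percentages on this slide hang together arithmetically; a quick check in Python:

```python
# Consistency check of the 52-week segment totals (millions of events).
totals = {"Non-grid": 1041, "OSG": 596, "SAMGrid": 658, "LCG": 257}
grand = sum(totals.values())            # 2552M, i.e. the ~2.6B/year above
for name, n in totals.items():
    print(f"{name}: {100 * n / grand:.1f}%")
# Prints 40.8 / 23.4 / 25.8 / 10.1, matching the slide within rounding.
```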
17 MC Production Geographic Distribution, events last year (2010/02/15 to 2011/02/14): Europe 1925M (75.4%), N. America 574M (22.5%), Asia 29M (1.1%), S. America 24M (0.9%).
18 MC Production Results: over the last 5.5 years (2005/09 to 2011/02/14; plot cumulative since September), production averaged 19.2M events per week and totaled 2.82B events.
19 MC Production Results, last 5.5 years by production segment. 5.5-year averages per week (2005/09 to 2011/02/14): Non-grid 8.0M, OSG 4.8M, SAMGrid 5.3M, LCG 1.1M.
20 MC Production Results, last 5.5 years by production segment (chart cumulative since September 2005). 5.5-year totals (2005/09 to 2011/02/14): Non-grid 2.26B (41.5%), OSG 1.37B (25.2%), SAMGrid 1.51B (27.7%), LCG 306M (5.6%).
21 Production Results, Last 7 Years: chart and table of DZero MC production in millions of events per year (years ending 12/26), broken out by segment (Non-Grid, SAMGrid, OSG, LCG).
22 Production Results, Last 7 Years: chart and table of DZero MC production in terabytes of data per year (years ending 12/26), broken out by segment (Non-Grid, SAMGrid, OSG, LCG).
23 OU DZero MC Production: from 2005/09 to 2011/02/14, OUHEP produced 306M events and 28.4 TB of data; over the last year (2010/02/15 to 2011/02/14), OUHEP produced 139M events and 14.0 TB of data (plots cumulative since Sept. 2005).
24 LU DZero MC Production: from 2005/09 to 2011/02/14, LUHEP produced 15.5M events and 1.36 TB of data; over the last year (2010/02/15 to 2011/02/14), LUHEP produced 4.6M events and 450 GB of data (plots cumulative since Sept. 2005).
25 LUHEP Computing: 2 grid-enabled clusters, both producing DØ MC. The old SAMGrid cluster has 12 job slots; the new OSG cluster has 12 job slots and a small associated SE used as a DØ cache.
26 Condor Queues at LUHEP: plots of the SAMGrid and OSG cluster queues over the last year.
27 Summary: DZero's early deployment of grid technology and automation has dramatically increased MC production: first deployment of the SAM distributed data handling system, early SAMGrid deployment, use of OSG and LCG resources through interoperability with SAMGrid, first opportunistic usage of OSG Storage Elements, and an automated MC production system. We anticipate adequate MC through the last analysis.