ATLAS COMPUTING AT OU


1 ATLAS COMPUTING AT OU
Horst Severini
OU DOE Review, February 1, 2010

Outline:
- Introduction
- US ATLAS Grid Computing and Open Science Grid (OSG)
- US ATLAS Tier 2 Center
- OU Resources and Network
- Summary and Outlook

2 Introduction
- OUHEP is involved in computing efforts for both ATLAS and DØ
- Long-standing involvement in the ATLAS Data Challenges (DC) and in DØ MC production and data reprocessing efforts
- Also very active in various Open Science Grid (OSG) activities in recent years
- Working closely with Langston University and OSU as part of OCHEP
- Also with LU, UT Arlington, and UNM as part of the US ATLAS SW Tier 2 Center
- Using the OUHEP desktop cluster, a small OSG testbed cluster, the medium-sized OCHEP Tier 2 cluster, and large OSCER resources

3 US ATLAS Grid Computing and Open Science Grid
- OUHEP has been part of ATLAS Grid Computing and the Open Science Grid (OSG) since its very beginning
- OUHEP efforts include testing, integration, and deployment of the OSG infrastructure for ATLAS computing:
  - Early installation and testing of development OSG releases
  - Debugging and bug fixing
  - Participation in integration efforts
  - Installation and continuous monitoring of production releases
- Latest cycle of integration (1.1.7) and deployment (1.2.6) just finished (a validation sketch follows below)
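
As a flavor of what release validation involves: a minimal smoke test of a freshly deployed Compute Element might run a trivial fork job through the gatekeeper with globus-job-run and check that it answers. The sketch below is illustrative only, not the actual OU validation suite; the gatekeeper contact string is hypothetical.

```python
# Illustrative CE smoke test: run a trivial fork job through the gatekeeper
# with globus-job-run and check the output. The contact string below is
# hypothetical, not the actual OU endpoint.
import subprocess

GATEKEEPER = "tier2-01.example.edu/jobmanager-fork"  # hypothetical CE contact

def ce_smoke_test(gatekeeper):
    """Return True if the CE runs a trivial job and reports a hostname."""
    result = subprocess.run(
        ["globus-job-run", gatekeeper, "/bin/hostname"],
        capture_output=True, text=True, timeout=120,
    )
    ok = result.returncode == 0 and result.stdout.strip() != ""
    print(f"{gatekeeper}: {'OK' if ok else 'FAILED'}")
    return ok

if __name__ == "__main__":
    ce_smoke_test(GATEKEEPER)
```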

4 US ATLAS Grid Computing and Open Science Grid
- OU is also involved in OSG Monitoring and Accounting (Karthik):
  - RSV (Resource and Service Validation): integration and debugging, as well as testing of new features
  - Gratia (job accounting and history): integration, debugging, and web interface updates
  - GIP (Generic Information Provider): improving installed-capacity reporting, i.e. dynamic, automated reporting of site computing and storage capacities (illustrated below)
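
To make "installed capacity reporting" concrete, here is a minimal sketch (not the actual GIP code) of how a site's worker-node inventory can be aggregated into GLUE-style attributes for the information system; the inventory figures, storage number, and helper function are purely illustrative, not the OU Tier 2 configuration.

```python
# Hedged sketch of GIP-style installed-capacity reporting: aggregate a
# hypothetical worker-node inventory and emit GLUE 1.3-style attributes.
# All numbers below are illustrative, not the real OU site.

# Hypothetical inventory: hardware class -> (node count, cores per node).
INVENTORY = {
    "xeon-2.33GHz": (40, 4),
    "xeon-3.2GHz":  (21, 4),
}
USABLE_STORAGE_TB = 16  # illustrative figure

def glue_capacity(inventory, storage_tb):
    """Build GLUE 1.3-style capacity attributes from the inventory."""
    cores = sum(nodes * cores_per for nodes, cores_per in inventory.values())
    return {
        "GlueSubClusterLogicalCPUs": cores,
        "GlueSESizeTotal": storage_tb * 1000,  # GLUE sizes are in GB
    }

if __name__ == "__main__":
    for attr, value in glue_capacity(INVENTORY, USABLE_STORAGE_TB).items():
        print(f"{attr}: {value}")
```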

5 US ATLAS Grid Computing and Open Science Grid
- Recent involvement in OSG Education and Outreach (Horst)
- Invited to co-lead several international OSG grid schools: Johannesburg, South Africa, and São Paulo, Brazil
- OSG liaison for the new GridUNESP VO; visited São Paulo to help with the founding of the OSG VO and with the site installation (DOSAR project)
- Valuable because of our cross-disciplinary expertise in both OSG grid middleware and ATLAS software

6 US ATLAS Tier 2 Center
- US ATLAS SW Tier 2 Center (OU in collaboration with LU, UTA, and UNM) very successful
- Main focus is computing, in particular Grid computing and Distributed Analysis
- All of the above are also major components of the OCHEP efforts
- PanDA (Production and Distributed Analysis) production (both MC and DA) running very successfully on the OCHEP Tier 2 cluster
- Also involved in US ATLAS network throughput efforts: testing and improvement of perfSONAR (Karthik)
- Other OU resources for PanDA production are being worked on: the large OSCER cluster (Sooner) and the OU Condor pool (Horst)

7 US ATLAS Tier 2 Hardware
- 61 nodes (260 cores), 2.33/3.2 GHz Xeon-64, 2 GB RAM per core
- 10 support nodes (5 head, 5 storage), 2.33/3.2 GHz Xeon
- … TB of usable DDN/IBRIX storage (24 TB raw)
- ROCKS 4.1 (RHEL4, 64-bit), OSG
- tier2-01: head node; tier2-02: storage transfer node (gsiftp); tier2-05: SRM (Storage Resource Manager)
- Monitoring: Ganglia, MonALISA, Gratia, RSV, cron scripts (example check below)
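
As an example of the cron-script side of this monitoring, the sketch below polls the XML dump that a Ganglia gmond daemon serves on its TCP port and flags worker nodes whose last heartbeat is stale. It is an illustrative sketch, not the actual OU scripts; the host name and threshold are hypothetical.

```python
# Illustrative cron-style health check: read the gmond XML metric dump and
# report hosts without a recent heartbeat. Host and threshold are
# hypothetical, not the actual OU Tier 2 configuration.
import socket
import time
import xml.etree.ElementTree as ET

GMOND_HOST, GMOND_PORT = "tier2-01.example.edu", 8649  # hypothetical host
MAX_AGE = 300  # seconds without a heartbeat before a node is flagged

def fetch_ganglia_xml(host, port):
    """gmond dumps its full metric tree as XML to any TCP connection."""
    with socket.create_connection((host, port), timeout=10) as s:
        chunks = []
        while data := s.recv(65536):
            chunks.append(data)
    return b"".join(chunks)

def stale_hosts(xml_bytes, max_age):
    """Return names of HOST entries whose REPORTED timestamp is too old."""
    root = ET.fromstring(xml_bytes)
    now = time.time()
    return [h.get("NAME") for h in root.iter("HOST")
            if now - int(h.get("REPORTED", 0)) > max_age]

if __name__ == "__main__":
    xml = fetch_ganglia_xml(GMOND_HOST, GMOND_PORT)
    for name in stale_hosts(xml, MAX_AGE):
        print(f"WARNING: no Ganglia heartbeat from {name}")
```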

8 US ATLAS Tier 2 Hardware
- Just ordered an additional 200 TB of high-performance (Lustre) storage and an additional 34 dual quad-core compute nodes
- This will greatly improve our MC generation throughput and particularly data analysis (which requires large amounts of disk space)
- But the increased cluster size will also require more manpower; problem: Karthik is funded by an EPSCoR grant that is ending

9 US ATLAS Tier 2 Hardware
[figure slide; no recoverable text content]

10 [figure slide; no recoverable text content]

11 Other OUHEP Resources
- OUHEP Tier 3 cluster: 39 nodes (77 CPUs), 2 GHz P4/Xeon, 20 TB storage
  - OSG production site, OUHEP SAM station, OSG SAM station
  - Used for DØ SAMGrid production, ATLAS MC, and local theory calculations
  - About to add 4 more dual quad-core nodes with an additional 40 TB, using $20k of ARRA funds
  - Will be used for ATLAS analysis and a PROOF farm
- OUHEP ITB cluster: 8 nodes, 1.4 GHz P4, 80 GB storage
  - OSG integration site
  - Used for OSG and SAMGrid integration testing

12 Other OU Resources
- Large OSCER MPI cluster, Sooner: 534 nodes (4272 cores), 2.0 GHz Xeon-64, 150 TB storage
  - Used for DØ computing as available (opportunistically)
  - Will be used for ATLAS Tier 2 computing as soon as an ATLAS DDM local site mover is available
- 750-node Condor pool: 3.0 GHz P4, 1 GB RAM, 40 GB HD, 100 Mbps network
  - Distributed over campus PC labs
  - WinXP host OS with coLinux and Condor running inside
  - Used for DØ computing, and hopefully for ATLAS as well in the future (submission sketch below)
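
To make the opportunistic-use idea concrete, here is a hedged sketch of submitting vanilla-universe jobs to such a pool, targeting the Linux slots that the coLinux guests advertise. The executable name and requirements expression are illustrative, not the actual DØ workflow.

```python
# Hedged sketch: write a vanilla-universe Condor submit file and hand it to
# condor_submit. Executable and requirements are illustrative placeholders.
import subprocess
import textwrap

SUBMIT = textwrap.dedent("""\
    universe   = vanilla
    executable = run_dzero_mc.sh
    # The coLinux guests advertise themselves as Linux slots:
    requirements = (OpSys == "LINUX") && (Arch == "INTEL")
    output = job.$(Cluster).$(Process).out
    error  = job.$(Cluster).$(Process).err
    log    = job.log
    queue 10
""")

with open("mc.submit", "w") as f:
    f.write(SUBMIT)

# Queue ten jobs; Condor matches them to idle lab machines as they appear.
subprocess.run(["condor_submit", "mc.submit"], check=True)
```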

13 OU Network
- OU connected at 10 Gbps to NLR and Internet2 via OneNet / GPN
- OU campus backbone at 10 Gbps
- 10 Gbps connection straight from SRTC
- 3-4 Gbps from the Tier 2 cluster to BNL (probe sketch below)
- Still some issues with asymmetric routing and BNL-OU throughput
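
For context on how such throughput numbers are chased down, the sketch below is a minimal memory-to-memory probe in the spirit of the perfSONAR/iperf tests: stream data to a sink host for a fixed interval and report the achieved rate. It is not perfSONAR itself, and the sink host and port are hypothetical.

```python
# Minimal memory-to-memory throughput probe (illustrative, not perfSONAR).
# Streams zero-filled buffers to a discard-style sink and reports Gbps.
import socket
import time

CHUNK = 1 << 20  # 1 MiB send buffer

def send_for(host, port, seconds=10):
    """Stream data to the sink for `seconds` and print the achieved rate."""
    buf = b"\0" * CHUNK
    sent, start = 0, time.time()
    with socket.create_connection((host, port)) as s:
        while time.time() - start < seconds:
            s.sendall(buf)
            sent += CHUNK
    gbps = sent * 8 / (time.time() - start) / 1e9
    print(f"{host}: {gbps:.2f} Gbps over ~{seconds}s")

if __name__ == "__main__":
    send_for("netperf.example.bnl.gov", 5001)  # hypothetical sink host/port
```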

14 Summary and Outlook
- OUHEP continues to be a major contributor to ATLAS and DØ computing, as well as to OSG, particularly in many aspects of Grid and distributed computing
- Many things accomplished; much more to do
- Grow the Tier 2 cluster by a factor of two each year, starting with storage, hopefully by the end of this month
- This will require more manpower and more hardware funding

15 Summary and Outlook (cont.)
- Start utilizing OSCER resources better (for both ATLAS and DØ computing)
- Aid the expansion of the OU Condor pool to open up more resources
- Continue to make major contributions to the development, integration, and deployment of grid middleware and ATLAS software, i.e., OSG integration, deployment, monitoring, and accounting, as well as ATLAS distributed production and data management
- All achievable, but continued personnel and hardware funding is crucial
