Site Report. Stephan Wiesand, DESY DV, 2005-10-12

Where we're headed
- HERA (H1, HERMES, ZEUS)
- HASYLAB -> PETRA III
- PITZ
- VUV-FEL: first experiments; X-FEL: in planning stage
- ILC: R&D
- LQCD: parallel computing
- Theory
- participation in AMANDA -> IceCube
- future: participation in either ATLAS or CMS; offer to run a Tier-2 centre for both

GRID Deployment
- LCG-2_6_0 on SL 3.0.x, managed by Quattor & yaim
- SE (dCache) with access to the entire DESY data space
- 200 CPUs by end of '05
- RB, BDII, PXY, RLS; VOMS soon
- VOs managed: hone, hermes, zeus, herab, ildg, ilc/calice, dcms, baikal, icecube
- VOs supported: atlas, cms, dech (see the query sketch below)
- http://grid.desy.de
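
The list of supported VOs is published by the site BDII over LDAP. As an illustration only, a client could query which computing elements accept a given VO; this is a minimal sketch assuming the python-ldap package and the Glue 1.x information schema, with a placeholder BDII hostname.

```python
# Minimal sketch: ask a site BDII which computing elements accept a given VO.
# Assumes python-ldap and the Glue 1.x schema; the hostname is a placeholder.
import ldap

BDII = "ldap://bdii.example.desy.de:2170"   # placeholder, not the real host
BASE = "mds-vo-name=local,o=grid"

def ces_for_vo(vo):
    con = ldap.initialize(BDII)
    # CEs advertise the VOs they accept via GlueCEAccessControlBaseRule entries
    filt = "(&(objectClass=GlueCE)(GlueCEAccessControlBaseRule=VO:%s))" % vo
    results = con.search_s(BASE, ldap.SCOPE_SUBTREE, filt, ["GlueCEUniqueID"])
    return [attrs["GlueCEUniqueID"][0].decode() for _, attrs in results]

if __name__ == "__main__":
    for ce in ces_for_vo("hone"):
        print(ce)
```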

Connectivity & Infrastructure
- 1 Gb/s Ethernet XWiN connection in 2006 (Hamburg site); bandwidth initially 300 or 600 Mb/s, according to needs (see the sketch below for what these rates mean for bulk transfers)
- VPN connection to GridKa (point-to-point), likely 10 Gb/s in 2006
- Zeuthen site will hopefully get 1 Gb/s point-to-point to Hamburg
- additional machine room of 300 m² will be ready in spring; power/UPS/cooling prepared; same monitoring as HERA infrastructure
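
For a rough feel of the quoted rates, a short back-of-the-envelope calculation, purely illustrative, assuming a fully saturated link and ignoring protocol overhead:

```python
# Illustrative only: ideal transfer time for 1 TB at the quoted link rates,
# assuming a fully saturated link and no protocol overhead.
def hours_per_terabyte(rate_mbit_s):
    bits = 1e12 * 8            # 1 TB expressed in bits
    return bits / (rate_mbit_s * 1e6) / 3600

for rate in (300, 600, 1000, 10000):
    print("%5d Mb/s -> %5.1f h per TB" % (rate, hours_per_terabyte(rate)))
# 300 Mb/s -> ~7.4 h, 600 Mb/s -> ~3.7 h, 1 Gb/s -> ~2.2 h, 10 Gb/s -> ~0.2 h
```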

GRID Development
- International Lattice DataGrid (ILDG) will allow worldwide sharing of data from lattice QCD simulations within a grid-of-grids
- DESY coordinates deployment of an LCG-based datagrid for lattice groups in France/Germany/Italy
- work is done in cooperation with ZAM (Jülich) and ZIB (Berlin)
- DESY implements and will maintain a metadata catalogue for storing XML documents (see the sketch below)
- all Grid components exist (at least at prototype level); further work is needed to reach production level
- key issue for next year: interoperability with the other ILDG grids
- http://www-zeuthen.desy.de/latfor/ldg
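
To illustrate what "a metadata catalogue for storing XML documents" amounts to on the client side, here is a minimal sketch that parses a lattice-ensemble metadata record; the element names are hypothetical and do not reproduce the actual ILDG QCDml schema.

```python
# Minimal sketch: parsing an XML metadata record as it might be stored in the
# ILDG metadata catalogue. Element names are hypothetical, not the real QCDml schema.
import xml.etree.ElementTree as ET

record = """
<ensemble>
  <collaboration>LATFOR</collaboration>
  <action>Wilson</action>
  <lattice nx="24" ny="24" nz="24" nt="48"/>
  <beta>5.29</beta>
</ensemble>
"""

root = ET.fromstring(record)
size = root.find("lattice").attrib
print("collaboration:", root.findtext("collaboration"))
print("lattice: %(nx)sx%(ny)sx%(nz)sx%(nt)s" % size)
print("beta:", root.findtext("beta"))
```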

GRID Use: increasing
- H1 and ZEUS Monte Carlo production
- ILC simulation data exchange
- ZEUS MC production 05/06 (by H. Stadie et al.)
- H1 MC production (by M. Karbach et al.)

Plain Old Computing: platforms supported
- Windows: desktop, login, CAD, services; slowly increasing use for number crunching
- Linux: desktop, login, farms, services
- Solaris: services, still some login facilities
  - working on Solaris 10 (still SPARC only): 4 production systems w/o AFS, 4 test systems (OpenAFS 1.4-rc6)
- no OS X support, no plans

Windows
- migration to win.desy.de domain (XP, 2003) complete
  - some 300 systems left in DESYNT (accelerator controls, or hardware not sufficient for the new OS)
- 3300 user accounts, 2950 systems (2430 RIS installs); ca. 1800 online daily
- 75% of XP clients with SP2
- 2-node clusters or simple failover for most services: storage, web, SQL, license management
- 8 TB storage, initial home quota 500 MB
- Exchange 2003 cluster (migration in April '05): 4 HP blades, 2 TB storage (HP MSA 1000)

Exchange 2003 @ DESY
- 2 TB disk space: 9 x 146 GB for data, 8 x 72 GB for transaction logs

Mailboxes @ Exchange 2003
[Plot: distribution of mailbox sizes, October 2005; mailbox size in KB (log scale) for the ~2800 mailboxes]
- ~2800 mailboxes on Exchange, ~2200 used (e-mail delivered to Exchange)
- e-mail delivered to UNIX cluster: ~3300 mailboxes
- e-mail address management via DESY registry
- quota: currently only informational e-mails; planned: 200 MB standard, 650 MB for "power users", 1 GB for special (shared) mailboxes, 2 GB hard limit (due to mailbox moves; the maximum allowed PST file size) - see the arithmetic below
- virus check by GroupShield for Exchange 5.2
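
As a rough plausibility check of the planned quotas against the disk layout above, illustrative arithmetic only, assuming every Exchange mailbox used its full standard quota:

```python
# Illustrative arithmetic: planned standard quota vs. available Exchange data volume.
mailboxes = 2800                 # ~2800 mailboxes on Exchange
standard_quota_gb = 0.2          # planned 200 MB standard quota
data_volume_gb = 9 * 146         # 9 x 146 GB data disks

worst_case_gb = mailboxes * standard_quota_gb
print("worst case at standard quota: %.0f GB of %d GB (%.0f%%)"
      % (worst_case_gb, data_volume_gb, 100.0 * worst_case_gb / data_volume_gb))
# -> 560 GB of 1314 GB (~43%), so the planned standard quota fits comfortably
```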

Windows: Management
- MS Premier Support since October
- SUS -> WSUS for updates & hotfixes (see presentation by Reinhard)
- Shavlik for server patch management
- Insight Manager for driver updates
- McAfee VirusScan Enterprise 8.0i
- NetInstall for application software
- ongoing work: advanced notebook support, extension of terminal services

Windows: Terminal Services
- Zeuthen has run a 2-node cluster for a few years, for Linux desktop users
  - Citrix MetaFrame; advantage: published applications
  - increasing use of RDP: simpler to use and support, rdesktop improving
  - no NetInstall (to come soon)
- now a 2nd cluster for mail & internet access by Windows users with SAP access, due to security considerations; RDP only
- Hamburg now has a 2-node pilot system: RDP only, with NetInstall

Linux
- default system: SL3, i386 or amd64 (farm nodes & servers)
- SL4 for some servers, notebooks
  - need a stable AFS client for most systems
  - no widespread adoption in HEP? yet? SL5 in time for LHC?
- many DL5 systems left; many systems w/o central support (usually Debian)
- Quattor working group
  - used for GRID nodes (alone or with yaim, depending on type)
  - production cluster: migration to Quattor 1.1
  - development of a template structure for general use

Batch (Zeuthen Shared Farm)
- resources (all running SL 3.0.5):
  - 65 dual-Opteron Sun V20z, 2.2-2.6 GHz, 4-8 GB, 64-bit SL
  - 50 dual-Xeon Sun V65x, 3.2 GHz, 2 GB
  - plus 50 dual-Pentium III white boxes, 800 MHz, 0.5-1 GB
- Sun Grid Engine version 6u4, in production since June
  - fully kerberized (K5), including AFS support; ticket/token handling by arcx/arcxd
  - capacity assigned to projects using share tree & fair share (see the submission sketch below)
  - 4700 jobs/day, avg. duration 30', V20/V65 utilization 80%
  - running smoothly after bugs reported to Sun were fixed
- Grid integration by David McBride tested once
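
As an illustration of the project-based fair-share setup, the sketch below submits a job under a project so that the share-tree policy can account for it. It uses standard SGE qsub options only; the project and script names are placeholders, and the site-specific arcx/arcxd ticket handling is not reproduced.

```python
# Minimal sketch: submit a batch job under an SGE project so the share-tree /
# fair-share policy can account for it. Project and script names are placeholders;
# the site-specific arcx/arcxd Kerberos ticket handling is not shown.
import subprocess

def submit(script, project, runtime="08:00:00"):
    cmd = [
        "qsub",
        "-P", project,            # project used by the share-tree policy
        "-l", "h_rt=" + runtime,  # requested wall-clock limit
        "-N", "mc_production",    # job name
        script,
    ]
    out = subprocess.run(cmd, capture_output=True, text=True, check=True)
    print(out.stdout.strip())     # e.g. "Your job 12345 (...) has been submitted"

submit("run_simulation.sh", project="theory")   # placeholder project name
```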

Parallel Computing
- apeNEXT: massively parallel supercomputer optimized for lattice QCD
  - 2 large prototypes running physics codes
  - 1st of 3 machines shipped to DESY this week; 4 machines = 3 TFlops at DESY in January
  - ongoing development of TAO & C compilers, assembly optimiser, operating system
  - http://www-zeuthen.desy.de/ape
- PC clusters (see the MPI sketch below)
  - 16 & 32 nodes dual Xeon, Myrinet
  - 16 nodes dual Opteron, Infiniband
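
The Myrinet and Infiniband clusters run message-passing codes. As a generic illustration of that programming model (not the actual lattice QCD codes, which are written in TAO and C), here is a minimal MPI sketch using the mpi4py bindings:

```python
# Minimal MPI illustration of the message-passing model used on the PC clusters.
# Generic sketch only; the production lattice QCD codes are written in TAO and C.
# Run with e.g.:  mpirun -np 4 python mpi_sketch.py
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

# each rank computes a partial sum, then all ranks combine them
local = sum(range(rank * 1000, (rank + 1) * 1000))
total = comm.allreduce(local, op=MPI.SUM)

if rank == 0:
    print("ranks: %d, global sum: %d" % (size, total))
```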

Tools
- RT: free request tracker (Best Practical)
  - used for user requests to the UCO and a few other things
  - interfaces: web, mail, CLI
  - run in Hamburg, used by both sites; both shared and dedicated queues
  - complete success
- Wikis: several existing or planned
  - Zeuthen looked at MediaWiki & MoinMoin, now deploying MoinMoin (ACLs; see the configuration sketch below)
  - very successful application: minutes
- PMDF -> Sympa for mailing list management soon
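
MoinMoin was chosen mainly because of its access control lists; its site configuration is itself a Python module. A minimal sketch of what such an ACL setup could look like follows (attribute names as in MoinMoin 1.3/1.5-era wikiconfig.py; the site and group names are placeholders):

```python
# Minimal sketch of a MoinMoin wikiconfig.py with access control lists enabled.
# Attribute names as in MoinMoin 1.3/1.5-era configuration; names are placeholders.
from MoinMoin.multiconfig import DefaultConfig

class Config(DefaultConfig):
    sitename = u'Zeuthen Wiki'          # placeholder site name

    # evaluated before any page-level #acl line; admins always keep full rights
    acl_rights_before = u'AdminGroup:read,write,delete,revert,admin'
    # default for pages without their own #acl line
    acl_rights_default = u'Known:read,write All:read'
    # evaluated after page-level ACLs; nothing forced here
    acl_rights_after = u''
```

Individual pages, e.g. meeting minutes, can then carry their own #acl line to restrict access to a particular group.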