Forschungszentrum Karlsruhe in der Helmholtz-Gemeinschaft. Presented by Manfred Alef. Contributions by Jos van Wezel and Andreas Heiss.

Site Report. Presented by Manfred Alef, contributions by Jos van Wezel and Andreas Heiss. Grid Computing Centre Karlsruhe (GridKa), Forschungszentrum Karlsruhe, Institute for Scientific Computing, Hermann-von-Helmholtz-Platz 1, D-76344 Eggenstein-Leopoldshafen. http://www.fzk.de, http://www.gridka.de, firstname.lastname@iwr.fzk.de

GridKa is the German LHC Tier-1 centre, located at the Institute for Scientific Computing (Institut für wissenschaftliches Rechnen, IWR) of Forschungszentrum Karlsruhe. Four additional HEP experiments are served by the Tier-1 centre: + BaBar (SLAC) + CDF (Fermilab) + D0 (Fermilab) + Compass (CERN)

GridKa Hardware (Milestone October 2005)

GridKa Hardware (Milestone October 2005): Worker nodes: 780 boxes (1,560 CPUs): 89 *2 Intel Pentium 3, 1.266 GHz; 65 *2 Intel Xeon, 2.2 GHz; 72 *2 Intel Xeon, 2.66 GHz; 278 *2 Intel Xeon, 3.06 GHz; 277 *2 AMD Opteron 246 (2 GHz); +135 boxes. 1-2 GB memory, 40-120 GB disk per system.

GridKa Hardware (Milestone October 2005): Worker nodes: 780 boxes (1,560 CPUs): 89 *2 Intel Pentium 3, 1.266 GHz; 65 *2 Intel Xeon, 2.2 GHz; 72 *2 Intel Xeon, 2.66 GHz; 278 *2 Intel Xeon, 3.06 GHz; 277 *2 AMD Opteron 246 (2 GHz); AMD Opteron 270 (2 x 2 GHz), test/benchmarks*; Intel Pentium M 760 (2 GHz), test/benchmarks* (* see special talk)

GridKa Hardware (Milestone October 2005): Worker nodes: 780 boxes (1,560 CPUs), running SL 3 + LCG 2.6.
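
As a quick cross-check of the worker-node inventory above, the per-type box counts can be summed and doubled (all nodes are dual-CPU systems); a minimal Python sketch, with the counts taken from the slide:

    # Worker-node box counts per CPU type, as listed on the October 2005 slide
    boxes = {
        "Intel Pentium 3, 1.266 GHz": 89,
        "Intel Xeon, 2.2 GHz":        65,
        "Intel Xeon, 2.66 GHz":       72,
        "Intel Xeon, 3.06 GHz":      278,
        "AMD Opteron 246, 2 GHz":    277,
    }
    total_boxes = sum(boxes.values())   # comes out at roughly the quoted 780 boxes
    total_cpus  = 2 * total_boxes       # every box is a dual-CPU system -> ~1,560 CPUs
    print(total_boxes, total_cpus)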

GridKa Hardware (Milestone October 2005): 350 TB (gross) disk storage: 30 TB IDE RAIDs (3Ware 7850); 280 TB FC RAIDs - 14 IBM FAStT700 storage controllers - 146 GB FC disks, 10k RPM; 40 TB SATA/FC RAIDs - DataDirect Networks S2A 8500, 400 GB SATA disks. (Storage info prepared by Jos van Wezel, GridKa)

GridKa Hardware (Milestone October 2005): 350 TB (gross) disk storage: 30 TB IDE RAIDs (3Ware 7850); 280 TB FC RAIDs - 14 IBM FAStT700 storage controllers - 146 GB FC disks, 10k RPM; 40 TB SATA/FC RAIDs - DataDirect Networks S2A 8500, 400 GB SATA disks. Load-balanced NFS-exported file systems on 13 GPFS servers allow an aggregate throughput of over 1 GB/s.
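
The disk-storage figures above can be checked in the same way; a minimal sketch, assuming 1 GB/s = 1000 MB/s and an even spread of load over the 13 GPFS/NFS servers:

    # Gross disk capacity per storage class (TB), from the slide
    ide_tb, fc_tb, sata_tb = 30, 280, 40
    print(ide_tb + fc_tb + sata_tb)        # 350 TB gross, matching the quoted total

    # Rough per-server share of the >1 GB/s aggregate NFS throughput
    servers = 13
    aggregate_mb_s = 1000                  # 1 GB/s, the lower bound quoted on the slide
    print(aggregate_mb_s / servers)        # ~77 MB/s per GPFS/NFS server on average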

GridKa Hardware (Milestone October 2005): Disk cache (dcache) of approx. 40 TB. Uses GPFS file systems to overcome severe write-performance problems seen with EXT3. Interfaced to Tivoli Storage Manager (TSM) with LAN-free access for high throughput.

GridKa Hardware (Milestone October 2005): Frontend systems (1-2 per experiment), used as - interactive login servers, - experiment-specific software servers, - LCG "VO boxes". Migration to SL (from RH 7.1-7.3 so far) and to new hardware has started.

Next procurements (Milestone 2006): EU-wide invitation to tender for CPUs has started: systems with a total compute performance of at least 540,000 SPECint_base2000. Vendors have to prove the compute power by benchmarks (we don't specify the number of boxes this time!). Economic aspects will be considered in the decision (number of rack units, number of boxes/network connections, electric power consumption).
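
To give a feel for what 540,000 SPECint_base2000 means in box counts, here is a purely hypothetical sizing sketch; the per-CPU rating of 1,500 SPECint_base2000 is an assumed value for a contemporary dual-CPU server, not a figure from the tender:

    # Hypothetical sizing: how many dual-CPU boxes would deliver the tendered capacity?
    required_specint = 540_000             # total SPECint_base2000 from the tender
    per_cpu_specint  = 1_500               # ASSUMED rating of one contemporary CPU
    cpus_per_box     = 2
    boxes_needed = required_specint / (per_cpu_specint * cpus_per_box)
    print(boxes_needed)                    # -> 180 boxes under these assumptions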

Next procurements (Milestone 2006): Preparing an EU-wide invitation to tender for disk storage. 20 water-cooled cabinets (Knuerr CoolTherm) have been ordered.

Infrastructure: Expanding the water distribution system to more rooms. New water chillers (4 x 600 kW, n+1) available soon.

SC3 at GridKa: (SC3 slides prepared by Andreas Heiss, GridKa)

SC3 at GridKa: Data transfers, throughput phase (July): FTS transfers from CERN to GridKa with approx. 50 MB/s average / 100 MB/s peak to dcache disk (SC3 goal: 150 MB/s). Issues: - dcache disk file system (EXT3) problem at GridKa - FTS / Castor problems at CERN
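
For scale, the quoted transfer rates translate into daily data volumes as follows; a minimal sketch, using 1 TB = 10^6 MB:

    # Convert sustained transfer rates into daily data volumes
    seconds_per_day = 86_400
    for rate_mb_s in (50, 100, 150):       # average, peak, and SC3 goal from the slide
        tb_per_day = rate_mb_s * seconds_per_day / 1_000_000
        print(f"{rate_mb_s} MB/s -> {tb_per_day:.1f} TB/day")
    # 50 MB/s -> 4.3 TB/day, 100 MB/s -> 8.6 TB/day, 150 MB/s -> 13.0 TB/day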

SC3 at GridKa: Data transfers, throughput phase (July): Most Tier-1 centres did not achieve the SC3 throughput goal in July.

SC3 at GridKa: Data transfers, throughput phase (July): No tape connection was available. CMS PhEDEx transfers to dcache disk reached up to 100 MB/s peak.

SC3 at GridKa: Data transfers, September: GridKa SC3 dcache pool nodes got SAN disks with a GPFS file system: approx. 40 TB of disk attached to 8 dcache pool machines. dcache tape connection installed.

SC3 at GridKa: Data transfers, September: In September, after the official throughput phase, the SC3 goal (150 MB/s to disk) was achieved on several days, with peak rates of up to 280 MB/s hourly average.


SC3 at GridKa: Tier-1 to Tier-2 transfers: PhEDEx transfers (CMS) from CERN to GridKa and from GridKa to DESY (Jens Rehn). GridFTP and SRM transfers tested to/from DESY, FZU, and GSI.
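
One way such a GridFTP test transfer can be driven is with the standard globus-url-copy client; a minimal sketch of a point-to-point test, where the hostnames and paths are placeholders rather than the actual SC3 endpoints, and a valid grid proxy is assumed to be in place:

    # Minimal GridFTP transfer test between two sites (hypothetical endpoints).
    # Requires a valid grid proxy (e.g. from grid-proxy-init or voms-proxy-init).
    import subprocess

    src = "gsiftp://gridftp.tier1.example.de/pnfs/example/sc3/testfile"   # placeholder
    dst = "gsiftp://gridftp.tier2.example.de/data/sc3/testfile"           # placeholder

    # globus-url-copy copies one file between GridFTP servers; -p sets the
    # number of parallel TCP streams used for the transfer.
    subprocess.run(["globus-url-copy", "-p", "4", src, dst], check=True)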

SC3 at GridKa: LCG 2.6.0 setup for the service phase: The experiments' login nodes are used as VO boxes. BDII, RB, PX, MON, and SE (classic) are used from the production system. Separate SC3 setup with UI, CE (all GridKa WNs available), and LFC. A dcache SE is currently being installed. The FTS server for Tier-1 to Tier-2 transfers is operational.

SC3 at GridKa: CMS and ALICE started their service phase in September. No major problems reported, except for the CERN blockade incident on Friday, September 16.

GridKa School, 26-30 September 2005: Grid-related talks and training sessions, more than 80 participants.

Questions, Comments? Forschungszentrum Karlsruhe