Middleware Tests with our Xen-based Testcluster


Tier-2 meeting March 3, 2008

1. Introduction
   - Overview of the Testcluster
   - Overview of the installed Software
   - Xen
2. Main
   - Original Usage of the Testcluster
   - Present Activities
   - The Testcluster
   - Future Activities

Overview of the Testcluster

Hardware features of the Testcluster:
- 2 CPUs, 3.0 GHz each
- 2 GByte of RAM
- 80 GByte SATA hard disk

Overview of the installed Software

Software installed on the cluster:
- Basic operating system: Scientific Linux CERN (SLC4) with Xen kernel
- 5 virtual operating systems (8 GByte each; an example guest configuration is sketched below):
  - test-glitece (SLC3)
  - test-wn2 (SLC3)
  - test-lcgce (SLC3)
  - test-wn (SLC3)
  - test-se (SLC3)
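For illustration only: a Xen 3 guest configuration is a plain Python file under /etc/xen/. A minimal sketch for one of the SLC3 guests might look like the following; the file name, kernel paths, image location, memory size and MAC address are assumptions, not the cluster's actual settings.

    # /etc/xen/test-wn.cfg -- hypothetical Xen 3 domU configuration
    # (evaluated as Python; all values are illustrative assumptions)
    name    = "test-wn"                     # domU name shown by "xm list"
    memory  = 256                           # MByte of RAM for the guest
    kernel  = "/boot/vmlinuz-2.6-xenU"      # paravirtualised guest kernel
    ramdisk = "/boot/initrd-2.6-xenU.img"

    # 8 GByte file-backed disk image holding the SLC3 root filesystem
    disk = ["file:/var/xen/images/test-wn.img,xvda1,w"]
    root = "/dev/xvda1 ro"

    # one virtual network interface, bridged onto the host network
    vif = ["mac=00:16:3e:00:00:01, bridge=xenbr0"]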

Xen

Why Xen:
- Highest performance of all virtualisation solutions
- Xen is easy to install (source or binary)
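Also for illustration, starting and inspecting the guests with the Xen 3 "xm" toolstack can be wrapped in a short Python helper like the sketch below; the domain names match the VMs listed above, everything else (script layout, config file paths) is an assumption.

    #!/usr/bin/env python
    # Hypothetical helper that boots any of the test-cluster domUs that
    # are not currently running; a sketch, not the cluster's real tooling.
    import subprocess

    DOMUS = ["test-glitece", "test-wn", "test-wn2", "test-lcgce", "test-se"]

    def running_domains():
        # "xm list" prints a header line, then one line per domain;
        # the first column of each line is the domain name.
        out = subprocess.Popen(["xm", "list"],
                               stdout=subprocess.PIPE).communicate()[0]
        return [line.split()[0] for line in out.decode().splitlines()[1:]
                if line.strip()]

    def start_missing():
        up = running_domains()
        for dom in DOMUS:
            if dom not in up:
                # boot the guest from its Xen configuration file
                subprocess.call(["xm", "create", "/etc/xen/%s.cfg" % dom])

    if __name__ == "__main__":
        start_missing()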

Original Usage of the Testcluster

Original usage of the test cluster:
- Testing of middleware upgrades on the Testcluster
- Testing of new types of worker nodes (LFC server)
- Learning how to send jobs to the Grid (see the sketch after this list)
- Providing a testbed for our team and other users
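A minimal version of the "sending jobs to the Grid" exercise, assuming an LCG-2 style user interface: the sketch writes a trivial JDL file and submits it with edg-job-submit. The JDL contents, file name and VO are made-up examples; on a pure gLite UI the submission command would be glite-wms-job-submit instead.

    #!/usr/bin/env python
    # Hypothetical submission of a trivial Grid job from a UI node;
    # all names below are illustrative, not taken from the presentation.
    import subprocess

    JDL = '''\
    Executable    = "/bin/hostname";
    StdOutput     = "std.out";
    StdError      = "std.err";
    OutputSandbox = {"std.out", "std.err"};
    '''

    with open("hello.jdl", "w") as f:
        f.write(JDL)

    # LCG-2 era command; "dteam" is the usual test VO (an assumption here)
    subprocess.call(["edg-job-submit", "--vo", "dteam", "hello.jdl"])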

Present Activities

Present usage of the Testcluster: at the moment we are testing a whole new cluster in which all node types are based entirely on SL4.
- test-glitece (UI based on SL4)
- test-lcgce (SL4), minor problems
- test-wn (SL4) and test-wn2 (SL4)
- test-dpm (SL4)

The Testcluster

Future Activities

Future activities:
- Solve the remaining problems with the test-lcgce (SL4). At the moment we can send jobs from a UI to the test cluster, but there are still some problems with the SAM monitoring tool.
- Learn how to migrate the MySQL databases from the old SLC3-based DPM to the SL4 DPM (a rough sketch follows below).
- Extend the usage of Xen: install UI, grid-rb and grid-mon on a single piece of hardware.
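The DPM database migration could, in rough outline, be a dump-and-restore of the DPM MySQL databases; the sketch below assumes the conventional DPM database names cns_db and dpm_db, while the host names and the use of the root account are assumptions.

    #!/usr/bin/env python
    # Hypothetical dump-and-restore of the DPM MySQL databases from the
    # old SLC3 node to the new SL4 node; host names and credentials are
    # made up, cns_db and dpm_db are the usual DPM database names.
    import subprocess

    OLD_HOST, NEW_HOST = "test-se", "test-dpm"   # assumed node names
    DATABASES = ["cns_db", "dpm_db"]

    for db in DATABASES:
        dump = "%s.sql" % db
        # dump the database from the old SLC3-based DPM head node
        # (-p makes the mysql client prompt for the password)
        subprocess.call("mysqldump -h %s -u root -p %s > %s"
                        % (OLD_HOST, db, dump), shell=True)
        # load the dump into MySQL on the new SL4-based DPM head node
        subprocess.call("mysql -h %s -u root -p %s < %s"
                        % (NEW_HOST, db, dump), shell=True)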