The University of Oxford campus grid: expansion and integrating new partners
Dr. David Wallom, Technical Manager


Outline
- Overview of OxGrid
- Self-designed components
- Users
- Resources: adding new local resources, or becoming part of national/international grids?

OxGrid, a University Campus Grid
- A single entry point for users to shared and dedicated resources
- Seamless access to the NGS and OSC for registered users

Authorisation and Authentication
- Users requiring external system access use standard UK e-Science Digital Certificates
- Users that only access university resources use a Kerberos CA system connected to the University authentication system
- Closely observing developments in Shibboleth-grid projects
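As a rough illustration of what either credential path looks like before a job is submitted, the sketch below wraps the standard GT2 command-line tools in Python; the 12-hour validity window is an assumption for illustration, not a stated OxGrid policy.

```python
import subprocess

def have_valid_proxy(hours: int = 12) -> bool:
    """Return True if the user already holds a proxy certificate
    valid for at least `hours` more hours (checked via GT2's
    grid-proxy-info)."""
    result = subprocess.run(
        ["grid-proxy-info", "-exists", "-valid", f"{hours}:00"],
        capture_output=True,
    )
    return result.returncode == 0

if not have_valid_proxy():
    # UK e-Science certificate holders create a proxy directly;
    # Kerberos CA users obtain their short-lived certificate first.
    print("No valid proxy found: run grid-proxy-init "
          "(or the Kerberos CA client) before submitting jobs.")
```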

Software
- Virtual Data Toolkit, using Globus Toolkit version 2.4 with several enhancements
- GSI-enhanced OpenSSH
- MyProxy client and server
- Our own lightweight Virtual Organisation Manager (OxVOM) and accounting system, developed in-house
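A minimal sketch of how the MyProxy client and server fit into this stack, assuming a hypothetical repository host myproxy.oxgrid.example and account name (neither is given in the talk): the user delegates a medium-lived credential once, and short-lived proxies can later be retrieved from it.

```python
import subprocess

MYPROXY_SERVER = "myproxy.oxgrid.example"  # hypothetical hostname
USERNAME = "oxgrid001"                     # illustrative account name

# 1. Create a local proxy from the user's certificate (GSI).
subprocess.run(["grid-proxy-init"], check=True)

# 2. Delegate a credential to the MyProxy repository so that services
#    (e.g. a resource broker) can renew proxies on the user's behalf.
subprocess.run(
    ["myproxy-init", "-s", MYPROXY_SERVER, "-l", USERNAME],
    check=True,
)

# 3. Later, retrieve a fresh short-lived proxy from the repository.
subprocess.run(
    ["myproxy-logon", "-s", MYPROXY_SERVER, "-l", USERNAME],
    check=True,
)
```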

Services necessary to connect to OxGrid
For a system to connect to OxGrid, it must support a minimum software set, without which it is impossible to submit jobs from the Resource Broker:
- GT2 GRAM with an RUS-reconfigured jobmanager
- An MDS-compatible information provider
Desirable, though not mandated:
- OxVOM-compatible grid-mapfile installation scripts
- A scheduling system that gives fair-share to users of the resource
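For context, a grid-mapfile is simply a list of certificate subject DNs mapped to local accounts; the entry format shown in the comment and the parser below are an illustrative sketch (the DN, account name, and path are made up), suggesting what an OxVOM-generated mapfile would contain.

```python
import shlex

# Example grid-mapfile entry (DN and account are illustrative):
#   "/C=UK/O=eScience/OU=Oxford/CN=some user" oxgrid001

def parse_gridmap(path: str) -> dict[str, str]:
    """Map certificate subject DNs to local accounts."""
    mapping = {}
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#"):
                continue
            # shlex honours the quoted DN, which contains spaces.
            dn, account = shlex.split(line)[:2]
            mapping[dn] = account
    return mapping

grid_map = parse_gridmap("/etc/grid-security/grid-mapfile")
```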

Expansion
- Due to the federated nature of the university, expansion into other colleges and departments is very similar to expansion into other universities
- Have to satisfy resource owners of the value
- Aim to get one of their own users onto the grid

Core Resources
Available to all users of the campus grid.
- Individual departmental clusters (PBS, SGE)
  - Grid software interfaces installed
  - Users managed through pool accounts or manual account creation
- Clusters of PCs running Condor/SGE
  - A single master running up to ~500 nodes
  - Masters run either by their owners or by OeRC
  - Execution environment on a second OS (Linux), Windows, or a virtual machine
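As a rough sketch of how a trivially parallel task reaches one of these Condor pools, the script below writes a minimal vanilla-universe submit description and hands it to condor_submit; the executable name, 100-job sweep, and Linux/32-bit requirement are illustrative only.

```python
import subprocess

# Minimal Condor submit description for a batch of serial runs.
submit = """\
universe     = vanilla
executable   = my_simulation
arguments    = $(Process)
requirements = (OpSys == "LINUX") && (Arch == "INTEL")
output       = run.$(Process).out
error        = run.$(Process).err
log          = sweep.log
queue 100
"""

with open("sweep.sub", "w") as f:
    f.write(submit)

# Submit all 100 jobs to the pool's single central manager.
subprocess.run(["condor_submit", "sweep.sub"], check=True)
```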

External Resources
Only accessible to users that have registered with them.
- National Grid Service
  - Peered access with individual systems
- OSC
  - Gatekeeper system
  - User management done through standard account-issuing procedures and manual DN mapping
  - Controlled grid submission to the Oxford Supercomputing Centre
- Some departmental resources
  - Used as a method to bring new resources online initially and show the benefits of joining the grid
  - Access to resources donated by other departments is limited, to maintain the incentive to become full participants

System Layout
Current resources:
All users:
- OUCS Linux Pool (Condor, 250 CPU, 32-bit)
- Oxford NGS node (PBS, 128 CPU, 32-bit)
- Condensed Matter Physics (Condor, 10 CPU, 32-bit)
- Theoretical Physics (Rocks SGE, 14 CPU, 32-bit)
- OeRC Cluster (Rocks SGE, 5 CPU, 32-bit)
- OeRC MS-Cluster (CCS, 48 CPU, 64-bit)
- High Energy Physics (LCG-PBS, 120 CPU, 32-bit), not registering with the RB
Registered users only:
- OSC (Zuse, 40 CPU, 64-bit)
- NGS (all nodes, 342 CPU, 32-bit)
- Biochemistry (SGE, 30 CPU, 32/64-bit)
- Materials Science (SGE, 20 CPU, 64-bit)
- GridPP nodes (those that support the NGS VO, 32-bit)
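The resource broker's view of these machines comes from their MDS information providers; a query along the following lines lists what a resource publishes. The host name is hypothetical; port 2135 and the base DN "mds-vo-name=local, o=grid" are the usual GT2 conventions.

```python
import subprocess

# Query a resource's GT2 MDS information server over LDAP.
subprocess.run(
    [
        "grid-info-search",
        "-x",                                 # anonymous bind
        "-h", "gatekeeper.oxgrid.example",    # hypothetical host
        "-p", "2135",                         # conventional MDS port
        "-b", "mds-vo-name=local, o=grid",    # conventional base DN
    ],
    check=True,
)
```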

Users
- Conscious decision to target trivially parallel tasks, removing unnecessary load from the University HPC facility
- Total ~30 users:
  - Statistics (1 user)
  - Materials Science (1 user)
  - Inorganic Chemistry (2 users)
  - Theoretical Chemistry (2 users)
  - Biochemistry (6 users)
  - Computational Biology (3 users)
  - Astrophysics (3 users)
  - Centre for Ecology and Hydrology (2 users)
  - OeRC (6 users)

User Code
- The vast majority of serial users have built their own code, distributed within
- Starting to see commercial code being requested, e.g. Matlab
- Providing a gateway to MPI/SMP resources is the next stage of development (sketched below)
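A sketch of what such a gateway would ultimately hand to a resource: a GT2 RSL request marking the job as MPI, submitted with globusrun. The gatekeeper contact string, paths, and process count are illustrative, not real OxGrid endpoints.

```python
import subprocess

# GT2 RSL describing an 8-way MPI job (values are illustrative).
rsl = (
    "&(executable=/home/oxgrid001/bin/solver)"
    "(jobtype=mpi)"
    "(count=8)"
    "(directory=/home/oxgrid001/run)"
)

# Submit via the resource's PBS jobmanager; -o streams stdout back.
subprocess.run(
    ["globusrun", "-o", "-r", "cluster.oxgrid.example/jobmanager-pbs", rsl],
    check=True,
)
```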

Problems to overcome when expanding internally
Sociological:
- Getting academics to share resources
- IT officers in departments and colleges
Technical:
- Minimal firewall problems (see the note below)
- Information servers
- OS versions
- Programming languages
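One standard way such firewall problems are kept minimal with GT2 is to pin Globus's ephemeral callback ports to a range the firewall permits, using a documented environment variable; the range below is illustrative, not an OxGrid setting.

```python
import os

# GT2 GRAM/GridFTP callbacks use ephemeral ports by default. Pinning
# them to a firewall-approved range via this standard environment
# variable keeps the required firewall holes small and predictable.
os.environ["GLOBUS_TCP_PORT_RANGE"] = "50000,50100"
```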

Institutional Connections
- Already connecting the Reading and Oxford campus grids together
- Have an agreement to connect CamGrid environment 2 with OxGrid
- In both cases their Condor resources were peered with internal resources in Oxford

Joining a national infrastructure?
- Depends on the integration method: it mustn't interfere with resource brokering, accounting, etc.
- Who has responsibility for actually making the decision?
  - If we take the whole grid, it is a university decision
  - If we only take individual systems, it is up to the system owners

Joining a national infrastructure?
Benefits:
- More resources for users
- Gain in middleware development effort and experience
Disadvantages:
- Legacy systems are often part of local grids
- Mechanisms for user management, accounting, etc. may not be compatible
- Extra effort/hardware is required if significantly different software systems are involved

Conclusions
- Demonstrating interoperability between university and national resources
- Led by research needs rather than by institutions
- Working to engage more users and systems, resulting in a very heterogeneous system
- National/international infrastructures must accept that local infrastructures will only join if their requirements are not intrusive