GRID COMPUTING ACTIVITIES AT BARC
ALHAD G. APTE, BARC
2nd GARUDA PARTNERS MEET ON 15th & 16th SEPT. 2006

Computing Grid at BARC
A Computing Grid system has been set up as a test-bed using existing Grid technology components developed at LCG.
[Diagram: grid-enabled HPC clusters, a grid-enabled visual data server and web services (AFS, PBS, Globus fabric) interconnected over a 100 Mbps fibre visual area network.]
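As a hedged sketch of how such a Globus/PBS test-bed can be exercised from a client machine, the snippet below runs a trivial command through a Globus gatekeeper. The gatekeeper contact string is a hypothetical example, not BARC's actual configuration, and globus-job-run is the standard Globus 2.x-era client.

```python
# Minimal sketch: run one command on a PBS cluster through a Globus gatekeeper.
# The contact string below is hypothetical; verify it against the local setup.
import subprocess

GATEKEEPER = "hpcgrid.example.org/jobmanager-pbs"  # hypothetical contact string

def run_remote(command):
    """Execute a single command on the remote cluster via the gatekeeper."""
    result = subprocess.run(
        ["globus-job-run", GATEKEEPER, command],
        capture_output=True, text=True, check=True,
    )
    return result.stdout.strip()

if __name__ == "__main__":
    print(run_remote("/bin/hostname"))  # expect the name of a cluster node
```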

Anupam-Ameya: 512 processors (1.73 Teraflops)

DAE Grid
- BHABHA ATOMIC RESEARCH CENTRE, MUMBAI
- INDIRA GANDHI CENTRE FOR ATOMIC RESEARCH, KALPAKKAM
- RAJA RAMANNA CENTRE FOR ADVANCED TECHNOLOGY, INDORE
- VARIABLE ENERGY CYCLOTRON CENTRE, KOLKATA

Motivations to enter Grid technology
- Still-evolving Grid technology
- Recent availability of high bandwidth at affordable costs
- Mature web technologies
- Wide-scale global Grid initiatives
- Expertise developed through the DAE-CERN collaboration

LHC Computing
- The LHC (Large Hadron Collider) will begin taking data in 2006-2007 at CERN.
- Data rates per experiment of >100 MBytes/sec.
- >1 PByte/year of storage for raw data per experiment.
- The computational problem is so large that it cannot be solved by a single computer centre.
- World-wide collaboration and analysis: it is desirable to share computing and analysis throughout the world.
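A quick back-of-the-envelope check of these figures, assuming roughly 10^7 seconds of data taking per year (an assumption, not a number from the slide):

```python
# Rough consistency check of the data-volume figures quoted above.
RATE_MB_PER_S = 100            # >100 MBytes/sec per experiment (from the slide)
LIVE_SECONDS_PER_YEAR = 1e7    # assumption: ~1/3 of a calendar year of data taking

raw_data_pb = RATE_MB_PER_S * LIVE_SECONDS_PER_YEAR / 1e9  # MB -> PB
print(f"Raw data per experiment: ~{raw_data_pb:.1f} PB/year")
# -> ~1.0 PB/year, consistent with the ">1 PByte/year" figure
```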

Data Grids for HEP (image courtesy Harvey Newman, Caltech)
[Diagram of the tiered LHC computing model:
- Detector to Online System at ~PBytes/sec; there is a bunch crossing every 25 nsec and ~100 triggers per second, each triggered event being ~1 MByte in size.
- Online System to the Tier 0 Offline Processor Farm at the CERN Computer Centre (~20 TIPS; 1 TIPS is approximately 25,000 SpecInt95 equivalents) at ~100 MBytes/sec.
- Tier 0 to Tier 1 Regional Centres (France, Germany, Italy, FermiLab ~4 TIPS) at ~622 Mbits/sec or air freight (deprecated).
- Tier 1 to Tier 2 Centres (~1 TIPS each, e.g. Caltech) at ~622 Mbits/sec.
- Tier 2 to Tier 3 institute servers with physics data caches (~0.25 TIPS) at ~622 Mbits/sec, and on to Tier 4 physicist workstations at ~1 MBytes/sec.
- Physicists work on analysis channels; each institute will have ~10 physicists working on one or more channels, and data for these channels should be cached by the institute server.]

SHIVA: a Problem Tracking System
Features: a fully web-based system providing
- Tracking: tracking reported bugs, defects, feature requests, etc.
- Assignment: automatic routing and notification to support staff to get issues resolved
- Communication: capturing discussion and sharing knowledge
- Enforcement: automatic reminders according to the severity of the issues
- Accountability: history and logs
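SHIVA itself is a web application; the sketch below is only a hypothetical Python illustration of two of the features listed above (automatic routing and severity-based reminders), not SHIVA's actual code or data model. The routing table, group names and reminder intervals are invented for illustration.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta

# Hypothetical routing table: issue category -> support group
ROUTING = {"cluster": "hpc-admins", "storage": "storage-team", "grid": "grid-ops"}
# Hypothetical reminder intervals by severity
REMINDER_AFTER = {
    "critical": timedelta(hours=4),
    "major": timedelta(days=1),
    "minor": timedelta(days=7),
}

@dataclass
class Issue:
    title: str
    category: str
    severity: str
    opened: datetime = field(default_factory=datetime.utcnow)
    assignee: str = ""
    history: list = field(default_factory=list)

    def route(self):
        """Assign the issue to a support group and log it (Assignment feature)."""
        self.assignee = ROUTING.get(self.category, "helpdesk")
        self.history.append((datetime.utcnow(), f"assigned to {self.assignee}"))

    def reminder_due(self, now=None):
        """True if the issue is older than its severity allows (Enforcement feature)."""
        now = now or datetime.utcnow()
        return now - self.opened > REMINDER_AFTER.get(self.severity, timedelta(days=7))

if __name__ == "__main__":
    issue = Issue("PBS queue stuck", category="cluster", severity="critical")
    issue.route()
    print(issue.assignee, issue.reminder_due())
```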

SHIVA Screenshots User Home Page

LEMON
- LEMON is a system designed to monitor performance metrics, exceptions and status information of extremely large clusters.
- At CERN it monitors ~2000 nodes in ~70 clusters with ~150 metrics per host, producing ~1 GB of data; it is estimated to scale up to 10,000 nodes.
- A variety of web-based views of the monitored data are available for sysadmins, managers and users.
- A highly modular architecture allows the integration of user-developed sensors for monitoring site-specific metrics.
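LEMON sensors are plugins that speak the monitoring agent's own protocol; the snippet below is only a hypothetical sketch of the kind of site-specific values a user-developed sensor might sample, not the real LEMON sensor interface.

```python
# Hypothetical metric sampler (Unix-only). Not the LEMON API.
import os
import time

def sample_load_metric():
    """Return (metric_name, timestamp, value) for the 1-minute load average."""
    load1, _, _ = os.getloadavg()
    return ("loadavg1", int(time.time()), load1)

def sample_disk_metric(path="/"):
    """Return the fraction of used space on the given filesystem."""
    st = os.statvfs(path)
    used = 1.0 - st.f_bavail / st.f_blocks
    return ("disk_used_fraction", int(time.time()), round(used, 3))

if __name__ == "__main__":
    for metric in (sample_load_metric(), sample_disk_metric()):
        print(metric)  # a real sensor would hand these samples to the agent
```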

LEMON architecture

QUATTOR
- Quattor is a tool suite providing automated installation, configuration and management of clusters and farms.
- Highly suitable for installing, configuring and managing Grid computing clusters correctly and automatically.
- At CERN it is currently used to auto-manage >2000 nodes with heterogeneous hardware and software applications.
- Centrally configurable and reproducible installations, with run-time management of functional and security updates to maximise availability.
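Quattor's real configuration language is Pan, and its NCM components and SPMA package manager do the actual work; the sketch below only illustrates the underlying reconcile-to-a-declared-state idea in Python, with assumed package and service names.

```python
# Illustrative only: compare a declared node profile against the RPM database
# and report what a configuration manager would change. Not Quattor code.
import subprocess

# Hypothetical desired state for one managed node
DESIRED_PACKAGES = {"openssh-server", "lemon-agent"}   # assumed package names
DESIRED_SERVICES = {"sshd": "running"}

def installed_packages():
    """Query the local RPM database for installed package names."""
    out = subprocess.run(
        ["rpm", "-qa", "--qf", "%{NAME}\n"],
        capture_output=True, text=True, check=True,
    ).stdout
    return set(out.split())

def reconcile():
    """Report the changes needed to reach the declared state."""
    for pkg in sorted(DESIRED_PACKAGES - installed_packages()):
        print(f"would install: {pkg}")       # a real manager installs here
    for svc, state in DESIRED_SERVICES.items():
        print(f"would ensure service {svc} is {state}")

if __name__ == "__main__":
    reconcile()
```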

QUATTOR architecture
[Diagram: the Configuration Database (CDB) is fed through CLI, GUI, scripts and SOAP, stores profiles in SQL and XML backends, and serves XML configuration profiles over HTTP; software servers host a Software Repository serving RPMs over HTTP. On each managed node, the Node Configuration Manager (NCM) runs configuration components (CompA, CompB, CompC) for the node's services, the Software Package Manager (SPMA) installs RPMs/PKGs on top of the base OS, and an Install Manager/system installer provisions the node via HTTP/PXE from the install server.]

DAE Grid: resource sharing and coordinated problem solving across dynamic, multiple R&D units (4 Mbps links)
- CAT: archival storage
- VECC: real-time data collection
- BARC: computing with shared controls
- IGCAR: wide-area data dissemination

ANUNET in BARC
[Diagram: the BARC, CAT, VECC and IGCAR routers are interconnected over ANUNET. In the DMZ behind a NAT/firewall sit the CA, VOMS, File Catalog, UI (Grid Portal), BDII (resource directory), MyProxy server, Resource Broker and MON box (e.g. R-GMA); the cluster network hosts the Gatekeeper (CE), the Storage Element (SE) and the Worker Nodes, connected to the unit intranet.]

Information flow
[Diagram: at each site the Gatekeeper fronts the Worker Nodes (job execution) and publishes resource information (number of CPUs, memory, jobs running, jobs pending, etc.) through the Information Provider/Service (BDII), which the Resource Broker queries over the network.]
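Because the BDII is an LDAP server, the published attributes can also be inspected directly. The sketch below queries a top-level BDII for per-CE CPU and job counts; the host name is hypothetical, and port 2170, base DN o=grid and the Glue 1.x attribute names are the conventional LCG defaults, which may differ in a given deployment.

```python
# Query a BDII (an LDAP server) for the matchmaking attributes of each CE.
import subprocess

BDII_HOST = "bdii.example.org"  # hypothetical host name

def query_bdii(ldap_filter, attributes):
    """Run an anonymous ldapsearch against the BDII and return raw LDIF text."""
    cmd = [
        "ldapsearch", "-x", "-LLL",
        "-H", f"ldap://{BDII_HOST}:2170",   # 2170 is the usual BDII port
        "-b", "o=grid",                     # conventional LCG base DN
        ldap_filter, *attributes,
    ]
    return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout

if __name__ == "__main__":
    print(query_bdii(
        "(objectClass=GlueCE)",
        ["GlueCEUniqueID", "GlueCEInfoTotalCPUs",
         "GlueCEStateRunningJobs", "GlueCEStateWaitingJobs"],
    ))
```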

Job and data flow
[Diagram: the same layout with file storage added. The Resource Broker matches jobs to sites using the Information Service and dispatches them over the network through each site's Gatekeeper to its Worker Nodes (job execution), whose resource details (number of CPUs, memory, jobs running, jobs pending, etc.) are published; file storage at the sites holds the jobs' data.]

Grid Setup / Services
- UI: User Interface, the interface for using the Grid
- BDII: Information System
- RB: Resource Broker
- MyProxy server: proxy renewal
- CE: Computing Element (one per site)
- SE: Storage Element (one per site)
- WN: Worker Nodes (several per site; shown for Site 1 and Site 2)
- Certifying Authority: issues certificates
- VOMS: Virtual Organization Membership Server
- LFC: File Catalog
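As a hedged end-to-end illustration of these services working together, the sketch below creates a VOMS proxy on the UI, submits a simple JDL job through the Resource Broker and checks its status. The VO name is an assumption, and the command names (voms-proxy-init, edg-job-submit, edg-job-status, edg-job-get-output) are those of the LCG-2-era CLI; exact names, options and output formats vary between middleware releases.

```python
# Sketch of the UI -> RB -> CE workflow using LCG-2-era command-line tools.
import subprocess
import tempfile
import textwrap

JDL = textwrap.dedent("""\
    Executable    = "/bin/hostname";
    StdOutput     = "std.out";
    StdError      = "std.err";
    OutputSandbox = {"std.out", "std.err"};
""")

def submit():
    with tempfile.NamedTemporaryFile("w", suffix=".jdl", delete=False) as f:
        f.write(JDL)
        jdl_path = f.name
    # 1. Authenticate: short-lived proxy with VO membership ("dae" is assumed).
    subprocess.run(["voms-proxy-init", "--voms", "dae"], check=True)
    # 2. Submit through the Resource Broker; its output contains the job id URL.
    out = subprocess.run(["edg-job-submit", jdl_path],
                         capture_output=True, text=True, check=True).stdout
    job_id = next(tok for tok in out.split() if tok.startswith("https://"))
    # 3. Poll the status; once the job is Done, fetch the output sandbox.
    subprocess.run(["edg-job-status", job_id], check=True)
    subprocess.run(["edg-job-get-output", job_id], check=True)

if __name__ == "__main__":
    submit()
```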

BARC Grid deployment
[Diagram: central services comprise the Certifying Authority, VOMS, top-level BDII, LFC File Catalogue, Resource Broker (matchmaking, job submission), MyProxy server and a GridICE server. Each of the two sites runs a User Interface (command-line interface, certificates), a site BDII/GRIS with information providers, a Gatekeeper/Computing Element with PBS and an FMON agent, an FMON server, and a Storage Element with GridFTP and RFIO and an FMON agent. Site 1 has 32 worker nodes (PBS clients); Site 2 has 10 worker nodes.]
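To complete the picture, data reaches a Storage Element over GridFTP, the transport named in the diagram. The sketch below copies a local file to an SE with globus-url-copy; the SE host and namespace path are hypothetical, and a valid grid proxy (as in the submission sketch above) is assumed to already exist.

```python
# Copy a local file to a Storage Element's GridFTP door (hypothetical host/path).
import subprocess

def copy_to_se(local_path,
               se_host="se01.example.org",
               remote_dir="/data/dae/"):
    """Transfer local_path to the SE over GridFTP using globus-url-copy."""
    src = f"file://{local_path}"                 # local_path must be absolute
    dst = f"gsiftp://{se_host}{remote_dir}"      # trailing slash: copy into dir
    subprocess.run(["globus-url-copy", src, dst], check=True)

if __name__ == "__main__":
    copy_to_se("/tmp/results.tar.gz")
```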

GRID APPLICATIONS
- HIGH PERFORMANCE COMPUTING
- ON-LINE STORAGE
- DATA SEARCH / DATABASES
- APPLICATION-BASED VIRTUAL ORGANISATIONS
- DATA ACQUISITION
- SIMULATION
- VISUALISATION

THANK YOU