Introduction to Grid Computing. Health Grid 2008, University of Chicago Gleacher Center, Chicago, IL, USA, June 2, 2008


Introduction to Grid Computing Tutorial Outline I. Motivation and Grid Architecture II. Grid Examples from Life Sciences III. Grid Security IV. Job Management: Running Applications V. Data Management VI. Open Science Grid and TeraGrid VII. Workflow on the Grid VIII. Next steps: Learning more, getting started

Scaling up Science: Biological Research Citation Network Analysis, 1975-2002. Work of James Evans, University of Chicago, Department of Sociology.

Scaling up the analysis: query and analysis of 25+ million citations. Work started on desktop workstations, and queries grew to month-long duration. With data distributed across the U of Chicago/OSG TeraPort cluster, 50 (faster) CPUs gave a 100X speedup, so far more methods and hypotheses can be tested. Higher throughput and capacity enable deeper analysis and broader community access.

What makes this possible? High performance computing: once the domain of costly, specialized supercomputers (many architectures, and programming was difficult), it is now unified around a smaller number of popular models, notably message passing. Tightly coupled computing (message passing and multicore) passes data via memory and/or messages. High throughput computing: running lots of ordinary programs in parallel. Loosely coupled computing passes data via file exchange, is highly accessible to scientists (it needs little programming expertise), and scripting enables diverse execution models: supercomputing via commodity clusters.

Computing clusters have commoditized supercomputing. A typical cluster has a cluster-management frontend, I/O servers (typically RAID fileservers), a few head nodes, gatekeepers and other service nodes, tape backup robots, disk arrays, and lots of worker nodes.

Grids provide global resources to enable e-science: grids across the world can share resources based on uniform, open protocol standards and common middleware software stacks.

What is a Grid? A Grid is a system that: coordinates resources that are not subject to centralized control; uses standard, open, general-purpose protocols and interfaces; and delivers non-trivial qualities of service.

Grids are composed of distributed clusters: grid clients (application and user interface) speak grid protocols through grid middleware to dedicated and shared compute clusters with grid storage, to resource, workflow and data catalogs, and to gateways to campus and other grids. They also include a set of services provided by middleware toolkits and protocols: security to control access and protect communication; a directory to locate grid sites and services; a uniform interface to computing sites; a facility to maintain and schedule queues of work; fast and secure data set movers; and directories to track where datasets live.

Initial Grid driver: High Energy Physics. The tiered LHC computing model: the online system streams ~100 MBytes/sec (from ~PBytes/sec of detector data) to the offline processor farm (~20 TIPS) and the CERN Computer Centre, which feeds regional centres (France, Germany, Italy, FermiLab at ~4 TIPS, Caltech at ~1 TIPS) over ~622 Mbits/sec links or air freight (deprecated), and these in turn feed Tier2 centres (~1 TIPS each) and institutes (~0.25 TIPS). 1 TIPS is approximately 25,000 SpecInt95 equivalents; there is a bunch crossing every 25 nsecs, there are 100 triggers per second, and each triggered event is ~1 MByte in size. Physicists work on analysis channels; each institute will have ~10 physicists working on one or more channels, and data for these channels should be cached by the institute server and served to physicist workstations at ~1 MBytes/sec. Image courtesy Harvey Newman, Caltech.

Grids can process vast datasets. Many HEP and astronomy experiments consist of: large datasets as inputs (find datasets); transformations which work on the input datasets (process); and output datasets (store and publish). The emphasis is on the sharing of the large datasets; programs, when independent, can be parallelized. Example: a mosaic of M42 created on TeraGrid with the Montage workflow of ~1200 data-transfer and compute jobs in 7 levels (NVO, NASA, ISI/Pegasus; Deelman et al.).

Open Science Grid (OSG) provides shared computing resources for a broad set of disciplines. It is a consortium of universities and national labs, growing and supporting a sustainable grid infrastructure for science. OSG focuses on general services, operations, and end-to-end performance; it is composed of a large number (>75 and growing) of shared computing facilities, or sites, and uses advanced networking from all available high-performance research networks. http://www.opensciencegrid.org

OSG today: 70 sites (20,000 CPUs) and growing; 400 to >1000 concurrent jobs; many applications plus CS experiments, including long-running production operations; a diverse job mix. www.opensciencegrid.org

TeraGrid provides vast resources via a number of huge computing facilities.

Virtual Organizations (VOs) are groups of organizations that use the Grid to share resources for specific purposes. A VO supports a single community, deploys compatible technology, and agrees on working policies (security policies are the difficult part). VOs deploy different network-accessible services: grid information, grid resource brokering, grid monitoring, and grid accounting.

Grid Software: Globus and Condor. Condor provides both client and server scheduling; Condor-G is an agent to queue, schedule and manage work submission. The Globus Toolkit (GT) provides core middleware: client tools which you can use from a command line; APIs (scripting languages, C, C++, Java, ...) to build your own tools or to use directly from applications; and web service interfaces. Higher-level tools are built from these basic components, e.g. Reliable File Transfer (RFT).
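To make the Condor-G role concrete, here is a minimal sketch of how a job might be described and queued; it is not from the original tutorial, the gatekeeper host name and file names are placeholders, and the exact grid-universe syntax depends on the installed Condor version:

```
# Hypothetical Condor-G submission (host name and file names are placeholders).
cat > myjob.sub <<'EOF'
universe      = grid
grid_resource = gt2 gatekeeper.example.edu/jobmanager-pbs
executable    = my_analysis
arguments     = input.dat
output        = job.out
error         = job.err
log           = job.log
queue
EOF
condor_submit myjob.sub   # hand the job to Condor-G
condor_q                  # watch it being queued, scheduled and managed
```

Condor-G then speaks the GRAM protocol to the remote gatekeeper, which forwards the job to that site's local scheduler.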

The grid software stack, from top to bottom: grid applications and portals; workflow management; the grid client (Condor-G); job management, data management, and information (config & status) services; the Grid Security Infrastructure; core Globus services; and network protocols (TCP/IP).

Grid architecture is evolving to a service-oriented approach, but this is beyond our workshop's scope; see "Service-Oriented Science" by Ian Foster. Service-oriented applications wrap applications as services and compose applications into workflows; a service-oriented Grid infrastructure provisions physical resources to support application workloads, with users invoking application services through composed workflows. ("The Many Faces of IT as Service", Foster and Tuecke, 2005.)

Introduction to Grid Computing Tutorial Outline I. Motivation and Grid Architecture II. Grid Examples from Life Sciences III. Grid Security IV. Job Management: Running Applications V. Data Management VI. Open Science Grid and TeraGrid VII. Workflow on the Grid VIII. Next steps: Learning more, getting started

How can we leverage grids in biology and health? Many biological processes are statistical in nature, and simulations of such processes parallelize naturally. Performing multiple simulations on the Open Science Grid (OSG) will help to: enhance accuracy (sampling is necessary for calculation of experimental observables); simulate mutants, compare their behavior, and compare with experiments; test several force fields (different force fields may produce different results); describe long-timescale processes (µs and longer); run large numbers of simulations (increasing the probability of observing important events); and test different alternative methods.

Example: OSG CHARMM Application. Project and slides are the work of Ana Damjanovic (JHU, NIH); JHU: Petar Maksimovic, Bertrand Garcia-Moreno; NIH: Tim Miller, Bernard Brooks; OSG: Torre Wenaus and team.

CHARMM and MD simulations. CHARMM is one of the most widely used programs for computational modeling, simulation and analysis of biological (macro)molecules. The widest use of CHARMM is for molecular dynamics (MD) simulations: atoms are described explicitly, and interactions between atoms are described with an empirical force field (electrostatics, van der Waals, bond vibrations, angle bending, dihedrals). Newton's equations are solved to describe the time evolution of the system, with a timestep of 1-2 fs and typical simulation times of 10-100 ns. CHARMM also has a variety of tools for analysis of MD trajectories.

Hydration of the protein interior. Interior water molecules can play key roles in many biochemical processes such as proton transfer or catalysis. The precise location and number of water molecules in protein interiors is not always known from experiments; knowing their location is important for understanding how proteins work, and also for drug design. In staphylococcal nuclease, crystallographic structures obtained at different temperatures disagree in the number of observed water molecules: how many water molecules are in the protein interior? Using traditional HPC resources we performed 10 x 10 ns MD simulations and obtained two conformations, each with a different hydration pattern. Not enough statistics! The plan: use OSG to run lots of simulations with different initial velocities.

Running jobs on the OSG. Manual submission and running of a large number of jobs can be time-consuming and lead to errors, so OSG personnel worked with the scientists to get this scheme running. PanDA, which was developed for ATLAS, is being evolved into OSG-WMS.

Accomplishments with use of OSG (staphylococcal nuclease): 2 initial structures x 2 methods, each with 40 x 2 ns simulations; 160 x 2 ns in total, with a total usage of 120K CPU hours. Interior water molecules can influence protein conformations: different answers are obtained if simulations are started with and without interior water molecules. Longer runs are needed to answer how many water molecules are in the protein. Parallel CHARMM is key to running longer simulations (as the wall-clock time per simulation is reduced); OSG is exploring and testing ways to execute parallel MPI applications.

Plans for the near-term future. Hydration of the interior of staphylococcal nuclease: answer the previous question (80 x 8 ns = 153K CPU hours) and test a new method for conformational search, SGLD (80 x 8 ns = 153K CPU hours). Conformational rearrangements and effects of mutations in the AraC protein: when the sugar arabinose is present, the arm switches gene expression on; when arabinose is absent, gene expression is off. Computational requirements: wild type with and without arabinose (50 x 10 ns), F15V with and without arabinose (50 x 10 ns), total 100 x 10 ns = 240K CPU hours; additional test simulations (100 x 25 x 1 ns), total 2500 x 1 ns = 600K CPU hours. Provide feedback to experiments.

Long-term needs: study conformational rearrangements in other proteins. Conformational rearrangements are at the core of key biochemical processes such as regulation of enzymatic and genetic activity, and understanding such processes is pharmaceutically relevant. These processes usually occur on timescales of µs and longer and are not readily sampled in MD simulations, and experimentally little is known about their mechanisms. This is poorly explored territory and will be hot for the next 10 years. The approach: use brute force, test the performance of different methods, test different force fields, and study effects of mutations to provide feedback to experiments.

Impact on the scientific community. CHARMM on OSG is still in the testing and development stage, so the user pool is small, but the potential is great: there are 1,800 registered CHARMM users. The goals are to bring OSG computing to hundreds of other CHARMM users, give resources to small groups, do more science by harvesting unused OSG cycles, and provide job management software. The recipe used here was developed for CHARMM, but can easily be extended to other MD programs (AMBER, NAMD, GROMACS, ...).

Genome Analysis and Database Update: runs across TeraGrid and OSG, using VDL and Pegasus for workflow and provenance. It scans public DNA and protein databases for new and newly updated genomes of different organisms and runs BLAST, Blocks, and Chisel; 1200 users use the resulting DB. On OSG at the peak it used >600 CPUs and ran 17,000 jobs a week.

PUMA: Analysis of Metabolism. The PUMA Knowledge Base holds information about proteins analyzed against ~2 million gene sequences; the analysis runs on the Grid and involves millions of BLAST, BLOCKS, and other processes. Natalia Maltsev et al., http://compbio.mcs.anl.gov/puma2

Sample engagement: the Kuhlman Lab uses OSG to design proteins that adopt specific three-dimensional structures and bind and regulate target proteins important in cell biology and pathogenesis. These designed proteins are used in experiments with living cells to detect when and where the target proteins are activated in the cells. A senior Rosetta researcher and his team, with little CS/IT expertise and no grid expertise, were quickly up and running with large-scale jobs across OSG, using >250k CPU hours.

Rosetta on OSG: each protein design requires about 5,000 CPU hours, distributed across 1,000 individual compute jobs.

Workflow motivation: an example from neuroscience. Large fMRI datasets (90,000 volumes per study, 100s of studies) and a wide range of analyses: testing and production runs, data mining, and ensemble and parameter studies.

A typical workflow pattern in fMRI image analysis runs many filtering apps: a DAG of align_warp, reslice, softmean, slicer, and convert jobs turns the input volumes (3a, 4a, 5a, 6a and a reference) into atlas images. Workflow courtesy James Dobson, Dartmouth Brain Imaging Center.

Introduction to Grid Computing Tutorial Outline I. Motivation and Grid Architecture II. Grid Examples from Life Sciences III. Grid Security IV. Job Management: Running Applications V. Data Management VI. Open Science Grid and TeraGrid VII. Workflow on the Grid VIII. Next steps: Learning more, getting started

Grid security is a crucial component. The problems being solved might be sensitive, and the resources are typically valuable. Resources are located in distinct administrative domains, and each resource has its own policies, procedures, security mechanisms, etc. The implementation must be broadly available and applicable: standard, well-tested, well-understood protocols, integrated with a wide variety of tools.

Grid Security Infrastructure (GSI) provides secure communications for all the higher-level grid services: secure authentication and authorization. Authentication ensures you are whom you claim to be (ID card, fingerprint, passport, username/password); authorization controls what you are permitted to do (run a job, read or write a file). GSI provides uniform credentials and single sign-on: a user authenticates once and can then perform many tasks.
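As a hedged illustration of single sign-on (not part of the original slides), a user with a grid certificate typically creates a short-lived proxy credential once and then runs many grid commands against it; lifetimes and site policies vary:

```
# Hypothetical single sign-on session; details depend on the site setup.
grid-proxy-init -valid 12:00   # create a 12-hour proxy from your X.509 certificate (prompts for the pass phrase once)
grid-proxy-info                # inspect the proxy: subject, type, time left
# ... submit jobs and move data without re-entering your pass phrase ...
grid-proxy-destroy             # discard the proxy when finished
```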

Introduction to Grid Computing Tutorial Outline I. Motivation and Grid Architecture II. Grid Examples from Life Sciences III. Grid Security IV. Job Management: Running Applications V. Data Management VI. Open Science Grid and TeraGrid VII. Workflow on the Grid VIII. Next steps: Learning more, getting started

Local Resource Manager (LRM): a batch scheduler for running jobs on a computing cluster. Popular LRMs include PBS (Portable Batch System), LSF (Load Sharing Facility), SGE (Sun Grid Engine), and Condor (originally for cycle scavenging, Condor has evolved into a comprehensive system for managing computing). LRMs execute on the cluster's head node. The simplest option allows you to fork jobs quickly: it runs on the head node (gatekeeper) for fast utility functions, with no queuing (though queuing is emerging to throttle heavy loads). In GRAM, each LRM is handled with a job manager.
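For context, here is a minimal sketch (not from the tutorial) of what a job handed to a PBS-style LRM looks like; the job name, resources, and queue are placeholders, and directives differ between LRMs:

```
#!/bin/bash
#PBS -N my_analysis            # job name (placeholder)
#PBS -l nodes=1:ppn=1          # one core on one node
#PBS -l walltime=02:00:00      # two-hour wall-clock limit
#PBS -q default                # queue name is site-specific
cd "$PBS_O_WORKDIR"            # run from the submission directory
./my_analysis input.dat > job.out
```

Locally this would be submitted with qsub; on the grid, GRAM generates the equivalent submission for whichever LRM the site runs.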

GRAM (Globus Resource Allocation Manager): a client in Organization A runs "globusrun myjob"; the Globus GRAM protocol carries the request to the gatekeeper and job manager in Organization B, which submits the job to the local LRM.

GRAM provides a uniform interface to diverse cluster schedulers: the same user request is dispatched via GRAM to Condor at Site A, PBS at Site B, LSF at Site C, or a plain UNIX fork() at Site D, across the VOs that make up the grid.
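A hedged sketch of that uniform interface (the gatekeeper host names and job manager names below are placeholders): the same GT2-era client command can target different sites, and GRAM translates it into whatever the local scheduler expects.

```
# Hypothetical examples; only the contact string changes between sites.
globus-job-run gatekeeper.siteA.edu/jobmanager-condor /bin/hostname
globus-job-run gatekeeper.siteB.edu/jobmanager-pbs    /bin/hostname
globus-job-run gatekeeper.siteD.edu/jobmanager-fork   /bin/hostname
```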

Condor-G: grid job submission manager. Condor-G in Organization A holds a queue of jobs (myjob1 ... myjob5) and uses the Globus GRAM protocol to hand them to the gatekeeper and job manager in Organization B, which submits them to the local LRM.

Introduction to Grid Computing Tutorial Outline I. Motivation and Grid Architecture II. Grid Examples from Life Sciences III. Grid Security IV. Job Management: Running Applications V. Data Management VI. Open Science Grid and TeraGrid VII. Workflow on the Grid VIII. Next steps: Learning more, getting started

Data management services provide the mechanisms to find, move and share data. GridFTP: fast, flexible, secure, ubiquitous data transport, often embedded in higher-level services. RFT: a reliable file transfer service using GridFTP. Replica Location Service (RLS): tracks multiple copies of data for speed and reliability. Emerging services integrate replication and transport, and metadata management services are evolving.

GridFTP is secure, reliable and fast. Security through GSI: authentication and authorization, and it can also provide encryption. Reliability by restarting failed transfers. Fast and scalable: it handles huge files (>4 GB, 64-bit sizes), can set TCP buffers for optimal performance, and supports parallel transfers and striping (multiple endpoints). Client tools: globus-url-copy, uberftp, and custom clients.

GridFTP: extensions to the well-known FTP (File Transfer Protocol): strong authentication and encryption via Globus GSI; multiple, parallel data channels; third-party transfers; tunable network and I/O parameters; server-side processing and command pipelining.

Basic definitions. Control channel: the TCP link over which commands and responses flow; low bandwidth, encrypted and integrity-protected by default. Data channel: the communication link(s) over which the actual data of interest flows; high bandwidth, authenticated by default, with encryption and integrity protection optional.

A file transfer with GridFTP: the control channel can go either way, depending on which end is the client and which is the server, but the data channel still flows in the same direction between Site A and Site B.

Third-party transfer: the controller can be separate from the source and destination, which is useful for moving data from storage to compute. The client holds control channels to the servers at Site A and Site B, while the data channel flows directly between the two servers.

Going fast with parallel streams: use several data channels between the server at Site A and Site B, managed over a single control channel.

Going fast with striped transfers: use several servers, backed by shared storage, at each end, so that pieces of the data move between Site A and Site B in parallel.

GridFTP examples:
globus-url-copy file:///home/yourlogin/dataex/myfile gsiftp://osg-edu.cs.wisc.edu/nfs/osgedu/yourlogin/ex1
globus-url-copy gsiftp://osg-edu.cs.wisc.edu/nfs/osgedu/yourlogin/ex2 gsiftp://tp-osg.ci.uchicago.edu/yourlogin/ex3
The first command copies a local file to a remote server; the second is a third-party transfer between two servers.
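For large files, the parallel-stream and TCP-buffer options described on the previous slides are typically added on the command line. This is a hedged sketch reusing the hosts and paths above; flag support varies with the installed Globus version:

```
# Hypothetical tuned transfer: verbose performance output, 4 parallel data streams,
# and a 2 MB TCP buffer for a long, fat network path.
globus-url-copy -vb -p 4 -tcp-bs 2097152 \
    gsiftp://osg-edu.cs.wisc.edu/nfs/osgedu/yourlogin/ex2 \
    gsiftp://tp-osg.ci.uchicago.edu/yourlogin/ex3
```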

Grids replicate data files for faster access: effective use of the grid resources and more parallelism. Each logical file can have multiple physical copies, which avoids single points of failure. Replication can be manual or automatic; automatic replication considers the demand for a file, transfer bandwidth, etc.

File catalogues tell you where the data is. File catalog services include the Replica Location Service (RLS), PhEDEx, and RefDB/PupDB. Requirements: abstract the logical file name (LFN) for a physical file, maintain the mappings between the LFNs and the PFNs (physical file names), and maintain the location information of a file.

RLS, the Replica Location Service, maps logical filenames to physical filenames. A Logical Filename (LFN) is a symbolic name for a file whose form is up to the application; it does not refer to the file's location (the host, or where on the host). A Physical Filename (PFN) refers to a file on some filesystem somewhere; gsiftp:// URLs are often used to specify PFNs. RLS has two catalogs: the Local Replica Catalog (LRC) and the Replica Location Index (RLI).

The Local Replica Catalog (LRC) stores mappings from LFNs to PFNs. Interaction: Q: Where can I get the file experiment_result_1? A: You can get it from gsiftp://gridlab1.ci.uchicago.edu/home/benc/r.txt. It is undesirable to have a single LRC for the whole grid: lots of data, and a single point of failure.

The Replica Location Index (RLI) stores mappings from LFNs to LRCs. Interaction: Q: Where can I find the file experiment_result_1? A: You can get more info from the LRC at gridlab1 (then go and ask that LRC for more info). Failure of an RLI or LRC does not break RLS, and because the RLI stores a reduced set of information it can handle many more mappings.

Globus RLS example. Site A (rls://servera:39281) holds file1 and file2; its LRC maps file1 to gsiftp://servera/file1 and file2 to gsiftp://servera/file2, while its RLI records that file3 and file4 are known to rls://serverb. Site B (rls://serverb:39281) holds file3 and file4; its LRC maps file3 to gsiftp://serverb/file3 and file4 to gsiftp://serverb/file4, while its RLI records that file1 and file2 are known to rls://servera.
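A hedged sketch of how those mappings might be registered and queried with the RLS command-line client; the globus-rls-cli subcommand syntax here is an assumption recalled from the Globus Toolkit documentation and should be checked against your installed version (servera and serverb are the placeholder hosts from the example above):

```
# Hypothetical RLS session; verify the subcommands against your RLS client docs.
globus-rls-cli create file1 gsiftp://servera/file1 rls://servera:39281   # register an LFN->PFN mapping in site A's LRC
globus-rls-cli query lrc lfn file1 rls://servera:39281                   # ask site A's LRC where file1 lives
globus-rls-cli query rli lfn file3 rls://servera:39281                   # ask site A's RLI which LRC knows about file3
```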

The LIGO Data Grid integrates GridFTP and RLS. LIGO (the Laser Interferometer Gravitational-Wave Observatory) and partner sites (Birmingham, Cardiff, AEI/Golm) replicate >1 Terabyte/day to 8 sites, with >40 million replicas so far and an MTBF of 1 month.

RFT (Reliable File Transfer) provides recovery for additional failure possibilities beyond those covered by GridFTP. It uses a database to track transfer requests and their status: users start transfers, then RFT takes over. It offers integrated automatic failure recovery (network-level failures, system-level failures, etc.) and utilizes the restart markers generated by GridFTP.

Reliable file transfer: the client hands the request to RFT, which holds the control channels to the servers at Site A and Site B while the data channel flows directly between them.

Storage management services (such as Storage Resource Managers, SRM) round out data management with: space reservations; dynamic space management; pinning of files in spaces; support for an abstract file name (the Site URL) and temporary assignment of file names for transfer (the Transfer URL); directory management and authorization; transfer protocol negotiation; support for peer-to-peer requests; support for asynchronous multi-file requests; support for abort, suspend, and resume operations; and non-interference with local policies.