Grid Infrastructure For Collaborative High Performance Scientific Computing


Computing For Nation Development, February 08-09, 2008, Bharati Vidyapeeth's Institute of Computer Applications and Management, New Delhi

R. Jehadeesan
Scientific Officer (E), Computer Division, Indira Gandhi Centre for Atomic Research, Kalpakkam
jeha@igcar.gov.in

ABSTRACT

Grid computing is a form of distributed computing that involves coordinating and sharing computing, application, data, storage, or network resources across geographically isolated sites. A Grid is a collective framework of nodes, each contributing a combination of resources to Grid users. A computational Grid is an infrastructure that provides dependable, consistent and inexpensive high-end computing capability. This paper describes the Grid infrastructure deployed for enabling high-performance computing by sharing heterogeneous computational resources spread across different units of an R&D establishment. It gives a technical overview of the Grid architecture, technology and standards, and explains in detail the services implemented for successful deployment of the computing Grid. The Grid middleware, its fundamental components and the various functionalities offered are covered. Grid-enabled high-performance scientific and engineering applications exploit the potential of the computational Grid, resulting in increased productivity, reduced computation time and a better price-to-performance ratio for computing resources.

KEYWORDS

Grid computing, Grid middleware, Grid architecture, Grid services, Grid security, Resource management, Workload management, Data management, Resource broker, Computing element, Worker node, Storage element, User interface, Virtual organization, Resource sharing, Grid job workflow, Grid monitoring, DAEGrid, High performance computing.

I. INTRODUCTION

Grid computing is an innovative branch of distributed computing that evolved with the objective of providing a coordinated, heterogeneous resource-sharing computing environment. Today's high-performance scientific and engineering applications demand large-scale numerical computation, data analysis and collaboration at various levels. Clusters and Grids are increasingly widespread solutions to large-scale computing challenges. A Grid is a distributed computing infrastructure with shared heterogeneous services and resources, accessible by users and applications to solve complex computational problems and to provide access to huge data storage. Grid computing focuses on sharing resources among geographically distributed sites in an organized and uniform manner, and on the development of pioneering, high-performance-oriented applications. It deals with the unique challenges of security, scalability and manageability in distributed computing. The immense benefits of Grid technology include scalable high-end computing capability, economical computing cost and efficiency. An enterprise Grid infrastructure that can be shared by geographically disparate groups across the organization creates a more productive enterprise environment with efficient use of computing resources. Exploitation of underutilized resources, balanced resource utilization and the potential for massive parallel processing capacity are the attractive features of Grid computing. In an R&D organization involved in scientific and engineering research activities, there are ever-growing requirements for computational capability and data storage to solve challenging scientific and engineering problems.

Grid computing addresses this challenge by deploying powerful clusters locally and interconnecting them through wide-area networks. This paper details the Grid architecture, technology and Grid services provided by the infrastructure deployed to share computing power, applications, and data and storage resources across four organizational units.

II. DAEGrid

The Department of Atomic Energy (DAE) has a number of R&D organizations working in the field of nuclear science and technology, carrying out research and development activities at the frontiers of nuclear physics, nuclear engineering, material science, mechanical engineering, control systems, etc. Some R&D units have supercomputing clusters to solve highly compute-intensive problems in these fields and hold large amounts of data on local storage worth sharing with other units. A need was felt to extend the high-end computing facilities beyond geographical boundaries to meet the requirements of modern research and collaboration in multidisciplinary fields. An intra-DAE Grid network has been set up to provide a scalable, wide-area computing platform which enables sharing of computing and information resources among the constituent units in a secure manner. This Grid network interconnects four major R&D units of DAE over a high-speed fibre-optic network, aggregates the computational resources at the Grid sites and provides them to users for efficient sharing. It enables collaborative research within DAE organizations and facilitates development of Grid-enabled applications in advanced fields of science and technology.

III. GRID ARCHITECTURE

DAEGrid is based on the WLCG/EGEE [7] (Worldwide LHC Computing Grid / Enabling Grids for E-SciencE) model of Grid computing and utilizes the gLite [5] middleware for providing Grid services. The architecture of the computing Grid is shown in Figure 1. The architecture defines the essential Grid services provided by the infrastructure and the set of conforming interfaces needed to manage resources in a single unified Grid environment. The basic functionalities and services that should be available for deploying a computing Grid include Compute Resource Services, Workload Management, Storage/Data Management, Information & Monitoring Services, Virtual Organization & Security Management, and a User Interface. The resource centres which provide the computing infrastructure and resources for the Grid are referred to as Grid sites.

Figure 1. DAE Computing Grid Architecture

The users of a Grid infrastructure are divided into Virtual Organizations (VO). A Virtual Organization is an abstract entity for grouping users, institutions and resources in the same administrative domain. The VO Management service manages VO members and authorizes them to use the resources meant for that VO. The Compute Resource service provides access to the local resource manager or batch system to utilize the computing resources of a Grid site. The Workload Management mechanism manages jobs and provides a global resource management service for the Grid. The Storage and Data Management service provides access to mass data storage resources at a Grid site and takes care of file management activities involving file transfer and catalogues. The Information and Monitoring services provide information about Grid resources and monitor their status. The User Interface service provides a consistent interface to the Grid with a set of client tools used for job submission, resource listing, data management and status monitoring.

IV. GRID MIDDLEWARE AND SERVICES

An essential component of the Grid infrastructure is the Grid middleware, which acts as a service layer between Grid resources and Grid applications/users. It performs the fundamental services involved in the deployment, management and usage of resources, and provides users and applications with a consistent, user-friendly interface. In the recent past, numerous Grid middleware products have emerged, leading to problems of interoperability and standardization; no widely accepted, usable, interoperable standard has yet evolved to meet the expectations of the Grid community. The DAEGrid infrastructure adopts the gLite middleware developed by the EGEE Grid project. It consists of a packaged suite of functional components providing a basic set of Grid services, including job management, information & monitoring, and data management. gLite originated from the contributions of different Grid projects, namely Condor [10], EDG (European Data Grid), Globus [9], VDT [11] (Virtual Data Toolkit) and LCG. The services provided by the middleware can be classified into Site Services and Global Services. Site services pertain to the functionalities provided by the individual sites which form part of the Grid. Global services are the common functionalities utilized by all Grid sites. Building the infrastructure of a Grid site involves deployment of a Computing Element with Worker Nodes, Storage Elements, a User Interface and an Information Service. There can be more than one Computing Element or Storage Element service running at a site, depending on the availability of resources.
The global service elements of the computing Grid include the Workload Management System, VO Management Service, Information & Monitoring Service, and Data Management (File Catalogue & Transfer Service). Some of the global services, like Workload Management, VO Management and File Catalogues, can be deployed at multiple sites based on the Grid users' requirements. The organization of the Grid services provided by the computing Grid infrastructure is shown in Figure 2. The role and features of each middleware component are described below.

Figure 2. Organization of Computing Grid Middleware Services

A. COMPUTING ELEMENT

A Computing Element (CE) is a set of computing resources localized at a site; it is essentially a computing cluster for executing jobs. It comprises a head node called the Grid Gate (GG), which acts as a generic interface to the cluster; a Local Resource Management System (LRMS), or batch system; and a collection of compute nodes called Worker Nodes (WN), the nodes where jobs actually run. The gateway node is responsible for accepting jobs and dispatching them for execution on the WNs via the LRMS. It handles job submission (including staging of required files), cancellation, suspension and resumption, job status enquiry and notification. It also makes use of the logging and bookkeeping services to keep track of jobs during their lifetime. The LRMS is a batch job queuing and scheduling system responsible for managing the local computing resources and executing jobs on them (the WNs). The gLite CE interface supports different LRMS software, namely OpenPBS/PBSPro, LSF, Maui/Torque, BQS and Condor. The Maui/Torque batch system has been configured in the CEs of the DAEGrid sites. The CE also publishes information about the resources available at the site, and their current status, to the Grid Information System.

B. STORAGE ELEMENT

A Storage Element (SE) provides uniform access to data storage resources. A Storage Element may control simple disk servers, large disk arrays or tape-based mass storage systems. Usually each Grid site provides one or more SEs. Storage Elements can support different data access protocols and interfaces. The Storage Resource Manager (SRM) service is used to manage most of the storage resources. The SRM interface defines a set of functions and services that a storage system provides independently of the underlying mass storage implementation. It supports capabilities like transparent file migration from disk to tape, file pinning, space reservation, etc. The dCache interface consists of one or more disk pools and a server presenting the files under a single virtual file-system tree; it is widely employed as a disk buffer front-end to many mass storage systems. In addition, the Disk Pool Manager (DPM) can be used for fairly small SEs with disk-based storage. A GSI-secure FTP protocol (GridFTP) is used for whole-file transfers; it provides secure, fast file transfer to and from an SE. The Remote File Input/Output protocol (RFIO) is used for direct remote access of files stored in the SEs. The gsidcap protocol is the GSI-enabled version of the dCache native access protocol. gLite I/O is a set of POSIX-like I/O services for accessing Grid files via their logical names.

C. USER INTERFACE

The User Interface (UI) is the access point to the computing Grid. Each user has a personal account on this machine, where the user's certificate is also installed. From the UI, a user is authenticated and authorized to use the Grid resources and can access the functionalities offered by the Information, Workload and Data management systems. It provides command-line tools for Grid users to perform the following activities (a sample session is sketched below):

- Listing of resources suitable for execution of a given job
- Finding the status of different resources
- Job submission, job status viewing and job cancellation
- Retrieval of job outputs and job logging information
- File management operations (copy, replicate and delete)

The UI provides a set of commands covering everything from simple jobs to advanced job types.
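For illustration, a typical UI session covering these activities might look like the following sketch. The command names are those documented for the gLite 3 middleware; the VO name daegrid, the host names and the file names are hypothetical placeholders, not values taken from the actual deployment:

    # List the CEs that match the requirements of a job description
    glite-wms-job-list-match -a myjob.jdl

    # Copy a local file to an SE and register it in the catalogue under an LFN
    lcg-cr --vo daegrid -d se01.site1.example \
           -l lfn:/grid/daegrid/user/input.dat file:/home/user/input.dat

    # Replicate the file to a second SE, then delete all of its replicas
    lcg-rep --vo daegrid -d se02.site2.example lfn:/grid/daegrid/user/input.dat
    lcg-del --vo daegrid -a lfn:/grid/daegrid/user/input.dat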
The various types of job submission supported by the UI include single jobs, job collections, checkpointable jobs, parametric jobs, MPI jobs and interactive jobs. A high-level scripting language called the Job Description Language (JDL) is used to describe jobs and their desired characteristics and constraints. The high-level Data Management client tools hide the complexities of the storage implementation and of the transport and access protocols, and enable users to move data in and out of the Grid, replicate files between Storage Elements and interact with the File Catalogue. Low-level User Interface APIs are also available to allow development of Grid-enabled applications. Grid portals provide a user-friendly environment for submission and monitoring of jobs and remove the difficulty of using a complex command-line interface.

D. WORKLOAD MANAGEMENT SYSTEM

The Workload Management System (WMS) is the core service which accepts a user job, determines a site that fulfils the job's resource requirements and submits the job to that site. It dispatches the job to the most appropriate Computing Element in the Grid and provides facilities to manage jobs. It also records the status of jobs and retrieves their output. The WMS is otherwise called the Resource Broker (RB). The user interacts with the WMS/RB using a command-line interface or APIs. The job being submitted is described by the user in JDL. The JDL script defines which executable to run and its command-line arguments, the input files needed, the output files to be generated and the files to be moved to and from the worker node, and in addition states any specific requirements on the CE and the worker node. The process of finding a suitable CE for submitting the job is called match-making. It involves the following steps (a sample JDL description is sketched at the end of this subsection):

- Each CE in the Grid is assigned a rank based on its status information, derived from the number of currently running and queued jobs; the highest rank is assigned to the least loaded CE.
- Among all available CEs, those which fulfil the requirements articulated by the user, and those which are close to the input files specified on the Grid, are selected.
- The CE with the highest rank in the selection is chosen for job dispatch.

The RB interacts with the File Catalogues using the DLI service to locate the Grid input files specified in the JDL script.
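The following is a minimal JDL sketch of the kind described above. The executable and file names are hypothetical; the Requirements and Rank expressions use attributes from the GLUE schema (see Section F), with the rank written so that the CE with the shortest estimated response time scores highest:

    Executable    = "myapp.sh";
    Arguments     = "input.dat";
    StdOutput     = "std.out";
    StdError      = "std.err";
    InputSandbox  = {"myapp.sh", "input.dat"};
    OutputSandbox = {"std.out", "std.err"};
    Requirements  = other.GlueCEPolicyMaxCPUTime > 720;
    Rank          = -other.GlueCEStateEstimatedResponseTime;

Submitting this file with glite-wms-job-submit -a myjob.jdl hands it to the RB, which then performs the match-making steps listed above.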

The Logging and Bookkeeping service (LB), which normally runs on the RB, tracks submitted jobs during their lifetime. It gathers events from the WMS components and the CEs, and records the current status and the complete history of each job. This logging information about submitted jobs can be retrieved via UI commands and is useful in verifying success or analyzing job failure.

E. DATA MANAGEMENT SERVICES

The Data Management Services are used to locate and access data files, to copy files between the UI, CE, WN and SE, and to replicate files between SEs. A variety of data management client tools is provided to upload and download files to and from the Grid, replicate data and interact with the file catalogues.

File Catalogue

The file is the primary unit of data in the Grid. Users and applications generally use Logical File Names (LFN) to refer to files in the Grid, while a file is uniquely identified internally by a Global Unique Identifier (GUID). The Storage URL (SURL) and Transport URL (TURL) contain information about where a physical replica is located and how it can be accessed. LFNs and GUIDs identify files irrespective of their location; the File Catalogue service is used for this purpose. The mappings between LFNs, GUIDs and SURLs are stored in the File Catalogue system, while the actual files are stored in Storage Elements. The LCG File Catalogue (LFC) is the catalogue service in use, and it offers a hierarchical view of the logical file name space. It translates an LFN to a SURL via a GUID and locates the site where the referred file resides. It supports a transactional API called the Data Location Interface (DLI), which provides a generic interface to a file catalogue.
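As an illustrative sketch of this naming scheme (all names hypothetical, following the standard gLite conventions), a single file might be known simultaneously as:

    lfn:/grid/daegrid/user/results/run42.dat               (logical file name)
    guid:38ed3f60-c2be-4f22-ab41-7d8e3c0b92f1              (unique identifier)
    srm://se01.site1.example/daegrid/run42.dat             (storage URL)
    gsiftp://se01.site1.example/storage/daegrid/run42.dat  (transport URL)

The catalogue can be inspected with commands such as lfc-ls /grid/daegrid/user and lcg-lr lfn:/grid/daegrid/user/results/run42.dat, the latter listing all registered replicas of the file.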
File Transfer Service

The File Transfer Service (FTS) helps users carry out reliable file transfer operations across the SEs of Grid sites. FTS is the low-level data movement service which performs asynchronous (batch-mode) reliable file replication from source to destination. It interacts with the SRM interface for dealing with storage resources and manages the underlying data transfer with the GridFTP protocol. It maintains a persistent transfer queue, thus providing reliable data transfer even across communication link interruptions. It does not depend on the File Catalogue for resolving file names, and hence SURLs are used to specify source and destination. The FTS service is not currently implemented in DAEGrid.

F. INFORMATION AND MONITORING SERVICES

The Information Service (IS) provides information about the Grid resources and their status. The published information is used for resource discovery, monitoring and accounting purposes. This information conforms to a common conceptual data model, the GLUE schema (Grid Laboratory for a Uniform Environment), which describes the attributes and values of CEs and SEs and their binding information. The following two information services are used for Grid resource monitoring and discovery.

Monitoring and Discovery Service (MDS)

The MDS is used for resource discovery and to publish resource status. It implements the GLUE schema using an open-source implementation of LDAP (Lightweight Directory Access Protocol), a specialized database optimized for reading, browsing and searching information. No Grid credentials are required to access MDS data, either by users (for reading) or by services (for writing/publishing information). The MDS architecture is based on the Grid Resource Information Server (GRIS) and the Berkeley Database Information Index server (BDII). The GRIS is an LDAP server which runs on the resource (CE or SE) and publishes the relevant static and dynamic information about it. The resource information provided includes the number of CPUs, running jobs and waiting jobs; the amount of memory; OS details; the type of storage; used and available space, etc. The BDII service is another LDAP server, which runs at each site and collects information from the local GRISes. There is also a global or top-level BDII, which is configured to query the site BDIIs at every site and acts as a cache by storing information about the Grid status in its database. It gives the status of the overall Grid resources.

Relational Grid Monitoring Architecture (R-GMA)

R-GMA is used for accounting, monitoring and publication of system-level and user-level information. It is an implementation of the Grid Monitoring Architecture (GMA) and presents a relational view of the collected data. The model is based on a global distributed relational database and supports advanced query operations. R-GMA is an alternative information system to MDS and uses the same GLUE schema. The R-GMA architecture consists of Producers, which provide the information; Consumers, which request the information from Producers; and a Registry, which mediates the communication between the Producers and the Consumers. The Producers and Consumers are services running at each site, which interact with the global Registry service to answer users' queries.

G. VO MANAGEMENT SERVICE

The Grid is organized into Virtual Organizations (VO): dynamic collections of individuals and institutions sharing resources in a flexible, secure and coordinated manner. The Virtual Organization Management Service (VOMS) is used to manage information about the roles and privileges of users within a VO. In order to use the Grid infrastructure, a user should choose a VO and become a member of it by registering some personal data and accepting the usage rules. Membership of a VO grants specific privileges to a user, and a user can belong to more than one VO. The VO must ensure that all its members have provided the necessary information and have accepted the usage rules; the user information is stored in a database maintained by the VO. The short-term proxies which are required for authentication and authorization are annotated with an Attribute Certificate obtained from VOMS. The Attribute Certificate contains information about the VO, group membership and roles.
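As a sketch of what these VOMS attributes look like in practice, a user can create a VOMS proxy and inspect the attached Attribute Certificate with the standard VOMS client tools. The VO name daegrid and the printed values are hypothetical; the command names are those of the standard gLite client:

    voms-proxy-init --voms daegrid    # contacts the VOMS server and embeds the AC
    voms-proxy-info --all             # prints, among other fields:
    # attribute : /daegrid/Role=NULL/Capability=NULL
    # timeleft  : 11:59:48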

A single VOMS server can serve multiple VOs. The VOMS Administrator web interface is used for managing VO membership through a web browser.

V. GRID SECURITY

The Grid middleware employs the Grid Security Infrastructure (GSI) to enable secure authentication and communication over an open network. GSI is based on public key encryption, X.509 certificates and the Secure Sockets Layer (SSL) communication protocol. The authorization of a user on a specific Grid resource is done by VOMS.

Certification Authority (CA)

In order to access Grid resources, a user needs a digital X.509 certificate from a CA trusted by the organizations involved in the Grid. It is the responsibility of the CA to issue and manage certificates. The Registration Authority is the service delegated by the CA to validate the identity of the user and the legitimacy of the certification request at each site. This is a prerequisite for joining any VO. Grid resources are also issued with certificates to allow them to authenticate themselves to users and other services.

Proxy

The user's identity is required to run jobs on remote sites. The user certificate is used to generate and sign a temporary certificate, called a proxy, which is used for the actual authentication to Grid services. A user needs a valid proxy to submit jobs; those jobs carry their own copies of the proxy to be able to authenticate with Grid services as they run. A VOMS proxy is an extension of the proxy which contains additional information about the VO, the groups the user belongs to within the VO, and any roles the user is entitled to have. The proxy has a short lifetime to reduce security risks. For long-running jobs, the job proxy may expire before the job has finished, causing the job to fail. To avoid this, there is a proxy renewal mechanism to keep the job proxy valid for as long as needed. The MyProxy server is a proxy credential repository system which allows the user to create and store a long-term proxy. The WMS is then able to use this long-term proxy to periodically renew the proxy of a submitted job before it expires, until the job ends.

VI. LOGICAL WORKFLOW FOR JOBS

The sequence of steps involved in job submission and processing in the Grid is described below; Figure 3 illustrates the logical job workflow, and a corresponding sample command sequence is sketched after the figure caption.

- The user obtains a certificate from a CA, registers with a VO and gets an account on a UI.
- The user creates a proxy certificate on the UI to authenticate himself in subsequent secure interactions.
- The user submits a job from the UI to the RB. Any local input files specified in the JDL file are initially copied from the UI to the RB.
- The WMS (RB) finds an appropriate CE to execute the job. It consults the BDII to determine the status of computational and storage resources, and the File Catalogue to find the location of any required input files.
- The RB readies the job for submission, creating a wrapper script and the required parameters to pass to the selected CE.
- The CE receives the request and sends the job for execution to the LRMS.
- The LRMS handles the execution of the job on the local Worker Nodes. The input files are copied from the RB to the WNs where the job is executed.
- The running job can access Grid files on an SE using the supported protocols (RFIO/gsidcap).
- After successful completion of the job, the output file(s) are transferred back to the RB node.
- The user retrieves the output of the job to the UI.

Figure 3. Job Workflow in the Computing Grid
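The user-visible part of this workflow, as seen from the UI, might look like the following sketch (gLite 3 command names; the VO name, job identifier and file names are hypothetical placeholders):

    voms-proxy-init --voms daegrid              # create the proxy certificate
    glite-wms-job-submit -a myjob.jdl           # submit the job; prints a job ID
    glite-wms-job-status <jobid>                # poll the LB for the job state
    glite-wms-job-logging-info <jobid>          # full event history from the LB
    glite-wms-job-output --dir ./out <jobid>    # retrieve the job output

The job identifier returned by the submit command is an https URL pointing at the LB service; it is the handle used by all subsequent status and output commands.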
If the chosen site does not accept or run the job, automatic resubmission to another suitable CE takes place. The job is aborted if the number of successive failed resubmissions reaches a maximum limit. All the events during the process of job submission and execution are logged in the LB. The user can query the LB from the UI for the job status; it is likewise possible to query the BDII for the status of the resources.

VII. GRID APPLICATIONS

Application software should conform in its architecture to the overall design of the Grid and should make use of the set of core tools, libraries and services which integrate and inter-operate with the Grid middleware. Application software development has to meet a set of high-level requirements on language, platform and distributed computing for successful deployment on the Grid. This Grid infrastructure enables collaborative development of scientific computing applications across different organizations.

User applications vary from sequential jobs to multi-CPU parallel jobs, and from in-house developed scientific codes to commercial engineering applications involving dense floating-point operations. Some of the specific-purpose applications for which the computing Grid is effectively used are:

- Highly compute-intensive, number-crunching scientific applications in the areas of computational molecular dynamics, high energy physics, material modeling, reactor core calculations and safety analysis, weather forecasting and simulation studies.
- Engineering applications in the areas of finite element analysis, computational fluid dynamics and multiphysics modeling.
- Experimental data processing and analysis.

All computing components in the Grid are based on the Scientific Linux (SLC) operating system. Applications are primarily developed in the C/C++ and FORTRAN languages. The supported compilers, scientific/mathematical libraries, parallelization tools and Grid-related APIs are installed and configured to establish a well-defined development environment for Grid-enabled applications.

VIII. CONCLUSION

The architectural design of the Grid set up for enabling high-performance computing through sharing heterogeneous computational resources spread across different units of an R&D establishment has been detailed. The Grid infrastructure, the middleware and its functionalities have been explained. This paper describes the basic set of Grid services required for the User Interface, Compute Resource Management, Workload Management, Storage & Data Management, Information Management and Security Enforcement in a computing Grid deployment. The organization of services and the interaction between the functional elements are illustrated along with the underlying software framework. Modern research in advanced scientific and engineering areas calls for solving high-end computational problems using distributed resources in a coordinated, uniform way. Grid-enabled high-performance scientific and engineering applications exploit the potential of the computational Grid, resulting in increased productivity, reduced computation time and a better price-to-performance ratio for computing resources. Grid computing is an evolving area of computing, where standards and technology are still being developed. There is scope for enhancing the middleware services with a more intuitive user interface and advanced information and monitoring capabilities, for supporting a wider range of batch systems, and for providing interoperability among different middleware standards.

IX. FUTURE SCOPE

The infrastructure and design of the DAEGrid implementation are adequate for facilitating collaborative scientific research among the constituent units. To improve performance, utilization and reliability, some of the Grid services, such as the WMS, VOMS, Proxy and File Catalogue services, will be deployed or replicated at multiple sites. As the computing and storage requirements of the scientific and engineering community grow explosively, they demand periodic enhancement of the processing power and storage capacity of the computing and storage elements respectively.

REFERENCES

[1] Fran Berman, Geoffrey Fox and Tony Hey, Grid Computing: Making the Global Infrastructure a Reality, Wiley.
[2] Joshy Joseph and Craig Fellenstein, Grid Computing, IBM Press.
[3] Lucio Grandinetti, Grid Computing: The New Frontier of High Performance Computing, Vol. 14, Elsevier.
[4] Mark Baker, Rajkumar Buyya and Domenico Laforenza, "Grids and Grid technologies for wide-area distributed computing", Software: Practice and Experience.
[5] Stephen Burke, Simone Campana, Antonio Delgado Peris, Flavia Donno, Patricia Méndez Lorenzo, Roberto Santinelli and Andrea Sciabà, gLite 3 User Guide, Worldwide LHC Computing Grid.
[6] Introduction to Grid Computing with Globus, IBM Redbook.
[7] Worldwide LHC Computing Grid (WLCG).
[8] EGEE Homepage.
[9] The Globus Alliance.
[10] Condor Project.
[11] Virtual Data Toolkit.


More information

The University of Oxford campus grid, expansion and integrating new partners. Dr. David Wallom Technical Manager

The University of Oxford campus grid, expansion and integrating new partners. Dr. David Wallom Technical Manager The University of Oxford campus grid, expansion and integrating new partners Dr. David Wallom Technical Manager Outline Overview of OxGrid Self designed components Users Resources, adding new local or

More information

International Collaboration to Extend and Advance Grid Education. glite WMS Workload Management System

International Collaboration to Extend and Advance Grid Education. glite WMS Workload Management System International Collaboration to Extend and Advance Grid Education glite WMS Workload Management System Marco Pappalardo Consorzio COMETA & INFN Catania, Italy ITIS Ferraris, Acireale, Tutorial GRID per

More information

Setup Desktop Grids and Bridges. Tutorial. Robert Lovas, MTA SZTAKI

Setup Desktop Grids and Bridges. Tutorial. Robert Lovas, MTA SZTAKI Setup Desktop Grids and Bridges Tutorial Robert Lovas, MTA SZTAKI Outline of the SZDG installation process 1. Installing the base operating system 2. Basic configuration of the operating system 3. Installing

More information

Workload Management. Stefano Lacaprara. CMS Physics Week, FNAL, 12/16 April Department of Physics INFN and University of Padova

Workload Management. Stefano Lacaprara. CMS Physics Week, FNAL, 12/16 April Department of Physics INFN and University of Padova Workload Management Stefano Lacaprara Department of Physics INFN and University of Padova CMS Physics Week, FNAL, 12/16 April 2005 Outline 1 Workload Management: the CMS way General Architecture Present

More information

Grid Data Management

Grid Data Management Grid Data Management Week #4 Hardi Teder hardi@eenet.ee University of Tartu March 6th 2013 Overview Grid Data Management Where the Data comes from? Grid Data Management tools 2/33 Grid foundations 3/33

More information

Heterogeneous Grid Computing: Issues and Early Benchmarks

Heterogeneous Grid Computing: Issues and Early Benchmarks Heterogeneous Grid Computing: Issues and Early Benchmarks Eamonn Kenny 1, Brian Coghlan 1, George Tsouloupas 2, Marios Dikaiakos 2, John Walsh 1, Stephen Childs 1, David O Callaghan 1, and Geoff Quigley

More information

Globus Toolkit Firewall Requirements. Abstract

Globus Toolkit Firewall Requirements. Abstract Globus Toolkit Firewall Requirements v0.3 8/30/2002 Von Welch Software Architect, Globus Project welch@mcs.anl.gov Abstract This document provides requirements and guidance to firewall administrators at

More information

Grids and Security. Ian Neilson Grid Deployment Group CERN. TF-CSIRT London 27 Jan

Grids and Security. Ian Neilson Grid Deployment Group CERN. TF-CSIRT London 27 Jan Grids and Security Ian Neilson Grid Deployment Group CERN TF-CSIRT London 27 Jan 2004-1 TOC Background Grids Grid Projects Some Technical Aspects The three or four A s Some Operational Aspects Security

More information

Chapter 3. Design of Grid Scheduler. 3.1 Introduction

Chapter 3. Design of Grid Scheduler. 3.1 Introduction Chapter 3 Design of Grid Scheduler The scheduler component of the grid is responsible to prepare the job ques for grid resources. The research in design of grid schedulers has given various topologies

More information

A RESOURCE MANAGEMENT FRAMEWORK FOR INTERACTIVE GRIDS

A RESOURCE MANAGEMENT FRAMEWORK FOR INTERACTIVE GRIDS A RESOURCE MANAGEMENT FRAMEWORK FOR INTERACTIVE GRIDS Raj Kumar, Vanish Talwar, Sujoy Basu Hewlett-Packard Labs 1501 Page Mill Road, MS 1181 Palo Alto, CA 94304 USA { raj.kumar,vanish.talwar,sujoy.basu}@hp.com

More information