Parallel Computing in EGI

V. Šipková, M. Dobrucký, and P. Slížik
Ústav informatiky, Slovenská akadémia vied, Dúbravská cesta 9, Bratislava
{Viera.Sipkova, Miroslav.Dobrucky, Peter.Slizik}@savba.sk

Abstract. EGI.eu is a foundation established in February 2010 to create and maintain a pan-European Grid Infrastructure (EGI) so as to guarantee the long-term availability of a generic e-infrastructure for all European research communities and their international collaborators. Its work builds on previous EU-funded grid projects: LHC, DataGrid, EGEE, among others. EGI does not develop the software deployed in the grid infrastructure; all upgrades and new programs are produced by external technology providers. Concerning the compute area, the major highlights of EMI 1, the first release of the EGI middleware, are improvements in the CREAM service, extensions of the JDL language, and support for user-defined fine-grained mapping of processes to physical resources. This makes it possible for grid applications to employ various parallel programming models. This work presents an overview of the current job management facilities provided by the EGI middleware components.

1 Introduction

EGI.eu is a foundation established in February 2010 to create and maintain a pan-European Grid Infrastructure (EGI) [1], in collaboration with National Grid Initiatives (NGIs) and European International Research Organizations (EIROs), so as to guarantee the long-term development, availability, and sustainability of grid services and e-infrastructure for all European research communities and their international partners. Its work builds on previous EU-funded projects which developed this goal from the initial concept of a scalable, federated, distributed computing system. The distributed computing grid was originally conceived in 1999 to analyze the experimental data produced by the Large Hadron Collider (LHC) particle accelerator at CERN (the European Organization for Nuclear Research).
The research and development of grid technologies started in January 2001 within the European Data Grid [2] project, which demonstrated the successful application of the grid in research fields such as high-energy physics, Earth observation, and bioinformatics. Upon its completion in 2004, a new project, Enabling Grids for E-sciencE (EGEE) [3], took over the grid's further development. EGEE gave researchers on-demand access to computing resources, from anywhere in the world and at any time of the day. By April 2010, when the last project phase

was finished, there were about 13 million jobs per month running on the EGEE infrastructure, hosted by a network of 300 computer centers worldwide. A special role in EGI is played by the four-year project Integrated Sustainable Pan-European Infrastructure for Researchers in Europe (EGI-InSPIRE) [4], whose mission is to coordinate the transition from EGEE to EGI and to provide support for current and emerging user communities. At its start in May 2010, EGI-InSPIRE gathered 51 beneficiaries, 141 partners, and 39 National Grid Initiatives within Europe, from 47 countries around the world. EGI-InSPIRE also supports the integration of new distributed computing infrastructures such as clouds, supercomputing networks, and desktop grids. EGI itself does not develop the software deployed in the grid infrastructure; all upgrades and new programs are produced by independent technology providers. Up to now, EGI.eu has formalized and contracted agreements with the following external partners:

European Middleware Initiative (EMI) [5] - the primary goal of EMI is to deliver a consolidated set of grid middleware components (as part of the Unified Middleware Distribution, UMD), to extend interoperability and integration with emerging computing models, and to strengthen the reliability, stability, and manageability of the middleware services.

Initiative for Globus in Europe (IGE) [6] - the IGE project aims to be a comprehensive service provider for the European e-infrastructure regarding the development, customization, provisioning, support, and maintenance of components of the Globus grid middleware.

Simple API for Grid Applications (SAGA) [7] - SAGA is a programming abstraction that offers the basic functionality required to build distributed applications, tools, and frameworks, so as to be independent of the details of the underlying middleware systems and infrastructure.
StratusLab [8] - the StratusLab project was set up to develop a complete, open-source cloud distribution that allows grid and non-grid resource centers to offer and to exploit an Infrastructure-as-a-Service cloud. It focuses on enhancing distributed computing infrastructures such as EGI with virtualisation and cloud technologies.

The grid world has a lot of specialized jargon; the acronyms frequently used in this paper are explained below:

CE       Computing Element
CREAM    Computing Resource Execution And Management
JDL      Job Description Language
MPI      Message Passing Interface
OpenMP   Open Multi-Processing
SE       Storage Element
VOMS     Virtual Organization Membership Service
WN       Worker Node
WMProxy  Workload Manager Proxy
WMS      Workload Management System

2 EGI Middleware

A grid middleware is a specific software product, placed between the infrastructure and user applications, which enables the sharing of heterogeneous grid resources. Resources include commodity or HPC clusters, disk storage, various instruments, data archives or digital libraries, and software packages. For the first period of its existence, EGI operated using the grid middleware glite 3.2 [9]. During the six years of EGEE, glite components were progressively improved and made increasingly robust and efficient to satisfy the requirements of a large variety of research communities. In May 2011 EMI delivered the first release of its grid middleware, EMI 1 Kebnekaise [10], which features the first complete and consolidated set of components from glite [9], ARC [11], dcache [12], and UNICORE [13]. The reference platform for EMI 1 is Scientific Linux 5 (64-bit). EMI 1 focuses primarily on laying the foundations for the distribution, increasing the level of integration among the original middleware stacks, improving compatibility with mainstream operating system guidelines, and extending compliance with existing standards. EMI 1 has introduced a number of changes and new functionalities in the areas of security, computing, data, and infrastructure.

Security - includes middleware services and components that enable and enforce the grid security model, allowing the safe sharing of resources on a large scale. The major highlights comprise: the replacement of GSI with SSL in security components, most notably VOMS; the REST-based interface for obtaining X.509 attribute certificates in the VOMS Admin service; and the initial integration of ARGUS authorization with middleware services.

Computing - includes middleware services and corresponding client components involved in the processing and management of user requests concerning the execution of a computational task.
The major highlights comprise: full support for the CLUSTER service in CREAM; initial support for GLUE 2 in all CEs; the integration of ARGUS authorization in CREAM; and the initial implementation of common MPI methods across the different compute services. For MPI this includes support for user-defined fine-grained mapping of processes to physical resources, basic support for OpenMP, new command-line options for MPI-Start, and support for the SLURM and Condor schedulers.

Data - includes middleware services and corresponding client components involved in the processing and management of user requests concerning storage management, data access, and data replication. The major highlights comprise: the adoption of the pNFS 4.1 and WebDAV standards in dcache; and the preview version of a messaging-based SE/File Catalog synchronization service (SEMsg).

Infrastructure - includes middleware services and components that offer common information and management functionality to deployed grid services. They involve the Information System and Service Registry, the grid

messaging infrastructure, service monitoring and management, the Logging and Bookkeeping service, and the accounting functionality. The major highlights are the experimental adoption of the GLUE 2 information model standard in the ARC CE, the UNICORE Web services, the CREAM service, dcache, and the Disk Pool Manager (DPM).

The following sections address the glite components of the EGI middleware.

3 Job Management Services

Job management services in the grid are concerned with the acceptance, scheduling, monitoring, and management of remote computations, called jobs. A job consists of a computation and, optionally, file transfer and management operations related to the computation. In grid terminology, a Computing Element (CE) represents a set of computational resources located at a site (i.e., a cluster). A CE consists of:

a Grid Gate (GG), acting as a generic interface to the cluster;
a Local Resource Management System (LRMS), also called the batch system;
the cluster itself: a collection of Worker Nodes (WNs) where the jobs run.

The central role of the GG is to accept jobs and dispatch them for execution on WNs via the LRMS. The GG implementation in glite 3.2 is the CREAM-based CE [14], a lightweight service responsible for performing all job management operations. CREAM accepts job submissions and other requests either through the Workload Management System (WMS) [15] or through a generic client (e.g., an end user willing to submit jobs directly to a CREAM CE). The WMS is a glite software service that distributes and manages tasks across the computing and storage resources available in the grid. As a global grid resource broker, the WMS forms a reliable and efficient entry point to high-end services on the grid, presenting a common front end for user job submissions. The main service providing access to the WMS is the WMProxy.
Both services, CREAM and WMProxy, implement similar functionality and expose a web-service interface with which the user can interact by means of a command-line interface. The client commands make it possible to perform the following operations:

delegation of proxy credentials, to speed up subsequent operations;
renewal of delegations (i.e., proxies of submitted jobs);
match-making: displaying the list of CEs which match the JDL requirements;
submission of jobs for execution;
monitoring of the status of submitted jobs;
cancellation of jobs at any point in their life cycle;
suspension/resumption of running jobs;
retrieval of the job execution log and of the output of finished jobs;
getting information about jobs, the WMS/CREAM services, and CEs.

The following tables summarize the most relevant client commands of the WMProxy and CREAM services.
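To show how these operations chain together in practice, the following sketch walks through a typical WMProxy session; the commands and flags follow the standard glite WMS command-line interface, while the delegation identifier, JDL file name, and output file are purely illustrative:

```shell
# Delegate proxy credentials once, under an arbitrary identifier,
# so that subsequent operations can reuse the delegation.
glite-wms-job-delegate-proxy -d mydeleg

# Match-making: list the CEs whose resources satisfy the
# Requirements expressed in the JDL file.
glite-wms-job-list-match -d mydeleg mytest.jdl

# Submit the job; the returned job identifier is stored in jobid.txt.
glite-wms-job-submit -d mydeleg -o jobid.txt mytest.jdl

# Poll the job status; once the job is Done, retrieve the
# OutputSandbox files declared in the JDL.
glite-wms-job-status -i jobid.txt
glite-wms-job-output -i jobid.txt
```

Note that these commands presuppose a valid VOMS proxy (obtained, e.g., with voms-proxy-init) and a configured grid User Interface machine; the sketch cannot run outside such an environment.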

WMProxy Client Commands:
  glite-wms-delegate-proxy
  glite-wms-job-submit
  glite-wms-job-status
  glite-wms-job-cancel
  glite-wms-job-output
  glite-wms-job-perusal
  glite-wms-job-logging-info
  glite-wms-job-list-match
  glite-wms-job-info

CREAM Client Commands:
  glite-ce-delegate-proxy
  glite-ce-job-submit
  glite-ce-job-status
  glite-ce-job-cancel
  glite-ce-job-output
  glite-ce-job-suspend
  glite-ce-job-resume
  glite-ce-job-purge
  glite-ce-proxy-renew
  glite-ce-job-list
  glite-ce-service-info

4 Description of Grid Jobs

Jobs to be submitted to the grid must be described using the Job Description Language (JDL) [16]. JDL is a flexible, high-level language based on Condor Classified Advertisements which makes it possible to describe jobs and aggregates of jobs with arbitrary dependency relations, and to express requirements and constraints on the CE, WNs, SE, and installed software. In general, JDL attributes hold request-specific information which designates the actions to be performed. As JDL is an extensible language, the user may use any attribute in the description of a job request; however, only a certain subset of attributes is actually processed. Some attributes are mandatory: without them the service cannot handle the request. A typical JDL file for a simple job looks as follows:

  Type = "Job";
  JobType = "Normal";
  Executable = "start-mytest.sh";
  Arguments = "mytest.exe myinput.dat myoutput.dat";
  StdOutput = "stdout.txt";
  StdError = "stderr.txt";
  InputSandbox = {"start-mytest.sh", "mytest.exe", "myinput.dat"};
  OutputSandbox = {"stdout.txt", "stderr.txt", "myoutput.dat"};

The Type attribute represents the type of the request, and JobType defines the type of the job. The Executable attribute specifies the name of the executable/command to be carried out, and Arguments specifies its input arguments. StdOutput and StdError assign names to the standard streams (output and error) of the job.
The shell script start-mytest.sh is supposed to invoke the actual executable mytest.exe with the arguments myinput.dat and myoutput.dat. Files which should be transferred between the client User Interface machine (or an external SE) and the compute resource before/after the job execution must be listed in the InputSandbox/OutputSandbox attributes.

Comparing the sets of JDL attributes supported by the WMS [17] and the CREAM service [18], there is a rather big difference between them. For instance, unlike CREAM, which at the moment supports only simple job requests, the WMS can also handle compound job structures:

  Type = "Job";        # a simple job
  Type = "DAG";        # a DAG of dependent jobs
  Type = "Collection"; # a set of independent jobs

A Directed Acyclic Graph (DAG) is defined as a set of jobs in which the input/output/execution of one or more jobs may depend on one or more other jobs. A Collection is defined as multiple independent jobs with a common description. The WMS currently supports two job types:

  JobType = "Normal";     # a simple batch job
  JobType = "Parametric"; # a job with parametric attributes

where a Parametric job represents multiple jobs with a single parameterized description. CREAM allows, for the time being, only the Normal job type. A special class is constituted by parallel MPI and multi-threaded OpenMP tasks; these fall under the Normal type, but they need special handling.

4.1 MPI Jobs

MPI (Message Passing Interface) [19] has become the standard for programming distributed-memory systems. In the case of an MPI job, the JDL file must contain the attribute CpuNumber (an integer greater than 1), which defines the number of CPUs to be allocated. The Requirements attribute may be used to force the selection of sites which have the necessary hardware configuration and MPI software support. The MPI application itself needs to be initialized by means of an executable script, for instance, through the invocation of MPI-Start [20].
MPI-Start is an abstraction layer that offers the grid middleware a unique interface for starting MPI programs under various execution-environment implementations; it has become a fixed part of the EMI 1 middleware. Using plugins, it supports different combinations of execution environments (Open MPI, MPICH, MPICH2, LAM-MPI, PACX-MPI) and batch schedulers (PBS/Torque, SGE, LSF). Besides the capability to start jobs, MPI-Start provides a hooks framework which makes it possible to customize MPI-Start's behavior, to distribute files on sites without a shared file system, and to let user applications perform pre-processing (e.g., program compilation, data fetching) and/or post-processing (e.g., storage of application results, clean-up) actions. MPI-Start can be controlled via environment variables or command-line switches. The following sample shows a JDL example for an MPI job:

  Type = "Job";
  JobType = "Normal";
  CpuNumber = 8;
  Executable = "start-mytest_mpi.sh";
  Arguments = "8 mytest_mpi.exe";
  ...
  Requirements = other.GlueHostArchitecturePlatformType == "x86_64" &&
                 other.GlueCEInfoTotalCPUs >= 8 &&
                 Member("MPI-START", other.GlueHostApplicationSoftwareRunTimeEnvironment) &&
                 Member("OPENMPI", other.GlueHostApplicationSoftwareRunTimeEnvironment);

The Requirements attribute is used during the matchmaking process to select those CEs that have a 64-bit architecture with at least 8 CPUs, and that have MPI-Start and Open MPI software installed.

4.2 OpenMP Jobs

OpenMP (Open Multi-Processing) [21] is an application programming interface that provides a parallel programming model for shared-memory architectures, ranging from standard desktop computers to supercomputers. For the time being, the submission of parallel OpenMP jobs is possible only via the CREAM clients. As CREAM has become an integrated part of the EMI execution service, the CREAM JDL is continuously being extended in order to better address new requirements and scenarios. For example, a 4-threaded OpenMP job may be described in the following way:

  Type = "Job";
  JobType = "Normal";
  HostNumber = 1;
  WholeNodes = True;
  SMPGranularity = 4;
  Executable = "mytest_omp.exe";
  Arguments = "4";
  Environment = {"OMP_NUM_THREADS=4"};
  ...
  Requirements = other.GlueHostArchitectureSMPSize >= SMPGranularity;

The HostNumber attribute defines the number of nodes the user wishes to obtain for the job. SMPGranularity specifies the number of cores that every host involved in the allocation has to dedicate to the job; it corresponds to the number of slave threads that will be forked off by the master thread. WholeNodes is a boolean attribute which indicates whether whole nodes should be allocated exclusively or not.
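Returning to the MPI example in Section 4.1: the wrapper script start-mytest_mpi.sh named there as Executable is not shown in the paper. A minimal sketch built on the MPI-Start environment-variable interface could look as follows; the I2G_* variables belong to the MPI-Start conventions, the argument layout matches the JDL Arguments attribute, and the input/output file names are assumptions:

```shell
#!/bin/bash
# start-mytest_mpi.sh -- hypothetical MPI-Start wrapper (sketch).
# The JDL passes "8 mytest_mpi.exe", so $1 is the CPU count
# (informational here; the scheduler has already allocated the slots)
# and $2 is the MPI binary shipped in the InputSandbox.
NP=$1
APP=$2

# MPI-Start interface: which MPI flavour to use and what to run.
export I2G_MPI_TYPE=openmpi
export I2G_MPI_APPLICATION=./$APP
export I2G_MPI_APPLICATION_ARGS="myinput.dat myoutput.dat"

# Optional hooks for pre-/post-processing (compilation, data staging,
# result storage, clean-up), as described in the text above:
# export I2G_MPI_PRE_RUN_HOOK=./pre-hook.sh
# export I2G_MPI_POST_RUN_HOOK=./post-hook.sh

# Hand control over to MPI-Start, which builds and runs the
# appropriate mpirun/mpiexec invocation for the local site.
$I2G_MPI_START
```

The wrapper itself stays scheduler-agnostic: MPI-Start inspects the local batch environment on the WN and constructs the machine file and launch command, which is precisely why the JDL only has to require the MPI-START and OPENMPI runtime tags.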

5 Conclusion

Current computing systems allow many parallel solutions to be applied at the same time to achieve maximum performance and efficiency gains. The EGEE project concentrated first of all on providing a computing infrastructure and on running distributed applications; the support for parallel applications was rather poor. In July 2011, EMI 1 Kebnekaise was released by EGI as the first version of UMD 1.0.0, which delivers a significant number of new features. The support for new CREAM JDL attributes and the fine-grained mapping of MPI processes and OpenMP threads to physical resources extends the variety of applications which may be grid-enabled, and improves resource utilization as well.

Acknowledgements. This work was partially supported by the following projects: EGI-InSPIRE RI, VEGA No. 2/0211/09, and ASFEU OPVV CRISIS ITMS.

References

1. EGI - European Grid Infrastructure
2. EDG - European Data Grid
3. EGEE - Enabling Grids for E-sciencE
4. EGI-InSPIRE - Integrated Sustainable Pan-European Infrastructure for Researchers in Europe
5. EMI - European Middleware Initiative
6. IGE - Initiative for Globus in Europe
7. SAGA - Simple API for Grid Applications
8. StratusLab project
9. S. Burke, et al.: glite 3.2 User Guide
10. EMI 1 Kebnekaise
11. ARC - Advanced Resource Connector
12. dcache/SRM - Storage Middleware System
13. UNICORE - Uniform Interface to Computing Resources
14. EGEE User's Guide - CREAM Service
15. EMI User's Guide - WMS Command Line Interface
16. Fabrizio Pacini: Job Description Language HowTo
17. Fabrizio Pacini: Job Description Language Attributes Specification (for the glite WMS)
18. CREAM Job Description Language Attributes Specification
19. MPI - Message Passing Interface Standard
20. MPI-Start and MPI-Utils, EMI Document 1.0.1
21. OpenMP - Open Multi-Processing, API Specification for Parallel Programming


More information

FREE SCIENTIFIC COMPUTING

FREE SCIENTIFIC COMPUTING Institute of Physics, Belgrade Scientific Computing Laboratory FREE SCIENTIFIC COMPUTING GRID COMPUTING Branimir Acković March 4, 2007 Petnica Science Center Overview 1/2 escience Brief History of UNIX

More information

Grid Computing. MCSN - N. Tonellotto - Distributed Enabling Platforms

Grid Computing. MCSN - N. Tonellotto - Distributed Enabling Platforms Grid Computing 1 Resource sharing Elements of Grid Computing - Computers, data, storage, sensors, networks, - Sharing always conditional: issues of trust, policy, negotiation, payment, Coordinated problem

More information

Pan-European Grid einfrastructure for LHC Experiments at CERN - SCL's Activities in EGEE

Pan-European Grid einfrastructure for LHC Experiments at CERN - SCL's Activities in EGEE Pan-European Grid einfrastructure for LHC Experiments at CERN - SCL's Activities in EGEE Aleksandar Belić Scientific Computing Laboratory Institute of Physics EGEE Introduction EGEE = Enabling Grids for

More information

Multi-thread and Mpi usage in GRID Roberto Alfieri - Parma University & INFN, Gr.Coll. di Parma

Multi-thread and Mpi usage in GRID Roberto Alfieri - Parma University & INFN, Gr.Coll. di Parma SuperB Computing R&D Workshop Multi-thread and Mpi usage in GRID Roberto Alfieri - Parma University & INFN, Gr.Coll. di Parma Ferrara, Thursday, March 11, 2010 1 Outline MPI and multi-thread support in

More information

EMI Deployment Planning. C. Aiftimiei D. Dongiovanni INFN

EMI Deployment Planning. C. Aiftimiei D. Dongiovanni INFN EMI Deployment Planning C. Aiftimiei D. Dongiovanni INFN Outline Migrating to EMI: WHY What's new: EMI Overview Products, Platforms, Repos, Dependencies, Support / Release Cycle Migrating to EMI: HOW Admin

More information

Future of Grid parallel exploitation

Future of Grid parallel exploitation Future of Grid parallel exploitation Roberto Alfieri - arma University & INFN Italy SuperbB Computing R&D Workshop - Ferrara 6/07/2011 1 Outline MI support in the current grid middleware (glite) MI and

More information

GRIDS INTRODUCTION TO GRID INFRASTRUCTURES. Fabrizio Gagliardi

GRIDS INTRODUCTION TO GRID INFRASTRUCTURES. Fabrizio Gagliardi GRIDS INTRODUCTION TO GRID INFRASTRUCTURES Fabrizio Gagliardi Dr. Fabrizio Gagliardi is the leader of the EU DataGrid project and designated director of the proposed EGEE (Enabling Grids for E-science

More information

Introduction to Grid Computing

Introduction to Grid Computing Milestone 2 Include the names of the papers You only have a page be selective about what you include Be specific; summarize the authors contributions, not just what the paper is about. You might be able

More information

The LHC Computing Grid

The LHC Computing Grid The LHC Computing Grid Gergely Debreczeni (CERN IT/Grid Deployment Group) The data factory of LHC 40 million collisions in each second After on-line triggers and selections, only 100 3-4 MB/event requires

More information

EMI Data, the unified European Data Management Middleware

EMI Data, the unified European Data Management Middleware EMI Data, the unified European Data Management Middleware Patrick Fuhrmann (DESY) EMI Data Area lead (on behalf of many people and slides stolen from all over the place) Credits Alejandro Alvarez Alex

More information

DIRAC pilot framework and the DIRAC Workload Management System

DIRAC pilot framework and the DIRAC Workload Management System Journal of Physics: Conference Series DIRAC pilot framework and the DIRAC Workload Management System To cite this article: Adrian Casajus et al 2010 J. Phys.: Conf. Ser. 219 062049 View the article online

More information

The Role and Functions of European Grid Infrastructure

The Role and Functions of European Grid Infrastructure The Role and Functions of European Grid Infrastructure Luděk Matyska Masaryk University and CESNET Czech Republic (Ludek.Matyska@cesnet.cz) EGI_DS Project Director What is a Grid? A distributed system

More information

Interconnect EGEE and CNGRID e-infrastructures

Interconnect EGEE and CNGRID e-infrastructures Interconnect EGEE and CNGRID e-infrastructures Giuseppe Andronico Interoperability and Interoperation between Europe, India and Asia Workshop Barcelona - Spain, June 2 2007 FP6 2004 Infrastructures 6-SSA-026634

More information

Grids and Security. Ian Neilson Grid Deployment Group CERN. TF-CSIRT London 27 Jan

Grids and Security. Ian Neilson Grid Deployment Group CERN. TF-CSIRT London 27 Jan Grids and Security Ian Neilson Grid Deployment Group CERN TF-CSIRT London 27 Jan 2004-1 TOC Background Grids Grid Projects Some Technical Aspects The three or four A s Some Operational Aspects Security

More information

Towards sustainability: An interoperability outline for a Regional ARC based infrastructure in the WLCG and EGEE infrastructures

Towards sustainability: An interoperability outline for a Regional ARC based infrastructure in the WLCG and EGEE infrastructures Journal of Physics: Conference Series Towards sustainability: An interoperability outline for a Regional ARC based infrastructure in the WLCG and EGEE infrastructures To cite this article: L Field et al

More information

where the Web was born Experience of Adding New Architectures to the LCG Production Environment

where the Web was born Experience of Adding New Architectures to the LCG Production Environment where the Web was born Experience of Adding New Architectures to the LCG Production Environment Andreas Unterkircher, openlab fellow Sverre Jarp, CTO CERN openlab Industrializing the Grid openlab Workshop

More information

Chapter 2 Introduction to the WS-PGRADE/gUSE Science Gateway Framework

Chapter 2 Introduction to the WS-PGRADE/gUSE Science Gateway Framework Chapter 2 Introduction to the WS-PGRADE/gUSE Science Gateway Framework Tibor Gottdank Abstract WS-PGRADE/gUSE is a gateway framework that offers a set of highlevel grid and cloud services by which interoperation

More information

SZDG, ecom4com technology, EDGeS-EDGI in large P. Kacsuk MTA SZTAKI

SZDG, ecom4com technology, EDGeS-EDGI in large P. Kacsuk MTA SZTAKI SZDG, ecom4com technology, EDGeS-EDGI in large P. Kacsuk MTA SZTAKI The EDGI/EDGeS projects receive(d) Community research funding 1 Outline of the talk SZTAKI Desktop Grid (SZDG) SZDG technology: ecom4com

More information

Advanced School in High Performance and GRID Computing November Introduction to Grid computing.

Advanced School in High Performance and GRID Computing November Introduction to Grid computing. 1967-14 Advanced School in High Performance and GRID Computing 3-14 November 2008 Introduction to Grid computing. TAFFONI Giuliano Osservatorio Astronomico di Trieste/INAF Via G.B. Tiepolo 11 34131 Trieste

More information

DataGrid. Document identifier: Date: 16/06/2003. Work package: Partner: Document status. Deliverable identifier:

DataGrid. Document identifier: Date: 16/06/2003. Work package: Partner: Document status. Deliverable identifier: DataGrid JDL ATTRIBUTES Document identifier: Work package: Partner: WP1 Datamat SpA Document status Deliverable identifier: Abstract: This note provides the description of JDL attributes supported by the

More information

Heterogeneous Grid Computing: Issues and Early Benchmarks

Heterogeneous Grid Computing: Issues and Early Benchmarks Heterogeneous Grid Computing: Issues and Early Benchmarks Eamonn Kenny 1, Brian Coghlan 1, George Tsouloupas 2, Marios Dikaiakos 2, John Walsh 1, Stephen Childs 1, David O Callaghan 1, and Geoff Quigley

More information

Implementing GRID interoperability

Implementing GRID interoperability AFS & Kerberos Best Practices Workshop University of Michigan, Ann Arbor June 12-16 2006 Implementing GRID interoperability G. Bracco, P. D'Angelo, L. Giammarino*, S.Migliori, A. Quintiliani, C. Scio**,

More information

E UFORIA G RID I NFRASTRUCTURE S TATUS R EPORT

E UFORIA G RID I NFRASTRUCTURE S TATUS R EPORT E UFORIA G RID I NFRASTRUCTURE S TATUS R EPORT DSA1.1 Document Filename: Activity: Partner(s): Lead Partner: Document classification: EUFORIA-DSA1.1-v1.0-CSIC SA1 CSIC, FZK, PSNC, CHALMERS CSIC PUBLIC

More information

QosCosGrid Middleware

QosCosGrid Middleware Domain-oriented services and resources of Polish Infrastructure for Supporting Computational Science in the European Research Space PLGrid Plus QosCosGrid Middleware Domain-oriented services and resources

More information

Troubleshooting Grid authentication from the client side

Troubleshooting Grid authentication from the client side Troubleshooting Grid authentication from the client side By Adriaan van der Zee RP1 presentation 2009-02-04 Contents The Grid @NIKHEF The project Grid components and interactions X.509 certificates, proxies

More information

Deploying virtualisation in a production grid

Deploying virtualisation in a production grid Deploying virtualisation in a production grid Stephen Childs Trinity College Dublin & Grid-Ireland TERENA NRENs and Grids workshop 2 nd September 2008 www.eu-egee.org EGEE and glite are registered trademarks

More information

Argus Vulnerability Assessment *1

Argus Vulnerability Assessment *1 Argus Vulnerability Assessment *1 Manuel Brugnoli and Elisa Heymann Universitat Autònoma de Barcelona June, 2011 Introduction Argus is the glite Authorization Service. It is intended to provide consistent

More information

XRAY Grid TO BE OR NOT TO BE?

XRAY Grid TO BE OR NOT TO BE? XRAY Grid TO BE OR NOT TO BE? 1 I was not always a Grid sceptic! I started off as a grid enthusiast e.g. by insisting that Grid be part of the ESRF Upgrade Program outlined in the Purple Book : In this

More information

ATLAS NorduGrid related activities

ATLAS NorduGrid related activities Outline: NorduGrid Introduction ATLAS software preparation and distribution Interface between NorduGrid and Condor NGlogger graphical interface On behalf of: Ugur Erkarslan, Samir Ferrag, Morten Hanshaugen

More information

First European Globus Community Forum Meeting

First European Globus Community Forum Meeting First European Globus Community Forum Meeting Florian Zrenner (zrenner@lrz.de) Slides from Dr. Helmut Heller (heller@lrz.de) Leibniz Supercomputing Centre (LRZ), Munich, Germany September 7 th, 2011 1

More information

GRID COMPUTING APPLIED TO OFF-LINE AGATA DATA PROCESSING. 2nd EGAN School, December 2012, GSI Darmstadt, Germany

GRID COMPUTING APPLIED TO OFF-LINE AGATA DATA PROCESSING. 2nd EGAN School, December 2012, GSI Darmstadt, Germany GRID COMPUTING APPLIED TO OFF-LINE AGATA DATA PROCESSING M. KACI mohammed.kaci@ific.uv.es 2nd EGAN School, 03-07 December 2012, GSI Darmstadt, Germany GRID COMPUTING TECHNOLOGY THE EUROPEAN GRID: HISTORY

More information

INDIGO AAI An overview and status update!

INDIGO AAI An overview and status update! RIA-653549 INDIGO DataCloud INDIGO AAI An overview and status update! Andrea Ceccanti (INFN) on behalf of the INDIGO AAI Task Force! indigo-aai-tf@lists.indigo-datacloud.org INDIGO Datacloud An H2020 project

More information

Operating the Distributed NDGF Tier-1

Operating the Distributed NDGF Tier-1 Operating the Distributed NDGF Tier-1 Michael Grønager Technical Coordinator, NDGF International Symposium on Grid Computing 08 Taipei, April 10th 2008 Talk Outline What is NDGF? Why a distributed Tier-1?

More information

PoS(ACAT2010)039. First sights on a non-grid end-user analysis model on Grid Infrastructure. Roberto Santinelli. Fabrizio Furano.

PoS(ACAT2010)039. First sights on a non-grid end-user analysis model on Grid Infrastructure. Roberto Santinelli. Fabrizio Furano. First sights on a non-grid end-user analysis model on Grid Infrastructure Roberto Santinelli CERN E-mail: roberto.santinelli@cern.ch Fabrizio Furano CERN E-mail: fabrzio.furano@cern.ch Andrew Maier CERN

More information

A Practical Approach for a Workflow Management System

A Practical Approach for a Workflow Management System A Practical Approach for a Workflow Management System Simone Pellegrini, Francesco Giacomini, Antonia Ghiselli INFN Cnaf Viale B. Pichat, 6/2 40127 Bologna {simone.pellegrini francesco.giacomini antonia.ghiselli}@cnaf.infn.it

More information

StratusLab Cloud Distribution Installation. Charles Loomis (CNRS/LAL) 3 July 2014

StratusLab Cloud Distribution Installation. Charles Loomis (CNRS/LAL) 3 July 2014 StratusLab Cloud Distribution Installation Charles Loomis (CNRS/LAL) 3 July 2014 StratusLab What is it? Complete IaaS cloud distribution Open source (Apache 2 license) Works well for production private

More information

Travelling securely on the Grid to the origin of the Universe

Travelling securely on the Grid to the origin of the Universe 1 Travelling securely on the Grid to the origin of the Universe F-Secure SPECIES 2007 conference Wolfgang von Rüden 1 Head, IT Department, CERN, Geneva 24 January 2007 2 CERN stands for over 50 years of

More information

Access the power of Grid with Eclipse

Access the power of Grid with Eclipse Access the power of Grid with Eclipse Harald Kornmayer (Forschungszentrum Karlsruhe GmbH) Markus Knauer (Innoopract GmbH) October 11th, 2006, Eclipse Summit, Esslingen 2006 by H. Kornmayer, M. Knauer;

More information

ISTITUTO NAZIONALE DI FISICA NUCLEARE

ISTITUTO NAZIONALE DI FISICA NUCLEARE ISTITUTO NAZIONALE DI FISICA NUCLEARE Sezione di Perugia INFN/TC-05/10 July 4, 2005 DESIGN, IMPLEMENTATION AND CONFIGURATION OF A GRID SITE WITH A PRIVATE NETWORK ARCHITECTURE Leonello Servoli 1,2!, Mirko

More information

Roberto Alfieri. Parma University & INFN Italy

Roberto Alfieri. Parma University & INFN Italy EGI-InSPIRE TheoMpi: a large MPI cluster on the grid for Theoretical Physics Roberto Alfieri Parma University & INFN Italy Co-authors: S. Arezzini, A. Ciampa, E. Mazzoni (INFN-PI), A. Gianelle, M. Sgaravatto

More information

The glite middleware. Presented by John White EGEE-II JRA1 Dep. Manager On behalf of JRA1 Enabling Grids for E-sciencE

The glite middleware. Presented by John White EGEE-II JRA1 Dep. Manager On behalf of JRA1 Enabling Grids for E-sciencE The glite middleware Presented by John White EGEE-II JRA1 Dep. Manager On behalf of JRA1 John.White@cern.ch www.eu-egee.org EGEE and glite are registered trademarks Outline glite distributions Software

More information

AliEn Resource Brokers

AliEn Resource Brokers AliEn Resource Brokers Pablo Saiz University of the West of England, Frenchay Campus Coldharbour Lane, Bristol BS16 1QY, U.K. Predrag Buncic Institut für Kernphysik, August-Euler-Strasse 6, 60486 Frankfurt

More information

DESY. Andreas Gellrich DESY DESY,

DESY. Andreas Gellrich DESY DESY, Grid @ DESY Andreas Gellrich DESY DESY, Legacy Trivially, computing requirements must always be related to the technical abilities at a certain time Until not long ago: (at least in HEP ) Computing was

More information

Integration of Cloud and Grid Middleware at DGRZR

Integration of Cloud and Grid Middleware at DGRZR D- of International Symposium on Computing 2010 Stefan Freitag Robotics Research Institute Dortmund University of Technology March 12, 2010 Overview D- 1 D- Resource Center Ruhr 2 Clouds in the German

More information

Cluster Nazionale CSN4

Cluster Nazionale CSN4 CCR Workshop - Stato e Prospettive del Calcolo Scientifico Cluster Nazionale CSN4 Parallel computing and scientific activity Roberto Alfieri - Parma University & INFN, Gr.Coll. di Parma LNL - 16/02/2011

More information

Virtualization. A very short summary by Owen Synge

Virtualization. A very short summary by Owen Synge Virtualization A very short summary by Owen Synge Outline What is Virtulization? What's virtulization good for? What's virtualisation bad for? We had a workshop. What was presented? What did we do with

More information

THEBES: THE GRID MIDDLEWARE PROJECT Project Overview, Status Report and Roadmap

THEBES: THE GRID MIDDLEWARE PROJECT Project Overview, Status Report and Roadmap THEBES: THE GRID MIDDLEWARE PROJECT Project Overview, Status Report and Roadmap Arnie Miles Georgetown University adm35@georgetown.edu http://thebes.arc.georgetown.edu The Thebes middleware project was

More information

( PROPOSAL ) THE AGATA GRID COMPUTING MODEL FOR DATA MANAGEMENT AND DATA PROCESSING. version 0.6. July 2010 Revised January 2011

( PROPOSAL ) THE AGATA GRID COMPUTING MODEL FOR DATA MANAGEMENT AND DATA PROCESSING. version 0.6. July 2010 Revised January 2011 ( PROPOSAL ) THE AGATA GRID COMPUTING MODEL FOR DATA MANAGEMENT AND DATA PROCESSING version 0.6 July 2010 Revised January 2011 Mohammed Kaci 1 and Victor Méndez 1 For the AGATA collaboration 1 IFIC Grid

More information

How to build Scientific Gateways with Vine Toolkit and Liferay/GridSphere framework

How to build Scientific Gateways with Vine Toolkit and Liferay/GridSphere framework How to build Scientific Gateways with Vine Toolkit and Liferay/GridSphere framework Piotr Dziubecki, Piotr Grabowski, Michał Krysiński, Tomasz Kuczyński, Dawid Szejnfeld, Dominik Tarnawczyk, Gosia Wolniewicz

More information

Monitoring the Usage of the ZEUS Analysis Grid

Monitoring the Usage of the ZEUS Analysis Grid Monitoring the Usage of the ZEUS Analysis Grid Stefanos Leontsinis September 9, 2006 Summer Student Programme 2006 DESY Hamburg Supervisor Dr. Hartmut Stadie National Technical

More information

Multiple Broker Support by Grid Portals* Extended Abstract

Multiple Broker Support by Grid Portals* Extended Abstract 1. Introduction Multiple Broker Support by Grid Portals* Extended Abstract Attila Kertesz 1,3, Zoltan Farkas 1,4, Peter Kacsuk 1,4, Tamas Kiss 2,4 1 MTA SZTAKI Computer and Automation Research Institute

More information

Bob Jones. EGEE and glite are registered trademarks. egee EGEE-III INFSO-RI

Bob Jones.  EGEE and glite are registered trademarks. egee EGEE-III INFSO-RI Bob Jones EGEE project director www.eu-egee.org egee EGEE-III INFSO-RI-222667 EGEE and glite are registered trademarks Quality: Enabling Grids for E-sciencE Monitoring via Nagios - distributed via official

More information

WMS overview and Proposal for Job Status

WMS overview and Proposal for Job Status WMS overview and Proposal for Job Status Author: V.Garonne, I.Stokes-Rees, A. Tsaregorodtsev. Centre de physiques des Particules de Marseille Date: 15/12/2003 Abstract In this paper, we describe briefly

More information

PoS(ACAT2010)029. Tools to use heterogeneous Grid schedulers and storage system. Mattia Cinquilli. Giuseppe Codispoti

PoS(ACAT2010)029. Tools to use heterogeneous Grid schedulers and storage system. Mattia Cinquilli. Giuseppe Codispoti Tools to use heterogeneous Grid schedulers and storage system INFN and Università di Perugia E-mail: mattia.cinquilli@pg.infn.it Giuseppe Codispoti INFN and Università di Bologna E-mail: giuseppe.codispoti@bo.infn.it

More information

High Performance Computing from an EU perspective

High Performance Computing from an EU perspective High Performance Computing from an EU perspective DEISA PRACE Symposium 2010 Barcelona, 10 May 2010 Kostas Glinos European Commission - DG INFSO Head of Unit GÉANT & e-infrastructures 1 "The views expressed

More information

Grid Programming: Concepts and Challenges. Michael Rokitka CSE510B 10/2007

Grid Programming: Concepts and Challenges. Michael Rokitka CSE510B 10/2007 Grid Programming: Concepts and Challenges Michael Rokitka SUNY@Buffalo CSE510B 10/2007 Issues Due to Heterogeneous Hardware level Environment Different architectures, chipsets, execution speeds Software

More information