The NorduGrid production Grid infrastructure, status and plans


P. Eerola, B. Kónya, O. Smirnova
Department of High Energy Physics, Lund University, Box 118, Lund, Sweden

T. Ekelöf, M. Ellert
Department of Radiation Sciences, Uppsala University, Box 535, Uppsala, Sweden

J.R. Hansen, J.L. Nielsen, A. Wäänänen
Niels Bohr Institutet for Astronomi, Fysik og Geofysik, Blegdamsvej 17, DK-2100 Copenhagen Ø, Denmark

A. Konstantinov
Institute of Material Science and Applied Research, Vilnius University, Saulėtekio al. 9, Vilnius 2040, Lithuania

T. Myklebust, F. Ould-Saada
Department of Physics, University of Oslo, P.O. Box 1048, Blindern, 0316 Oslo, Norway

J. Herrala, M. Tuisku
Helsinki Institute of Physics, University of Helsinki, P.O. Box 33, FIN Helsinki, Finland

B. Vinter
Dept. of Mathematics & Computer Science, Odense University, Campusvej 55, DK-5230 Odense M, Denmark

Abstract

NorduGrid offers reliable Grid services for academic users over an increasing set of computing and storage resources spanning the Nordic countries Denmark, Finland, Norway and Sweden. A small group of scientists has already been using the NorduGrid as their daily computing utility. In the near future we expect rapid growth both in the number of active users and in the available resources, thanks to the recently launched Nordic Grid projects. In this paper we report on the present status and short-term plans of the Nordic Grid infrastructure and describe the available and foreseen resources, the Grid services and our forming user base.

1. Introduction

Grids are emerging as promising new infrastructures for solving large-scale scientific challenges [1]. Many Grid projects have recently been launched to develop, evaluate and deploy Grid technologies. Despite the vast number of projects, very few Grid TestBeds have succeeded in surpassing the demonstration-quality stage and providing reliable Grid services to their users on a daily basis. In order to create a Grid infrastructure which can help face the computing and data challenges of the Nordic scientific communities, the Nordic TestBed for Wide Area Computing and Data Handling, also known as NorduGrid [2], was launched in Scandinavia and Finland (see Figure 1). The early TestBed, set up in August 2001, comprised five core development sites in Bergen, Copenhagen, Lund, Oslo and Uppsala. It was based on the then very much beta-quality version 2 of the Globus Toolkit [3]. The evaluation of the functionality and reliability of the Grid services provided by the Globus Toolkit and the EU DataGrid middleware [4], carried out during the autumn of 2001, showed that these toolkits alone could not be used to build a production Grid. Therefore the NorduGrid developers came up with their own architecture and implementation proposal [5, 6]. In May 2002 the first version of the NorduGrid middleware [7] was rolled out on the TestBed. The toolkit was soon evaluated and deployed in some of the largest Nordic computing centers. Section 2 gives a more detailed account of the NorduGrid resources, while Section 3 introduces the provided Grid services. It was the ATLAS High Energy Physics group which first started to use NorduGrid for its production data processing, and since July 2002 NorduGrid has hosted the production ATLAS runs. As of this writing, NorduGrid provides a unique international production Grid infrastructure to its growing user base.

The reliable Grid services implemented by the NorduGrid middleware are provided permanently: the TestBed is operational 24 hours a day, 7 days a week. The achieved scientific results and the Nordic share of the ATLAS Data Challenges (Section 4) have proven that NorduGrid has overcome the demonstration-quality stage and is now a production Grid infrastructure. In the near future NorduGrid foresees an extensive scale-up both in terms of resources and users, due to recently launched projects such as the Danish Center for Grid Computing, the Nordic Data Grid Facility and SweGrid [8]. It is obvious to us that the currently available low-level Grid services will not be sufficient to professionally manage the scaled-up infrastructure and the extended user base. Therefore common policies and technical solutions (higher-level Grid services) need to be found.

[Figure 1. The NorduGrid connectivity map of Grid-enabled sites in Denmark, Finland, Norway and Sweden. Solid lines illustrate connections to existing resources; dashed lines connect sites which plan to join the facility. WAN lines: GigaSUNET 10 Gbps, NORDUnet 2.5 Gbps, UNINETT 2.5 Gbps, Forskningsnet 622 Mbps.]

2. The resources

The NorduGrid fabric consists of a dynamic and increasing number of computing and storage resources connected via the excellent NORDUnet academic network of the Nordic countries [9]. The resources belong to different administrative domains of academic institutes and Nordic supercomputing centers. The available resources can be seen at any time on the Grid Monitor [10], shown in Figure 2. The majority of the computing resources are Linux clusters, although we are experimenting with SMP machines as well. The list of Grid-enabled clusters ranges from small test clusters with a couple of CPUs to TOP500 supercomputing facilities like the Monolith or HPC2N superclusters [11]. At the time of writing, NorduGrid connects approximately 20 resources with a rather heterogeneous setup: there are both Grid-dedicated and non-dedicated resources, PC- and Alpha-based clusters, SMP machines, and clusters running RedHat, Mandrake, Debian or Slackware Linux. NorduGrid uses disk servers as storage facilities. Storage space is continuously added following user demands. As of June 2003, NorduGrid operates Terabyte-range storage elements placed in Oslo, Copenhagen, Lund, Linköping and Umeå, adding up to a total of 10 Terabytes of distributed capacity. NorduGrid is already one of the largest production Grids in the world, both in terms of the number of connected sites and of resources, all available 24 hours a day, 7 days a week. During the autumn of 2003 the facility is foreseen to grow further. Sweden has initiated the SweGrid project, which will build and Grid-enable six Beowulf-class clusters and install distributed storage on the order of 100 Terabytes. Finland plans to join the NorduGrid through its official supercomputing centre, the Centre for Scientific Computing, by adding a 512 CPU IBM Regatta machine as well as a 128 CPU SGI Origin 3000 machine. In Norway the supercomputing meta-center, NOTUR, will join with its resources: a 96 CPU IBM Regatta and an 862 CPU SGI Origin machine. Finally, Denmark joins through the Danish Centre for Grid Computing and the Danish Centre for Scientific Computing, which will add an 80 CPU IBM Regatta, a 32 CPU SGI Origin 3000 and two Beowulf-class clusters of 652 and 480 CPUs, respectively.
2.1 Site setup

The careful design of the NorduGrid middleware [5] resulted in a lightweight, portable and non-intrusive Grid solution. The main design principles are the following:

- resource owners retain full control over their resources;
- sites do not have to be Grid-dedicated; resources can be shared between Grid and non-Grid (local) applications;
- the number of Grid daemons is kept minimal, and Grid site management is kept as simple as possible;
- NorduGrid is a Grid layer, not a cluster installation, configuration or management tool;
- there is no unnecessary dictation of or constraint on the local cluster setup, and no dependency on a particular operating system version;
- the middleware is installed only on the frontend; there are no extra requirements on the nodes, e.g. nodes can be on a private network.

These principles allowed the toolkit to become widely adopted within computing centers and academic institutions. Moreover, the implementation resulted in an easily installable, configurable and maintainable Grid layer, which keeps the additional Grid management costs of a NorduGrid site minimal. Some sites are administrated by the NorduGrid developers, and some of the site administrators actively participate in the development process, creating a very healthy and close connection between developers and site administrators.

3. Grid services

The NorduGrid facility reliably delivers a broad set of Grid services for handling user authentication, authorization, job submission and management, and low-level data management, as well as services coupled to the information system such as resource aggregation, discovery, monitoring and resource selection (brokering). These services, referred to as low-level Grid services, are provided by the NorduGrid Toolkit [7], a Grid middleware solution based upon the Globus Toolkit libraries and services. Some of the original Globus components are kept (the authentication and security framework), some are slightly enhanced (authorization) or extended (the resource specification language), some are used in a different manner (LDAP-MDS), while others are completely replaced (GRAM and the Globus jobmanager) by their NorduGrid equivalents. Furthermore, NorduGrid has created new services and solutions such as the Grid Manager (a smart Grid frontend on top of the clusters), the NorduGrid GridFTP server (which, among other things, accepts Grid job requests), the Information Model with the Monitoring system, and the User Interface with integrated brokering. Even though the NorduGrid Toolkit is under constant development, the Grid services are provided reliably, since high priority is put on maintaining the quality and stability of the toolkit and the Grid. The remarkable stability of the NorduGrid was achieved by replacing the most problematic Globus components (gass-cache, GRAM, the jobmanager) and simplifying some of the others (the GIIS-GRIS from MDS). Our philosophy of using something simple but functional, coupled with a substantial testing and debugging effort, added to the overall stability. This set of low-level Grid services appears to be sufficient to operate a production Grid. At the same time it has become obvious that important higher-level Grid services are missing. In what follows, the implemented services are described together with our conception of the missing higher-level ones.

3.1 Authentication & Security

The security infrastructure of the NorduGrid complies with the Grid Security Infrastructure (GSI) [12] of the Globus Toolkit. All users and resources are authenticated by their X.509 Public Key Infrastructure (PKI) certificates. NorduGrid runs its own Certificate Authority (CA), which is responsible for establishing the identity of the NorduGrid users and resources. The NorduGrid CA is recognized by other related projects, such as the EDG [4].
In fact, NorduGrid and EDG certificates are mutually accepted.

3.2 Authorization

The current authorization solution follows the rather limited capabilities of the Globus Toolkit. Local (site) authorization is accomplished through a simple mapfile which contains the list of authorized Grid credentials mapped to local Unix user accounts. Local access control is then enforced by fine-tuning the Unix and/or the batch system's authorization capabilities. In order to facilitate the synchronization of the grid-mapfiles, a collective authorization method has been set up, in accordance with the practices of many Grid TestBeds. A central service (a GSI-enabled OpenLDAP database) maintains a list of Grid users organized into user groups. A small utility (nordugridmap) on the resources periodically pulls the list of authorized users and generates the grid-mapfile according to the local site policy, which decides upon the authorized user groups (a sample mapfile is sketched at the end of this subsection). Evidently, this authorization structure is very coarse-grained, rather static, and lacks flexibility and scalability. Therefore NorduGrid is making steps towards the usage of one of the recently proposed Grid authorization frameworks, such as the EDG's Virtual Organization Management Service (VOMS) [13] or the Globus Project's Community Authorization Service (CAS) [14].
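For concreteness, the grid-mapfile generated by nordugridmap is a plain text file in the standard Globus format: each line maps a certificate subject (Distinguished Name) to a local Unix account. A minimal sketch, with hypothetical subjects and account names:

    "/O=Grid/O=NorduGrid/OU=quark.lu.se/CN=Jane Doe"     atlas001
    "/O=Grid/O=NorduGrid/OU=nbi.dk/CN=John Smith"        atlas002
    "/O=Grid/O=NorduGrid/OU=uio.no/CN=Ola Nordmann"      gridguest

Which user groups end up in the file is decided by the local site policy.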

[Figure 2. A typical screenshot of the Grid Monitor [10] during production. The number of CPUs of each cluster is given to the right of the cluster name. The monitor shows both the number of running Grid jobs and of locally submitted (non-Grid) jobs for each cluster.]

3.3 Accounting, Logging & Bookkeeping

Terms such as accounting, logging and bookkeeping refer to Grid services responsible for collecting resource usage records, job histories and other such information. In centralized systems, where all Grid jobs pass through a central broker, the accounting service comes for free. In truly distributed environments, on the other hand, the accounting information needs to be collected from the local resources. According to our architecture [5] there is no central broker in the NorduGrid. Therefore, to maintain an accounting service, NorduGrid has to set up a central accounting database and collect the usage information from the resources. As of this writing, the NorduGrid Grid Manager [15] can collect local resource usage information and is already capable of sending this information in a SOAP message to a central accounting database. The details of the central accounting database and of the usage record are currently under development. An alternative to the above accounting service could be a Grid market economy framework. In that scenario the Grid marketplace would serve as a natural logging facility, where all resource consumers and providers would meet and their contracts would be recorded.

3.4 Uniform access to computing resources

NorduGrid offers a uniform interface to its computing resources. Physical access to the computing cycles (batch systems) is provided through a layer consisting of a GridFTP interface and the Grid Manager (GM) [15] running on the frontend machine of the cluster. Grid job requests, formulated in the extended Resource Specification Language (XRSL) [16] and submitted by the NorduGrid User Interface (UI) or a Grid portal (see Section 3.9), are received by the job-plugin of the NorduGrid GridFTP server (NGFS). The job-plugin of the NGFS replaces the GRAM/Gatekeeper [17] of the Globus Toolkit. The NGFS is also responsible for handling the stage-in requests of the User Interface; the UI can upload files through the NGFS to the cluster.
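As an illustration of the job description format, a minimal XRSL file could look as follows. This is a sketch only: the file names, URLs and runtime environment name are hypothetical, and the full attribute set is defined in the XRSL specification [16].

    &(executable="run_sim.sh")
     (arguments="42")
     (inputFiles=("input.dat" "gsiftp://se1.example.org/data/input42.dat"))
     (outputFiles=("result.dat" "gsiftp://se1.example.org/data/result42.dat"))
     (stdout="sim.out")
     (stderr="sim.err")
     (cpuTime="600")
     (runTimeEnvironment="ATLAS-6.0.4")
     (jobName="sim-42")

The inputFiles and outputFiles lists are what drive the stage-in and stage-out performed by the Grid Manager described next.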

The GM acts as a smart Grid layer running on top of the batch system. It creates, manages and cleans up the temporary job directories (called session directories), collects the input data specified in the job's XRSL file, prepares the requested RuntimeEnvironments of the job, translates the Grid job description (the XRSL file) into the batch system's language and submits the Grid job to the batch system. On requests received through the job-plugin of the NGFS, the GM cancels the Grid job and cleans up the session directory. On job completion, if requested in the XRSL, the GM automatically handles the stage-out process: it uploads files to their destinations and registers metadata into file catalogues. Additionally, the GM manages a cache area for input files on the frontend, letting consecutive jobs share input data and making redundant data transfers unnecessary. The GM also provides information on local resource usage for the accounting (Section 3.3) and monitoring (Section 3.7) systems. Let us stress here that the NorduGrid layer completely replaces (and largely extends) the functionality of the Gatekeeper, GRAM and jobmanager services of the Globus Toolkit: job submission and job management between the UI and the resource are handled by the NGFS job-plugin (i.e. no need for the Gatekeeper); the stage-in process to the session directory is governed by the GM using the NGFS (i.e. no need for the Globus gass-cache); local job submission to the queuing system is also handled by the GM; and job monitoring is implemented exclusively through the information system (no job monitoring through Jobmanager ports, see Section 3.7).

3.5 Data management

NorduGrid supports a set of low-level data management services for moving data around on the Grid and managing file replication catalogues. The currently available data management infrastructure consists of the following components: Simple Storage Elements (SSE), Replica Catalogs (RC) and a set of user-level commands. Moreover, the Grid Manager's cache area and its stage-in/stage-out machinery can be considered part of the data management infrastructure too. The SSE is actually not much more than a NorduGrid GridFTP server plus a rather simple information provider. Its main features are support for virtual directory trees, access to local file systems, and access control based on the Grid credential (at the moment, the Distinguished Name of the certificate). NorduGrid makes use of the OpenLDAP-based Globus Replica Catalog for maintaining file replication information. A couple of patches, fixing apparent problems with transferring relatively large amounts of data, has made this centralized system functional enough. The user-level commands allow the transfer, removal, replication and registration of data files in the system. These commands are built on the Globus API libraries; the only significant change is the added possibility of secure, authenticated connections to the RC server. As described in Section 3.4, the GM can automatically perform a significant part of the stage-in/stage-out process of a Grid job, thus saving the user the burden of manually managing input and output files. The GM is capable of downloading the requested input files to the job's session directory, can perform RC queries on behalf of the user and, after job completion, can upload output files to an SSE and register the corresponding metadata into an RC.
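For illustration, a user-level data management session could look like the following sketch; ngcopy and ngremove are the user-level data commands of the NorduGrid User Interface [21], while the host, collection and file names, as well as the exact rc:// URL form, are illustrative assumptions:

    # copy a file from a storage element to local disk
    $ ngcopy gsiftp://se1.example.org/data/input42.dat file:///tmp/input42.dat

    # replicate the file to another location and register the new replica
    # in a Replica Catalog
    $ ngcopy gsiftp://se1.example.org/data/input42.dat \
        rc://rc.example.org/lc=MyCollection,rc=NorduGrid,dc=nordugrid,dc=org/input42.dat

    # remove a replica together with its catalogue entry
    $ ngremove rc://rc.example.org/lc=MyCollection,rc=NorduGrid,dc=nordugrid,dc=org/input42.dat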
The experience of the ATLAS user group (see Section 4.2) showed that, although possible, it is rather difficult to handle production tasks with the present low-level data management system. Higher-level data management is very much needed: in particular, a reliable and consistent (distributed) data replication catalogue, intelligent storage elements and storage allocation mechanisms are the missing pieces of a higher-level system. Therefore NorduGrid has started to draft its data management architecture.

3.6 Resource aggregation & discovery

NorduGrid consists of a dynamic set of resources which varies over time, since sites may join and leave the Grid. A resource grouping or aggregation mechanism is used to hold the dynamic Grid together: sites soft-register to index services. Resource discovery is the process of finding the available resources through the index services. Within the NorduGrid these services are provided by the dynamic, distributed Information System (IS) [18]. The IS was created by extending the LDAP-based Monitoring and Discovery Services (MDS) [19] of the Globus Toolkit. Resource aggregation is implemented through soft registrations, which the local information databases use to register their contact information with the indices. The indices themselves can further register to other indices. This makes the Grid dynamic, allowing sites to come and go; the soft-state registrations make it possible to create specific topologies of indexed resources. NorduGrid utilizes a multi-level, cross-linked tree hierarchy which tries to follow the geographical organization: resources belonging to the same country are grouped together and register to the country's index service, and the country indices in turn register to a top-level NorduGrid index. In order to avoid any single point of failure, NorduGrid operates a multi-rooted tree with several top-level indices. Resource discovery involves querying the information indices. In the NorduGrid, clients query the indices only to find the contact information of the local databases, so the indices are used as simple dynamic link catalogues; the resources are then queried directly.
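A minimal sketch of this two-step discovery using the python-ldap module is shown below. The index host name is hypothetical, and the MDS query conventions (port 2135, the giisregistrationstatus attribute, the LDAP bases) are simplified from the Information System documentation [18]:

    import ldap

    # Step 1: a base-scope query against an index service (GIIS) returns the
    # soft-state registrations, i.e. the contact strings of the local databases.
    index = ldap.initialize("ldap://index.nordugrid.example.org:2135")
    registrants = index.search_s("Mds-Vo-name=NorduGrid, o=Grid",
                                 ldap.SCOPE_BASE,
                                 "(objectClass=*)", ["giisregistrationstatus"])

    # Step 2: query each registered resource's local database (GRIS) directly
    # for cluster attributes defined by the NorduGrid information model.
    for dn, attrs in registrants:
        if "Mds-Service-hn" not in attrs:
            continue  # skip entries without contact information
        host = attrs["Mds-Service-hn"][0].decode()
        port = attrs["Mds-Service-port"][0].decode()
        suffix = attrs["Mds-Service-Ldap-suffix"][0].decode()
        gris = ldap.initialize("ldap://%s:%s" % (host, port))
        for cdn, cattrs in gris.search_s(suffix, ldap.SCOPE_SUBTREE,
                                         "(objectClass=nordugrid-cluster)",
                                         ["nordugrid-cluster-name",
                                          "nordugrid-cluster-totalcpus"]):
            print(cdn, cattrs)

In production this logic lives inside the User Interface and the Grid Monitor; the sketch only illustrates the index-then-resource query order.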

This streamlined setup and usage of the information indices made the system much more efficient and reliable. The present implementation of soft-state registration in the MDS makes the creation of resource topologies (the aggregation of resources) a rather cumbersome process, and NorduGrid is considering a more flexible replacement for this mechanism.

3.7 Monitoring

The Grid monitoring service is built upon the distributed Information System sketched in the previous section. The Grid Monitor (whose main window is shown in Fig. 2) is actually nothing but a Grid client which takes a snapshot of the status of the Grid by performing queries on the distributed LDAP system. Just like the Information System, the Monitor follows a pull model: the status information is collected (pulled) by direct queries of the resources. To do this, the information indices are first contacted in order to find the list of available resources (contact strings). Technically, the Grid Monitor is implemented as a set of PHP scripts running on a web server. The PHP LDAP module made it especially easy to interface natively to the distributed LDAP-based IS. Within an implementation of a pull model, the issue of timeouts on pending queries is critical. Unfortunately, the original LDAP-MDS has no properly working timeouts, so frozen and hanging queries were frequent until NorduGrid provided fixes for functioning timeouts. The Monitor allows browsing through all the published information or launching real-time queries on resources and attributes. The information content presented by the Monitor is determined by the NorduGrid Information Model [18]: the NorduGrid schema provides information on clusters, batch queues, storage elements, Grid users and Grid jobs. The daily operation of the NorduGrid and the satisfaction of our users have proven the validity of our information model and monitoring service. We expect that the service will scale efficiently even at the extended size of the Grid. However, it is apparent that a full-scale snapshotting system (regardless of whether it follows a pull or a push model) is not maintainable on still larger systems; therefore different monitoring strategies must be investigated and followed. NorduGrid is performing scalability tests of its infrastructure and looking for scalable solutions.

3.8 Resource management

Resource management on the Grid usually refers to the problem of how Grid jobs are scheduled or allocated among the Grid nodes. Grid scheduling, or superscheduling over the local schedulers (batch systems), is a very complex problem (see the relevant research area of the Global Grid Forum [20]). There are pros and cons to brokering from a central pool; some people even question the justification and feasibility of such a service. On the other hand, Grid market based approaches (which are essentially centralized pools) look like rather promising candidates for solving the resource management challenge. It was not the task of NorduGrid to settle this question. Instead, NorduGrid took a very pragmatic step and followed the simplest and most straightforward approach, the individual agent based model. NorduGrid implements a fully decentralized resource management system where independent agents make their own brokering decisions after performing queries against the Information System. The NorduGrid broker comes as an integral part of the User Interface (Section 3.9). It first scans the IS for the list of available resources, then matches the job requirements against the possible targets and, following its brokering algorithm, selects the destination cluster. The brokering algorithm takes into account the number of free and occupied CPUs as well as data locality; the broker thus follows a mixture of a CPU-driven and data-driven brokering scheme.
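A minimal sketch of such an agent-side broker is given below, assuming cluster attributes have already been fetched from the IS (for instance as in the query sketch in Section 3.6). The ranking weight and all names are hypothetical illustrations, not the UI's actual algorithm:

    def rank(cluster, job):
        """Score a candidate cluster: prefer free CPUs and local input data.
        The weight on data locality is an arbitrary illustrative choice."""
        score = cluster["freecpus"] - cluster["queued"]
        # Favour clusters whose GM cache already holds some of the input files.
        score += 10 * len(job["inputs"] & cluster["cached_files"])
        return score

    def broker(clusters, job):
        """Pick a destination cluster NorduGrid-style: filter on hard
        requirements first, then rank the surviving candidates."""
        candidates = [c for c in clusters
                      if c["totalcpus"] >= job["cpus"]
                      and job["runtime_env"] in c["runtime_envs"]]
        if not candidates:
            raise RuntimeError("no matching resource found")
        return max(candidates, key=lambda c: rank(c, job))

    # Hypothetical example input, shaped like the IS attributes:
    clusters = [
        {"name": "grid.site-a.example.org", "totalcpus": 64, "freecpus": 12,
         "queued": 3, "runtime_envs": {"ATLAS-6.0.4"},
         "cached_files": {"input42.dat"}},
        {"name": "grid.site-b.example.org", "totalcpus": 128, "freecpus": 2,
         "queued": 40, "runtime_envs": {"ATLAS-6.0.4"}, "cached_files": set()},
    ]
    job = {"cpus": 1, "runtime_env": "ATLAS-6.0.4", "inputs": {"input42.dat"}}
    print(broker(clusters, job)["name"])   # -> grid.site-a.example.org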
3.9 User Interface

A set of command line tools is provided for job submission, status queries, job management and so on. There are also commands for retrieving data from finished jobs, for accessing files on Storage Elements and for manipulating replica information. Detailed information on the NorduGrid commands can be found in the User Interface manual [21]. A command line Grid session requires the preparation of an XRSL file [16], which provides the job description by specifying the input data, the executable, the output data and the resource requirements. The job is then submitted to the Grid with the ngsub command, which performs the brokering and sends the job to the selected resource (see Section 3.8 on resource allocation and brokering); a minimal session is sketched at the end of this section. The User Interface is distributed as an easily installable, preconfigured, self-contained portable package available from the NorduGrid download page. Another option for accessing the NorduGrid is a Grid portal, which is being developed at the Helsinki Institute of Physics, Finland. The NorduGrid portal is part of the GridBlocks technology framework [22]. The objective of the NorduGrid portal is to provide users with a web-based interface to the available resources and services. The portal is implemented as a lightweight Java application which embeds the NorduGrid command line interface. A portal user does not need to install the NorduGrid client software; only a web browser is required. The user simply connects his or her web browser to the portal web site and executes NorduGrid commands through the web browser interface.
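For illustration, a minimal command line session with a job description file could look as follows; the job identifier and printed output are invented, and the exact options are documented in the UI manual [21]:

    # submit the job; ngsub performs the brokering and returns a job id
    $ ngsub -f myjob.xrsl
    Job submitted with jobid gsiftp://grid.site-a.example.org:2811/jobs/12345

    # query the job status through the information system
    $ ngstat gsiftp://grid.site-a.example.org:2811/jobs/12345

    # once the job has finished, retrieve its output files
    $ ngget gsiftp://grid.site-a.example.org:2811/jobs/12345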

4. Users on the NorduGrid

The Grid is used for middleware development, system integration tests, demonstrations and tutorials, but most importantly for production runs. The existing active user base is what tells NorduGrid apart from the DemoGrids. Users are the key assets of any Grid: they are the real criterion of production-quality operation. One of the main objectives of the recently launched Nordic Grid projects is to attract users to the Grid by providing a low entrance barrier; these projects focus in particular on the traditional High Performance Computing community.

4.1 The early pioneers

A small group of scientists who traditionally deal with complex, computationally extensive problems has already discovered the NorduGrid. From their point of view the Grid is in the Golden Age of the Wild West, where there is a free lunch of vast amounts of computing power available to everybody who dares to make the first steps. Our pioneers have been successfully utilizing the Grid for supersymmetric theoretical particle physics model calculations [24] and for performing quantum Monte Carlo simulations of large-scale quantum many-body systems [25].

4.2 The ATLAS HEP group

In order to prepare for data-taking at the LHC starting in 2007, ATLAS [26] has started a series of computing challenges of increasing size and complexity. The goals include testing its computing model and software suite and integrating Grid middleware as quickly as possible. The first of these Data Challenges, DC1, ran in several stages in the second half of 2002 and the first half of 2003. For the first time, massive physics simulations ran at a total of more than 50 institutes worldwide. NorduGrid was the Scandinavian contribution to this Data Challenge. Using solely the NorduGrid facility, we were able to participate in DC1 and run real production. In total, NorduGrid ran more than 4750 Grid jobs, processing more than 2 TB of input data and producing more than 2.5 TB of output data; see [23] for a thorough account of the NorduGrid involvement in DC1. In total, about 60 TB of data was produced worldwide in DC1. The future Data Challenges, with Data Challenge 2 starting in the second quarter of 2004, will be larger and more complex. The intent is, along the way, to put up a permanent production environment at the interested institutions. This will present our group with future challenges which we are committed to solving relying exclusively on the NorduGrid.

4.3 New user groups

The Grid environment appears as an attractive alternative to the traditional structure of supercomputer centers and private research clusters. As more and more resources are made available through the Grid (see Section 2), it is expected that the conventional High Performance Computing (HPC) user community will gradually migrate to the NorduGrid. For example, in order to facilitate this process, the SweGrid project [8], launched in early 2003, will offer its resources exclusively through the Grid layer linked to the NorduGrid. The combined resources of the planned six Grid nodes exceed those of the national supercomputer centers, representing a valuable asset for the ever CPU-hungry community. Similar steps have been taken in Denmark, Norway and Finland, while the Nordic DataGrid Facility is responsible for the coordination of these developments. This strategy will bring users from many different research areas onto the Grid, and presents a serious test for the NorduGrid toolkit.
Meeting the requirements of the new user groups, while preserving reliability, stability, scalability and portability, will be a major challenge for our infrastructure.

5. Summary

The production Nordic Grid infrastructure was put into operation during the summer of 2002, and since then it has served an active user base which already considers the Grid a part of its everyday computing toolkit. The Grid spans the Nordic countries and connects an increasing number of resources, which will soon cover all the HPC facilities in Northern Europe. The strategic Nordic decision to put all High Performance Computing (HPC) resources on the NorduGrid helps the migration of the traditional HPC user community. In Sweden, SNAC (the national resource allocation committee) already allocates CPU resources on the Grid, and similar commitments have been made by the national supercomputing organizations in Norway, Finland and Denmark. Thanks to the NorduGrid middleware, the infrastructure reliably provides a solid set of low-level Grid services on a permanent basis. The foreseen scale-up of the NorduGrid necessitates the development and deployment of higher-level Grid services.

6. Acknowledgements

The pioneering Nordic Grid project, the NorduGrid, was funded by the Nordic Council of Ministers through the Nordunet2 programme and by NOS-N. The authors would like to express their gratitude to the system administrators across the Nordic countries for their courage, patience and assistance in enabling the NorduGrid environment. In particular, our thanks go to Ulf Mjörnmark and Björn Lundberg of Lund University, Björn Nilsson of NBI Copenhagen, Niclas Andersson and Leif Nixon of NSC Linköping, Åke Sandgren of HPC2N Umeå and Jacko Koster of Parallab, Bergen.

References

[1] I. Foster, What is the Grid? A three point checklist. [Online]
[2] Nordic Testbed for Wide Area Computing and Data Handling. [Online]
[3] The Globus Project. [Online]
[4] The European Union DataGrid Project. [Online]
[5] A. Wäänänen et al., An Overview of an Architecture Proposal for a High Energy Physics Grid, Proc. of PARA 2002, LNCS 2367, p. 76, Springer-Verlag Berlin Heidelberg.
[6] A. Konstantinov et al., The NorduGrid project: Using Globus toolkit for building Grid infrastructure, Proc. of ACAT 2002, Nucl. Instr. and Methods A 502 (2003), Elsevier Science.
[7] P. Eerola et al., The NorduGrid architecture and tools, Proc. of CHEP 03 (2003). [Online]
[8] SweGrid. [Online]
[9] NORDUnet, The Nordic Internet highway to research and education. [Online]
[10] The NorduGrid Grid Monitor. [Online]
[11] Nordic TOP500 SuperClusters. [Online]
[12] Grid Security Infrastructure. [Online]
[13] R. Alfieri et al., Managing Dynamic User Communities in a Grid of Autonomous Resources. [Online]
[14] L. Pearlman et al., A Community Authorization Service for Group Collaboration, Proceedings of the IEEE 3rd International Workshop on Policies for Distributed Systems and Networks.
[15] A. Konstantinov, The NorduGrid Grid Manager and GridFTP Server: Description and Administrator's Manual. [Online]
[16] O. Smirnova, Extended Resource Specification Language. [Online]
[17] Globus Resource Allocation Manager (GRAM). [Online]
[18] B. Kónya, The NorduGrid Information System. [Online]
[19] Monitoring and Discovery Services. [Online]
[20] Global Grid Forum Scheduling and Resource Management Area (SRM). [Online]
[21] M. Ellert, The NorduGrid toolkit user interface, User's manual. [Online]
[22] GridBlocks. [Online]
[23] P. Eerola et al., ATLAS Data Challenge 1 on NorduGrid, Proc. of CHEP 03 (2003). [Online]
[24] T. Sjöstrand and P. Z. Skands, Baryon Number Violation and String Topologies, Nuclear Physics B 659 (2003), no. 1-2.
[25] O. F. Syljuåsen, Directed Loop Updates for Quantum Lattice Models. [Online]
[26] The ATLAS Experiment at the Large Hadron Collider. [Online]


More information

Grid Data Management

Grid Data Management Grid Data Management Week #4 Hardi Teder hardi@eenet.ee University of Tartu March 6th 2013 Overview Grid Data Management Where the Data comes from? Grid Data Management tools 2/33 Grid foundations 3/33

More information

Michigan Grid Research and Infrastructure Development (MGRID)

Michigan Grid Research and Infrastructure Development (MGRID) Michigan Grid Research and Infrastructure Development (MGRID) Abhijit Bose MGRID and Dept. of Electrical Engineering and Computer Science The University of Michigan Ann Arbor, MI 48109 abose@umich.edu

More information

g-eclipse A Framework for Accessing Grid Infrastructures Nicholas Loulloudes Trainer, University of Cyprus (loulloudes.n_at_cs.ucy.ac.

g-eclipse A Framework for Accessing Grid Infrastructures Nicholas Loulloudes Trainer, University of Cyprus (loulloudes.n_at_cs.ucy.ac. g-eclipse A Framework for Accessing Grid Infrastructures Trainer, University of Cyprus (loulloudes.n_at_cs.ucy.ac.cy) EGEE Training the Trainers May 6 th, 2009 Outline Grid Reality The Problem g-eclipse

More information

Users and utilization of CERIT-SC infrastructure

Users and utilization of CERIT-SC infrastructure Users and utilization of CERIT-SC infrastructure Equipment CERIT-SC is an integral part of the national e-infrastructure operated by CESNET, and it leverages many of its services (e.g. management of user

More information

Interconnect EGEE and CNGRID e-infrastructures

Interconnect EGEE and CNGRID e-infrastructures Interconnect EGEE and CNGRID e-infrastructures Giuseppe Andronico Interoperability and Interoperation between Europe, India and Asia Workshop Barcelona - Spain, June 2 2007 FP6 2004 Infrastructures 6-SSA-026634

More information

The EU DataGrid Testbed

The EU DataGrid Testbed The EU DataGrid Testbed The European DataGrid Project Team http://www.eudatagrid.org DataGrid is a project funded by the European Union Grid Tutorial 4/3/2004 n 1 Contents User s Perspective of the Grid

More information

The evolving role of Tier2s in ATLAS with the new Computing and Data Distribution model

The evolving role of Tier2s in ATLAS with the new Computing and Data Distribution model Journal of Physics: Conference Series The evolving role of Tier2s in ATLAS with the new Computing and Data Distribution model To cite this article: S González de la Hoz 2012 J. Phys.: Conf. Ser. 396 032050

More information

EPCC Sun Data and Compute Grids Project Update

EPCC Sun Data and Compute Grids Project Update EPCC Sun Data and Compute Grids Project Update Using Sun Grid Engine and Globus for Multi-Site Resource Sharing Grid Engine Workshop, Regensburg, September 2003 Geoff Cawood Edinburgh Parallel Computing

More information

System upgrade and future perspective for the operation of Tokyo Tier2 center. T. Nakamura, T. Mashimo, N. Matsui, H. Sakamoto and I.

System upgrade and future perspective for the operation of Tokyo Tier2 center. T. Nakamura, T. Mashimo, N. Matsui, H. Sakamoto and I. System upgrade and future perspective for the operation of Tokyo Tier2 center, T. Mashimo, N. Matsui, H. Sakamoto and I. Ueda International Center for Elementary Particle Physics, The University of Tokyo

More information

Introduction to GT3. Introduction to GT3. What is a Grid? A Story of Evolution. The Globus Project

Introduction to GT3. Introduction to GT3. What is a Grid? A Story of Evolution. The Globus Project Introduction to GT3 The Globus Project Argonne National Laboratory USC Information Sciences Institute Copyright (C) 2003 University of Chicago and The University of Southern California. All Rights Reserved.

More information

Boundary control : Access Controls: An access control mechanism processes users request for resources in three steps: Identification:

Boundary control : Access Controls: An access control mechanism processes users request for resources in three steps: Identification: Application control : Boundary control : Access Controls: These controls restrict use of computer system resources to authorized users, limit the actions authorized users can taker with these resources,

More information

FedX: A Federation Layer for Distributed Query Processing on Linked Open Data

FedX: A Federation Layer for Distributed Query Processing on Linked Open Data FedX: A Federation Layer for Distributed Query Processing on Linked Open Data Andreas Schwarte 1, Peter Haase 1,KatjaHose 2, Ralf Schenkel 2, and Michael Schmidt 1 1 fluid Operations AG, Walldorf, Germany

More information

EU DataGRID testbed management and support at CERN

EU DataGRID testbed management and support at CERN EU DataGRID testbed management and support at CERN E. Leonardi and M.W. Schulz CERN, Geneva, Switzerland In this paper we report on the first two years of running the CERN testbed site for the EU DataGRID

More information

A Resource Discovery Algorithm in Mobile Grid Computing Based on IP-Paging Scheme

A Resource Discovery Algorithm in Mobile Grid Computing Based on IP-Paging Scheme A Resource Discovery Algorithm in Mobile Grid Computing Based on IP-Paging Scheme Yue Zhang 1 and Yunxia Pei 2 1 Department of Math and Computer Science Center of Network, Henan Police College, Zhengzhou,

More information

Europeana Core Service Platform

Europeana Core Service Platform Europeana Core Service Platform DELIVERABLE D7.1: Strategic Development Plan, Architectural Planning Revision Final Date of submission 30 October 2015 Author(s) Marcin Werla, PSNC Pavel Kats, Europeana

More information

Development of new security infrastructure design principles for distributed computing systems based on open protocols

Development of new security infrastructure design principles for distributed computing systems based on open protocols Development of new security infrastructure design principles for distributed computing systems based on open protocols Yu. Yu. Dubenskaya a, A. P. Kryukov, A. P. Demichev Skobeltsyn Institute of Nuclear

More information

Conference The Data Challenges of the LHC. Reda Tafirout, TRIUMF

Conference The Data Challenges of the LHC. Reda Tafirout, TRIUMF Conference 2017 The Data Challenges of the LHC Reda Tafirout, TRIUMF Outline LHC Science goals, tools and data Worldwide LHC Computing Grid Collaboration & Scale Key challenges Networking ATLAS experiment

More information

The Grid: Processing the Data from the World s Largest Scientific Machine

The Grid: Processing the Data from the World s Largest Scientific Machine The Grid: Processing the Data from the World s Largest Scientific Machine 10th Topical Seminar On Innovative Particle and Radiation Detectors Siena, 1-5 October 2006 Patricia Méndez Lorenzo (IT-PSS/ED),

More information

An Evaluation of Alternative Designs for a Grid Information Service

An Evaluation of Alternative Designs for a Grid Information Service An Evaluation of Alternative Designs for a Grid Information Service Warren Smith, Abdul Waheed *, David Meyers, Jerry Yan Computer Sciences Corporation * MRJ Technology Solutions Directory Research L.L.C.

More information

Empowering a Flexible Application Portal with a SOA-based Grid Job Management Framework

Empowering a Flexible Application Portal with a SOA-based Grid Job Management Framework Empowering a Flexible Application Portal with a SOA-based Grid Job Management Framework Erik Elmroth 1, Sverker Holmgren 2, Jonas Lindemann 3, Salman Toor 2, and Per-Olov Östberg1 1 Dept. Computing Science

More information