Recent Evolutions of GridICE: a Monitoring Tool for Grid Systems
Cristina Aiftimiei (INFN-Padova, Padova, Italy) cristina.aiftimiei@pd.infn.it
Vihang Dudhalkar (INFN-Bari and Dipartimento di Fisica - Politecnico di Bari) vihang007@gmail.com
Giorgio Maggi (INFN-Bari and Dipartimento di Fisica - Politecnico di Bari) giorgio.maggi@ba.infn.it
Sergio Andreozzi (INFN-CNAF, Bologna, Italy) sergio.andreozzi@cnaf.infn.it
Giacinto Donvito (INFN-Bari) giacinto.donvito@ba.infn.it
Sergio Fantinel (INFN-Padova/Legnaro, Padova, Italy) sergio.fantinel@lnl.infn.it
Antonio Pierro (INFN-Bari) antonio.pierro@ba.infn.it
Guido Cuscela (INFN-Bari) guido.cuscela@ba.infn.it
Enrico Fattibene (INFN-CNAF, Bologna, Italy) enrico.fattibene@cnaf.infn.it
Giuseppe Misurelli (INFN-CNAF, Bologna, Italy) giuseppe.misurelli@cnaf.infn.it

ABSTRACT

Grid systems must provide their users with precise and reliable information about the status and usage of available resources. The efficient distribution of this information enables Virtual Organizations (VOs) to optimize the utilization strategies of their resources and to complete the planned computations. In this paper, we describe the recent evolution of GridICE, a monitoring tool for Grid systems. These evolutions are targeted at satisfying the requirements of the main categories of users: Grid operators, site administrators, Virtual Organization (VO) managers and Grid users.

Categories and Subject Descriptors

H.4 [Information Systems Applications]: Miscellaneous; D.2.8 [Software Engineering]: Metrics - performance measures

(Author footnotes: on leave from NIPNE-HH, Bucharest, Romania; contact author.)

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page.
To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. HPDC'07, June 25-29, 2007, Monterey, California, USA. Copyright 2007 ACM /07/ $5.00.

General Terms

Measurement, Performance, Management

Keywords

Grid computing, monitoring, measurement, data quality, performance analysis

1. INTRODUCTION

Grid computing is concerned with the virtualization, integration and management of services and resources in a distributed, heterogeneous environment that supports collections of users and resources across traditional administrative and organizational domains [24]. One aspect of particular importance is Grid monitoring, that is, the activity of measuring significant Grid resource-related parameters in order to analyze the status, usage, behavior and performance of a Grid system. Grid monitoring also helps in the detection of faulty situations, contract violations and user-defined events. Two main types of monitoring can be identified: infrastructure monitoring and application monitoring. The former aims at collecting information about Grid resources; it can also maintain the history of observations in order to perform retrospective analysis. The latter aims at enabling the observation of a particular execution of an application; the collected data can be useful to the application development activity or for visualizing its behavior when running on a machine with no login access rights (i.e., the typical Grid use case). In this area, GridICE [14], an open source distributed monitoring tool for Grid systems, provides full coverage
of the first type. The project was started in late 2002 within the EU-DataTAG [8] project and is evolving in the context of EU-EGEE [9] and related projects. GridICE is fully integrated with the glite middleware [15]; in fact, its metering service and the publishing of the measured data can be configured via the glite installation mechanisms. GridICE is designed to serve different categories of monitoring information consumers: aggregation dimensions are provided for Grid operators, VO managers and site administrators. Recent work has been devoted to the addition of a group-level aggregation granularity based on the privilege attributes associated with a user, i.e., the groups and roles provided by the Virtual Organization Membership Service (VOMS) [2]. This capability is an important step towards the support of a correct group-based allocation of Virtual Organization resources. By means of this feature, the relevant persons can observe the VO activity as a whole or drill down through its groups/roles to the individual users.

This paper is organized as follows: Section 2 presents the GridICE architecture and its main features; Section 3 focuses on the new sensors; Section 4 discusses data quality and related tests; finally, Section 5 draws conclusions together with directions for future work.

2. GRIDICE ARCHITECTURE AND IMPLEMENTATION

In this section, we summarize the architecture and relevant implementation choices of GridICE (a detailed description can be found in [1, 3]). GridICE consists of three main components (see Figure 1).
The sensors perform the measurement process on the monitored entities; the site collector aggregates the information produced by the different sensors installed within a site domain and publishes it via the Grid Information Service; the server performs several functions: (1) discovery of new available resources to be monitored; (2) periodical observation of the information published by the sites; (3) storage of the observed information in a relational database; (4) presentation of the collected information by means of HTML pages, XML documents and charts, which can be accessed by end-users or consumed by other automated tools; (5) processing of the information in order to identify malfunctions and send the appropriate notifications to the subscribed users.

A key design choice of GridICE is that all monitoring information should be distributed outside each site using the Grid Information Service interface. In glite, this service is based on the Lightweight Directory Access Protocol (LDAP) [25] and is structured as a hierarchical set of servers based on different implementations: the MDS GRIS [7] for the leaf nodes and the OpenLDAP server [21] with the Berkeley Database [5] as a back-end for the intermediate and root nodes. In the context of glite, the latter implementation is called BDII (Berkeley Database Information Index).
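Queries against a GRIS or BDII return LDIF text. As an illustrative sketch (not GridICE code; the attribute names follow the GLUE schema, but the hostnames and values are invented), such a response can be parsed into per-entry attribute maps:

```python
# Minimal parser for the LDIF text returned by an LDAP search against a
# GRIS/BDII. Each entry is a dict mapping attribute names to value lists.
# Continuation lines (LDIF folding) are ignored in this sketch.

def parse_ldif(text):
    """Split an LDIF document into entries, one dict per entry."""
    entries, current = [], {}
    for line in text.splitlines():
        if not line.strip():            # a blank line separates entries
            if current:
                entries.append(current)
                current = {}
            continue
        key, _, value = line.partition(": ")
        current.setdefault(key, []).append(value)
    if current:                         # flush the last entry
        entries.append(current)
    return entries

# Invented sample response with two computing elements:
sample = """dn: GlueCEUniqueID=ce.example.org:2119/jobmanager-pbs-grid
objectClass: GlueCE
GlueCEInfoTotalCPUs: 70

dn: GlueCEUniqueID=ce2.example.org:2119/jobmanager-lsf-grid
objectClass: GlueCE
GlueCEInfoTotalCPUs: 202
"""
entries = parse_ldif(sample)
```

A real deployment would obtain the text from an LDAP client pointed at port 2135/2170 of the information service node; the parsing step is the same.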
Given the above design principle (monitoring information distributed via the Grid Information Service), a GridICE server is able to gather information from an up-to-date list of information sources via a two-step process: (1) a periodical query is performed on a set of root nodes (BDIIs) configured by the GridICE server administrator; the output is compared with the current list of information sources maintained by the GridICE server, new or disappeared sources are detected, and an up-to-date list is defined; (2) starting from this list of active sources of monitoring data, and according to a number of parameters set by the server administrator, a new configuration is generated for the scheduler responsible for the periodical run of plug-ins whose purpose is to read the data advertised by a given data source, compare it with the content of the persistent storage and update the related information. The typical frequency of the discovery process is once a day, while the frequency of the plug-ins depends on the type of services to which the monitoring information refers (with the default configuration, it varies from a minimum of 5 minutes to a maximum of 30 minutes).

Figure 1: GridICE Architecture

2.1 LEMON and GridICE synergies

The GridICE architecture (see Figure 1) can be considered a 2-level hierarchical model: the intra-site level concerns the domain of an administrative site and aims at measuring and collecting the monitoring data in a single logical repository; the inter-site level concerns the distribution of monitoring data across sites and enables Grid-wide access to the site repositories. This is an important design choice since it enforces the concept of domain ownership, that is, it clearly defines the domain boundaries so that site administrators can govern what information is offered outside their managed domain.
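The discovery step described above (comparing the BDII output against the server's current list of sources) amounts to a set difference. A minimal sketch, with invented function and URL names:

```python
# Hypothetical sketch of the GridICE discovery step: diff the sources
# advertised by the BDII against the list the server already knows.

def update_source_list(known_sources, bdii_snapshot):
    """Return the up-to-date source list plus the new and vanished sources."""
    current = set(bdii_snapshot)
    known = set(known_sources)
    new_sources = current - known        # to be added and scheduled
    gone_sources = known - current       # to be flagged as disappeared
    return sorted(current), sorted(new_sources), sorted(gone_sources)

# Example: since the last run, site-c appeared and site-b disappeared.
known = ["ldap://gris.site-a.example:2136", "ldap://gris.site-b.example:2136"]
snapshot = ["ldap://gris.site-a.example:2136", "ldap://gris.site-c.example:2136"]
up_to_date, added, removed = update_source_list(known, snapshot)
```

The up-to-date list then drives the generation of the scheduler configuration for the per-source plug-ins.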
In the intra-site level, the transportation of data is typically performed by a fabric monitoring service, while in the inter-site level it relies on the Grid Information Service. The two levels are totally decoupled and, in principle, different fabric monitoring services can be adapted to measure and locally collect monitoring data. The default option proposed by GridICE is LEMON [16], a fabric monitoring tool developed at CERN. This tool is scalable and offers a rich set of sensors for fabric-related attributes such as CPU load, memory utilization and available space in the file system. Via the plug-in system, GridICE-specific sensors are configured and the related information is measured and collected. By means of a transformation adapter (fmon2glue), data stored in the local repository is translated into the LDAP Data Interchange Format (LDIF) [13] and injected into a special instance of a GRIS [7] that acts as a site publisher (see Figure 1). The data published by this GRIS is not propagated up to the Grid Information
Service hierarchy, in order to avoid overloading the higher-level nodes. Instead of propagating the monitoring data, the URL (Uniform Resource Locator) of this special GRIS is advertised so that the GridICE server can discover its existence and directly pull the information. The current GridICE release includes an old version of LEMON (v2.5.4). Recently, a substantial effort was spent to integrate a new version (v2.13.x). Thanks to this improvement, both GridICE and the local monitoring can exploit a set of new features. For instance, site administrators will have the possibility to observe the activity of their farm by using the LEMON RRD Framework (LRF), which provides a Web-based interface to more detailed fabric-related information that is not meaningful to export at the Grid level. This upgrade required modifications of the whole set of GridICE-specific sensors in order to comply with the new naming conventions. Furthermore, some old sensors developed within GridICE were dropped in favor of new ones provided by LEMON. Finally, a significant rewriting of the fmon2glue component was required. The upgraded version of the GridICE sensors has been under testing at the INFN-Bari site since the beginning of February 2007.

3. THE NEW SENSORS

The measurements managed by GridICE are an extension of those defined in the GLUE Schema [4]. Sensors related to the attributes defined in this schema are part of the glite middleware and the related information is collected by querying the Grid Information Service. The extensions to the GLUE Schema concern: (1) fabric-level information; (2) individual Grid jobs; (3) aggregated information from the Local Resource Manager System (LRMS). In the near future, we plan to include new sensors concerning the glite Workload Management Service (WMS), file transfer and file access. In Section 3.1, we describe the features of the LRMSInfo sensor, while in Section 3.2 we report on the sensor related to individual job monitoring.
Finally, in Section 3.3, we list the improvements made to the Web presentation to take the newly available measurements into account.

3.1 The LRMSInfo Sensor

The LRMSInfo sensor provides aggregated information about the Local Resource Manager System and was released with GridICE in September 2006. Different incarnations of this sensor exist for the relevant batch systems used in EGEE: TORQUE/Maui [23, 19] and LSF [18]. The attributes measured by this sensor are: the number of available CPUs accessible by Grid jobs (CPUs exclusively associated with queues not interfaced to a Grid are not counted); the number of used CPUs; the number of off-line CPUs; the average farm load; the total, available and used memory; and the number of running and waiting jobs (see Figure 2).

Figure 2: Information produced by the LRMSInfo sensor

This information is used on the server side to provide preliminary Service Level Agreement (SLA) monitoring support. As opposed to the information available from the glite basic information providers, LRMSInfo provides an aggregate view that avoids double counting of resources; this is a well-known problem caused by the publication of per-queue information for resources available to many queues. The LRMSInfo sensor requires one installation per LRMS server (batch system master node) and normally runs on the Grid head node. The sensor is not intrusive, since it requires just a few seconds to complete the measurement process. As an example, we measured that on one of the Italian Tier-1 (INFN-T1) access nodes, with a dual Intel(R) Xeon(TM) 3.06 GHz CPU and 3,700 managed jobs (running + queued), a typical execution time for the sensor is 6.880 s user and 0.280 s system time. Normally, the script is scheduled every 2 minutes, but the time interval can be adjusted as preferred.

3.2 The Job Monitoring Sensor

Grid users want to know the status of their jobs on a Grid infrastructure, while site administrators want to know who is running what and where.
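The per-queue double counting avoided by the LRMSInfo aggregate view (Section 3.1) can be illustrated with a small sketch; the queue-to-node mapping below is invented:

```python
# Two queues share two worker nodes. Summing the per-queue CPU counts,
# as per-queue information providers do, counts the shared nodes twice;
# aggregating over distinct nodes, as LRMSInfo does, does not.

queues = {
    "short": ["wn01", "wn02", "wn03"],
    "long":  ["wn02", "wn03", "wn04"],   # wn02 and wn03 serve both queues
}

# Naive per-queue sum (double counts wn02 and wn03):
per_queue_total = sum(len(nodes) for nodes in queues.values())

# Aggregate view over distinct nodes:
distinct_total = len(set().union(*queues.values()))
```

Here the naive sum reports 6 CPUs while only 4 distinct nodes exist, which is exactly the discrepancy the aggregate sensor removes.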
This need is fulfilled by the GridICE Job Monitoring sensor; the measured attributes are described in Table 1. Implementing this sensor presented a number of challenges to be faced. First of all, the required information is spread across many sources (e.g., log files, batch system APIs). Secondly, sites supporting large numbers of running and queued jobs present scalability concerns due to the need to interact with the batch system and to access a number of log files. Several invocations of system commands have to be performed for each sensor run, with the risk of affecting the performance of other services. For these reasons, the approach to the design of this sensor changed over time in order to reduce its intrusiveness in terms of resource consumption in a shared execution environment. In the latest GridICE release, this sensor is composed of two daemons and a probe to be executed periodically, all installed on the Grid head node of a farm.

Field        Description
NAME         Job name
JOB ID       Local LRMS job id
GRID ID      Grid job unique id
USER         Local mapped username
VO           User VO name
QUEUE        Queue
QTIME        Job creation time
START        Job start time
END          Job end time
STATUS       Job status
CPUTIME      Job CPU usage time
WALLTIME     Job walltime
MEMORY       Job memory usage
VMEMORY      Job virtual memory usage
EXEC HOST    Execution host (WN)
EXIT STATUS  Batch system exit status
SUBJECT      User DN
VOMS ROLES   User VOMS roles

Table 1: Attributes measured by the Job Monitoring sensor

With the current design, we are able to efficiently measure the job information also at large sites. As an example, in the last months the INFN-T1, handling thousands of jobs, was monitored without creating any load problem. This goal was achieved through a stateful strategy spanning different executions of the sensor. The two daemons, written in the Perl and C programming languages, listen to a set of log files and collect the relevant information. For each run of the probe, this information is correlated with the output of a few LRMS command invocations and the status of all jobs is stored in a cache. Subsequent executions of the probe update the already available information only if needed (e.g., the data related to jobs staying in a queue for a long time do not cause any update besides the initial state change of entering the queue). On one of the INFN-T1 Grid access nodes equipped with a dual Intel(R) Xeon(TM) 3.06 GHz CPU, we measured a time lower than one minute (15 s user, 15 s system) to collect the information for 500 jobs. Normally, the probe is executed every 20 minutes, but the time interval can be adjusted as preferred. As regards the daemon parsing the accounting log of the LSF batch system, an execution time of 2.5 s (1.8 s user, 0.2 s system) was measured to retrieve the needed information from a 37 MB log file related to 54,808 jobs. This test was performed on a dual Intel(R) Xeon(TM) 2.80 GHz CPU.
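The stateful update strategy described above can be sketched as follows; the function name, job identifiers and states are illustrative, not the actual sensor code:

```python
# Each probe run merges the freshly observed job states into a persistent
# cache and touches only the records whose state actually changed, so
# long-queued jobs cost nothing after their initial observation.

def update_cache(cache, observed):
    """Merge observed states into the cache; return the changed job ids."""
    changed = set()
    for job_id, state in observed.items():
        if cache.get(job_id) != state:   # new job or state transition
            cache[job_id] = state
            changed.add(job_id)
    return changed

# Example run: one job finishes, one new job appears, one is unchanged.
cache = {"42.ce": "queued", "43.ce": "running"}
changed = update_cache(
    cache, {"42.ce": "queued", "43.ce": "done", "44.ce": "queued"}
)
```

Only the changed records would then be pushed to the site publisher, which is what keeps the probe cheap on sites with thousands of mostly idle queued jobs.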
There are security and privacy aspects related to the job monitoring sensor, since it is able to measure the Grid identity of the user submitting a job. For security reasons, the measurement of this attribute is disabled by default and can be enabled by the site administrators at each site. The GridICE server is being evolved in order to show sensitive information only to authorized persons. Grid-wide data distribution requires the evolution of the Grid Information Service in order to provide the necessary level of privacy.

3.3 Improvements of the Web Presentation

The Web presentation relies on an XML abstraction built on top of the data stored in the PostgreSQL database management system running as part of the GridICE server. The XML documents are transformed into HTML pages via XSLT transformations. A rich set of charts generated via the JpGraph library is also available. The design of the presentation is based on requirements from Grid operators, VO managers, site administrators and end-users. For Grid operators, summary views of aspects such as information resource status, service status and host status are provided. Site administrators can appreciate the job monitoring capability, which shows the status and computing activity of the jobs accepted by the managed resources. VO managers and end-users can rely on GridICE to verify the available resources and their status before starting the submission of a large number of jobs and, after the submission, to follow the jobs' execution. Grid operators and site administrators also rely on the notification capability to drive their attention towards emerging problems.

Figure 3: Sites part of the Italy region

Figure 4: Usage Metering
The following new features were included in the GridICE server release published in September 2006: a view of the monitored sites belonging to a certain region (see Figure 3); the integration of information from the EGEE GOC database (e.g., scheduled downtime); LRMSInfo-based information and preliminary SLA support (see Figures 4, 5 and 6); and new statistical plots with an improved look and feel. Further work is being carried out to properly handle the VOMS information (groups and roles of users submitting
the jobs) gathered by the new version of the job monitoring sensors (not yet released).

Figure 5: Biomed VO activity as shown to a user signed in as biomed VO manager

Figure 6: Bari Farm usage monitoring provided by GridICE (proof of concept of monitoring with group/role details)

Figure 7: Table showing to the user, identified by means of the browser certificate, his or her own jobs

There are two main goals for this extension. The first goal is to provide reports about resource usage with the details of the VOMS groups, roles and users, in contrast with the current situation where only VO-related information can be provided. This is a precise requirement of the BioinfoGRID project [6], since the BioinfoGRID users belong to the BioinfoGRID group within the biomed VO. Being able to track the user group would make it possible to distinguish the activity of the BioinfoGRID project from the activity of the rest of the biomed VO. The second goal is to be able to select the information presented according to the consumer's identity, group and/or role (see Figure 7). To properly handle this use case, the GridICE server will be configured and extended in order to retrieve the user identity from the digital certificate installed in his or her browser. By using the identity, the related role (e.g., site manager, none) can be retrieved from the GOC database. Sensitive information will be shown only to those having the right credentials. For instance, in the case of a site manager, the identities of the users submitting jobs to that particular site will be shown. The GridICE server will guarantee both unauthenticated and authenticated access.

4. ABOUT DATA QUALITY

The quality of a Grid monitoring system depends on many factors. Sensors not only have to be non-intrusive, but they also have to meet requirements on data quality in terms of trustworthiness, objectiveness and ease of use.
The different stages of transportation from the sources to the central server have to preserve this quality. In this section, we present
the analysis performed on the data collected during more than two months of operation. For this test, we consider as truth what is contained in the batch system log files. However, since the direct handling of batch system log files is heavy and not handy, we opted to rely on refined data extracted by means of other tools. As regards the TORQUE/Maui batch system, we created a relational database using the PBS tools by the Ohio Supercomputer Center [22]. This tool does not consider jobs canceled before the start of their execution. The PBS tools were slightly modified in order to store the data in a remote database located in Bari for all the farms included in the test, as shown in Figure 8.

Figure 8: Set-up for job monitoring test and debugging

                  GridICE
Batch system      Present    Not present    Total     Efficiency
Present           13,032     13             13,045    99.90%
Not present       1,705      -              1,705     -
Total             14,737     13             14,750    -

Table 2: Comparison of the number of observed jobs provided by GridICE against those provided by the TORQUE/Maui batch system in the INFN-Pisa farm (Jan 1 - Feb 24, 2007)

1,702 out of the 1,705 refer to jobs killed while still in the queue; therefore, they were correctly recorded by GridICE and not observed by the PBS tools. 3 out of the 1,705 refer to jobs for which GridICE recorded incomplete information (that is, the job information disappeared from the information service before a valid final state was observed). Considering the 13,032 jobs that were observed by both the PBS tools and GridICE, only 21 showed some difference in one of the related attribute values. The comparison of the accounted CPU and wall time is presented in Table 3. The quality of the GridICE data is even better than the already good quality derived from the number of jobs.

            GridICE        Batch system    GridICE/Batch system
Wall time   211,446,…      …,319,…         …%
CPU time    162,369,…      …,257,…         …%

Table 3: Comparison of the wall and CPU time provided by GridICE against those provided by the TORQUE/Maui batch system in the INFN-Pisa farm (Jan 1 - Feb 24, 2007)
In this way, we were able to compare the information recorded for each job by the batch system with the information measured by GridICE. Concerning LSF, since we did not find a similar tool, we developed our own solution to generate the relational database. This was done only for aggregated information. The data quality analysis focused on two Grid sites: INFN-Pisa, providing 70 cores, handling around 7,500 jobs per month and running the TORQUE/Maui local resource manager; and INFN-LNL-2, providing 202 cores and running the LSF local resource manager. For the INFN-Pisa site, Table 2 summarizes the number of observed jobs in the period from 1st January 2007 up to 24th February 2007. We found 1,705 jobs present in GridICE that were not present in the batch system records. Concerning the INFN-LNL-2 site, the first step was to validate our script used to generate the relational database with aggregated job monitoring information. In Table 4, we show its accuracy considering the jobs observed in the period from 4th January 2007 up to 3rd February 2007. The test demonstrates a precision better than one part in ten thousand.

                     Our script     LSF bacct    Our script/LSF bacct
Done jobs            9,034
Exited jobs          659
Total jobs           9,693          9,…          …%
CPU time             143,602,…      …,594,…      …%
Wall time            170,728,730
Average turnaround   35,870         35,…         …%

Table 4: Comparison of the job information obtained by means of the bacct LSF command with that provided by our script in the INFN-LNL-2 farm (Jan 4 - Feb 3, 2007)

Table 5 compares the number of observed jobs in the INFN-LNL-2 farm during the period from 1st January 2007 to 30th January 2007. As regards the 1,203 jobs present in GridICE and not in the LSF logs, more than 70% refer to exit codes different from zero or to jobs canceled by the user. As regards the 16,133 jobs observed by both the batch system and GridICE, 226 (less than 1.5%) showed some difference in one of the related attribute values.
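The bookkeeping behind comparisons such as Tables 2 and 5 reduces to set operations on the job identifiers recorded by each system. A hedged sketch, with invented job ids:

```python
# Compare the job ids recorded by the batch system with those recorded
# by the monitor, and compute the monitor's observation efficiency as
# the fraction of batch-system jobs it also observed.

def compare(batch_jobs, gridice_jobs):
    both = batch_jobs & gridice_jobs
    only_batch = batch_jobs - gridice_jobs       # missed by the monitor
    only_gridice = gridice_jobs - batch_jobs     # e.g. jobs killed in queue
    efficiency = len(both) / len(batch_jobs)
    return both, only_batch, only_gridice, efficiency

# Tiny illustrative example:
batch = {"j1", "j2", "j3", "j4"}
gridice = {"j1", "j2", "j3", "j5"}
both, only_b, only_g, eff = compare(batch, gridice)
```

Per-attribute differences (the "21 jobs" and "226 jobs" figures) would then be computed only over the intersection.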
The comparison of the wall and CPU time accounted by GridICE and by the LSF batch system is shown in Table 6.

                  GridICE
Batch system      Present    Not present    Total     Efficiency
Present           16,133     47             16,180    99.71%
Not present       1,203      -              1,203     -
Total             17,336     47             17,383    -

Table 5: Comparison of the number of observed jobs provided by GridICE against those provided by LSF running on the INFN-LNL-2 farm (Jan 1 - Jan 30, 2007)

            GridICE        Batch system    GridICE/Batch system
Wall time   372,481,…      …,564,…         …%
CPU time    340,330,…      …,659,…         …%

Table 6: Comparison of the used wall and CPU time on the INFN-LNL-2 farm (Jan 1 - Jan 30, 2007)

The LSF sensor appears to be less accurate than the corresponding TORQUE/Maui one. The above tests helped us to better tune the sensors; therefore, we expect to obtain better results in the near future. As regards tests on large sites, the INFN-T1 relies on the LSF batch system to manage 2,500 cores. We have not yet performed data quality tests at this site; nevertheless, we compared the number of Grid jobs observed by the INFN-T1 local monitoring system with the number observed by GridICE. In Figure 9, we show the comparison of the two measurements over a period of a week, obtained by summing up the number of individually observed jobs. The difference between the two values at a given time is satisfactory for most of the week. In the last part of the monitored week, we observed a deficit in the number of jobs monitored by GridICE. The reason was traced thanks to the GridICE fabric monitoring and its notification capabilities: one of the INFN-T1 Grid head nodes was not working properly, causing the loss of information for all the jobs handled by that particular machine.

5. CONCLUSIONS

In this paper, we have presented the recent evolution of GridICE, a monitoring tool for Grid systems. These evolutions mainly focused on improving the stability and reliability of the whole system, introducing new sensors and extending the Web presentation. A detailed description of the motivations and issues involved in evolving these features was provided, with particular attention to job monitoring and batch system information collection.
A data quality analysis was performed in a production environment in order to investigate the trustworthiness of the data measured and collected by the GridICE server. The results confirmed that GridICE meets the expected level of correctness. Future work is targeted at the proper handling of the VOMS attributes attached to a user proxy certificate. Furthermore, the integration of new sensors related to the glite WMS and to the monitoring of file access and transfer is envisioned.

6. ACKNOWLEDGMENTS

We would like to thank the funding projects BioinfoGRID [6], EGEE [9], EUChinaGrid [10], EU-IndiaGrid [11], EUMedGrid [12], LIBI [17] and OMII-Europe [20] for supporting our work. Many thanks also to the LEMON team for their fruitful collaboration and prompt support. This work makes use of results produced by the Enabling Grids for E-sciencE (EGEE) project, a project co-funded by the European Commission (under contract number INFSO-RI) through the Sixth Framework Programme. EGEE brings together 91 partners in 32 countries to provide a seamless Grid infrastructure available to the European research community 24 hours a day.

Figure 9: Comparison of GridICE job monitoring and LSF local job monitoring on the INFN-T1 farm

7. ADDITIONAL AUTHORS

8. REFERENCES

[1] C. Aiftimiei, S. Andreozzi, G. Cuscela, N. De Bortoli, G. Donvito, E. Fattibene, G. Misurelli, A. Pierro, G. Rubini, and G. Tortone. GridICE: Requirements, Architecture and Experience of a Monitoring Tool for Grid Systems. In Proceedings of the International Conference on Computing in High Energy and Nuclear Physics (CHEP2006), Mumbai, India, Feb 2006.
[2] R. Alfieri, R. Cecchini, V. Ciaschini, L. dell'Agnello, A. Frohner, A. Gianoli, K. Lörentey, and F. Spataro. VOMS, an Authorization System for Virtual Organizations. In Proceedings of the 1st European Across Grids Conference, Santiago de Compostela, Spain, February 2003. LNCS 2970:33-40.
[3] S. Andreozzi, N. De Bortoli, S. Fantinel, A. Ghiselli, G. Rubini, G. Tortone, and M. Vistoli. GridICE: a Monitoring Service for Grid Systems. Future Generation Computer Systems, 21(4):559-571, 2005.
[4] S. Andreozzi, S. Burke, L. Field, S. Fisher, B. Kónya, M. Mambelli, J. Schopf, M. Viljoen, and A. Wilson. GLUE Schema Specification, Version 1.2, Dec 2005.
[5] Berkeley Database.
[6] BioinfoGRID: Bioinformatics Grid Application for life science.
[7] K. Czajkowski, S. Fitzgerald, I. Foster, and C. Kesselman. Grid Information Services for Distributed Resource Sharing. In Proceedings of the 10th IEEE International Symposium on High-Performance Distributed Computing (HPDC-10), San Francisco, CA, USA, Aug 2001.
[8] European DataTAG project.
[9] Enabling Grids for E-sciencE (EGEE) project.
[10] EUChinaGrid project.
[11] EU-IndiaGrid project.
[12] EUMedGrid project.
[13] G. Good. The LDAP Data Interchange Format (LDIF). IETF RFC 2849, Jun 2000.
[14] GridICE Website.
[15] E. Laure et al. Programming the Grid with glite. Technical Report EGEE-TR, CERN.
[16] LEMON - LHC Era Monitoring.
[17] LIBI: International Laboratory on Bioinformatics.
[18] Load Sharing Facility (LSF).
[19] Maui Cluster Scheduler.
[20] OMII-Europe.
[21] OpenLDAP - The OpenLDAP Project.
[22] Portable Batch System tools - Ohio Supercomputer Center.
[23] TORQUE Resource Manager.
[24] J. Treadwell. The Open Grid Services Architecture (OGSA) Glossary of Terms, Version 1.5. OGF GFD.81, Jul 2006.
[25] M. Wahl, T. Howes, and S. Kille. Lightweight Directory Access Protocol (v3). IETF RFC 2251, Dec 1997.
More informationUNICORE Globus: Interoperability of Grid Infrastructures
UNICORE : Interoperability of Grid Infrastructures Michael Rambadt Philipp Wieder Central Institute for Applied Mathematics (ZAM) Research Centre Juelich D 52425 Juelich, Germany Phone: +49 2461 612057
More informationGStat 2.0: Grid Information System Status Monitoring
Journal of Physics: Conference Series GStat 2.0: Grid Information System Status Monitoring To cite this article: Laurence Field et al 2010 J. Phys.: Conf. Ser. 219 062045 View the article online for updates
More informationGrids and Security. Ian Neilson Grid Deployment Group CERN. TF-CSIRT London 27 Jan
Grids and Security Ian Neilson Grid Deployment Group CERN TF-CSIRT London 27 Jan 2004-1 TOC Background Grids Grid Projects Some Technical Aspects The three or four A s Some Operational Aspects Security
More informationBookkeeping and submission tools prototype. L. Tomassetti on behalf of distributed computing group
Bookkeeping and submission tools prototype L. Tomassetti on behalf of distributed computing group Outline General Overview Bookkeeping database Submission tools (for simulation productions) Framework Design
More informationAn Evaluation of Alternative Designs for a Grid Information Service
An Evaluation of Alternative Designs for a Grid Information Service Warren Smith, Abdul Waheed *, David Meyers, Jerry Yan Computer Sciences Corporation * MRJ Technology Solutions Directory Research L.L.C.
More informationGrid Scheduling Architectures with Globus
Grid Scheduling Architectures with Workshop on Scheduling WS 07 Cetraro, Italy July 28, 2007 Ignacio Martin Llorente Distributed Systems Architecture Group Universidad Complutense de Madrid 1/38 Contents
More informationAndrea Sciabà CERN, Switzerland
Frascati Physics Series Vol. VVVVVV (xxxx), pp. 000-000 XX Conference Location, Date-start - Date-end, Year THE LHC COMPUTING GRID Andrea Sciabà CERN, Switzerland Abstract The LHC experiments will start
More informationEUROPEAN MIDDLEWARE INITIATIVE
EUROPEAN MIDDLEWARE INITIATIVE VOMS CORE AND WMS SECURITY ASSESSMENT EMI DOCUMENT Document identifier: EMI-DOC-SA2- VOMS_WMS_Security_Assessment_v1.0.doc Activity: Lead Partner: Document status: Document
More informationOn the employment of LCG GRID middleware
On the employment of LCG GRID middleware Luben Boyanov, Plamena Nenkova Abstract: This paper describes the functionalities and operation of the LCG GRID middleware. An overview of the development of GRID
More informationMONITORING OF GRID RESOURCES
MONITORING OF GRID RESOURCES Nikhil Khandelwal School of Computer Engineering Nanyang Technological University Nanyang Avenue, Singapore 639798 e-mail:a8156178@ntu.edu.sg Lee Bu Sung School of Computer
More informationInteroperating AliEn and ARC for a distributed Tier1 in the Nordic countries.
for a distributed Tier1 in the Nordic countries. Philippe Gros Lund University, Div. of Experimental High Energy Physics, Box 118, 22100 Lund, Sweden philippe.gros@hep.lu.se Anders Rhod Gregersen NDGF
More informationMonitoring tools in EGEE
Monitoring tools in EGEE Piotr Nyczyk CERN IT/GD Joint OSG and EGEE Operations Workshop - 3 Abingdon, 27-29 September 2005 www.eu-egee.org Kaleidoscope of monitoring tools Monitoring for operations Covered
More informationIntroduction to Grid Infrastructures
Introduction to Grid Infrastructures Stefano Cozzini 1 and Alessandro Costantini 2 1 CNR-INFM DEMOCRITOS National Simulation Center, Trieste, Italy 2 Department of Chemistry, Università di Perugia, Perugia,
More informationScalable Computing: Practice and Experience Volume 10, Number 4, pp
Scalable Computing: Practice and Experience Volume 10, Number 4, pp. 413 418. http://www.scpe.org ISSN 1895-1767 c 2009 SCPE MULTI-APPLICATION BAG OF JOBS FOR INTERACTIVE AND ON-DEMAND COMPUTING BRANKO
More informationShibVomGSite: A Framework for Providing Username and Password Support to GridSite with Attribute based Authorization using Shibboleth and VOMS
ShibVomGSite: A Framework for Providing Username and Password Support to GridSite with Attribute based Authorization using Shibboleth and VOMS Joseph Olufemi Dada & Andrew McNab School of Physics and Astronomy,
More informationELFms industrialisation plans
ELFms industrialisation plans CERN openlab workshop 13 June 2005 German Cancio CERN IT/FIO http://cern.ch/elfms ELFms industrialisation plans, 13/6/05 Outline Background What is ELFms Collaboration with
More informationMonitoring ARC services with GangliARC
Journal of Physics: Conference Series Monitoring ARC services with GangliARC To cite this article: D Cameron and D Karpenko 2012 J. Phys.: Conf. Ser. 396 032018 View the article online for updates and
More informationGridMonitor: Integration of Large Scale Facility Fabric Monitoring with Meta Data Service in Grid Environment
GridMonitor: Integration of Large Scale Facility Fabric Monitoring with Meta Data Service in Grid Environment Rich Baker, Dantong Yu, Jason Smith, and Anthony Chan RHIC/USATLAS Computing Facility Department
More informationMONTE CARLO SIMULATION FOR RADIOTHERAPY IN A DISTRIBUTED COMPUTING ENVIRONMENT
The Monte Carlo Method: Versatility Unbounded in a Dynamic Computing World Chattanooga, Tennessee, April 17-21, 2005, on CD-ROM, American Nuclear Society, LaGrange Park, IL (2005) MONTE CARLO SIMULATION
More informationAdvanced School in High Performance and GRID Computing November Introduction to Grid computing.
1967-14 Advanced School in High Performance and GRID Computing 3-14 November 2008 Introduction to Grid computing. TAFFONI Giuliano Osservatorio Astronomico di Trieste/INAF Via G.B. Tiepolo 11 34131 Trieste
More informationThe LHC Computing Grid. Slides mostly by: Dr Ian Bird LCG Project Leader 18 March 2008
The LHC Computing Grid Slides mostly by: Dr Ian Bird LCG Project Leader 18 March 2008 The LHC Computing Grid February 2008 Some precursors Computing for HEP means data handling Fixed-target experiments
More informationGrid Infrastructure For Collaborative High Performance Scientific Computing
Computing For Nation Development, February 08 09, 2008 Bharati Vidyapeeth s Institute of Computer Applications and Management, New Delhi Grid Infrastructure For Collaborative High Performance Scientific
More informationAnalisi Tier2 e Tier3 Esperienze ai Tier-2 Giacinto Donvito INFN-BARI
Analisi Tier2 e Tier3 Esperienze ai Tier-2 Giacinto Donvito INFN-BARI outlook Alice Examples Atlas Examples CMS Examples Alice Examples ALICE Tier-2s at the moment do not support interactive analysis not
More informationHigh Performance Computing Course Notes Grid Computing I
High Performance Computing Course Notes 2008-2009 2009 Grid Computing I Resource Demands Even as computer power, data storage, and communication continue to improve exponentially, resource capacities are
More informationThe LHC Computing Grid
The LHC Computing Grid Visit of Finnish IT Centre for Science CSC Board Members Finland Tuesday 19 th May 2009 Frédéric Hemmer IT Department Head The LHC and Detectors Outline Computing Challenges Current
More informationInterconnect EGEE and CNGRID e-infrastructures
Interconnect EGEE and CNGRID e-infrastructures Giuseppe Andronico Interoperability and Interoperation between Europe, India and Asia Workshop Barcelona - Spain, June 2 2007 FP6 2004 Infrastructures 6-SSA-026634
More informationArgus Vulnerability Assessment *1
Argus Vulnerability Assessment *1 Manuel Brugnoli and Elisa Heymann Universitat Autònoma de Barcelona June, 2011 Introduction Argus is the glite Authorization Service. It is intended to provide consistent
More informationISTITUTO NAZIONALE DI FISICA NUCLEARE
ISTITUTO NAZIONALE DI FISICA NUCLEARE Sezione di Perugia INFN/TC-05/10 July 4, 2005 DESIGN, IMPLEMENTATION AND CONFIGURATION OF A GRID SITE WITH A PRIVATE NETWORK ARCHITECTURE Leonello Servoli 1,2!, Mirko
More informationR-GMA (Relational Grid Monitoring Architecture) for monitoring applications
R-GMA (Relational Grid Monitoring Architecture) for monitoring applications www.eu-egee.org egee EGEE-II INFSO-RI-031688 Acknowledgements Slides are taken/derived from the GILDA team Steve Fisher (RAL,
More informationThe Grid Monitor. Usage and installation manual. Oxana Smirnova
NORDUGRID NORDUGRID-MANUAL-5 2/5/2017 The Grid Monitor Usage and installation manual Oxana Smirnova Abstract The LDAP-based ARC Grid Monitor is a Web client tool for the ARC Information System, allowing
More informationIntroduction to Grid Technology
Introduction to Grid Technology B.Ramamurthy 1 Arthur C Clarke s Laws (two of many) Any sufficiently advanced technology is indistinguishable from magic." "The only way of discovering the limits of the
More informationQosCosGrid Middleware
Domain-oriented services and resources of Polish Infrastructure for Supporting Computational Science in the European Research Space PLGrid Plus QosCosGrid Middleware Domain-oriented services and resources
More informationg-eclipse A Contextualised Framework for Grid Users, Grid Resource Providers and Grid Application Developers
g-eclipse A Contextualised Framework for Grid Users, Grid Resource Providers and Grid Application Developers Harald Kornmayer 1, Mathias Stümpert 2, Harald Gjermundrød 3, and Pawe l Wolniewicz 4 1 NEC
More informationG-ECLIPSE: A MIDDLEWARE-INDEPENDENT FRAMEWORK TO ACCESS AND MAINTAIN GRID RESOURCES
G-ECLIPSE: A MIDDLEWARE-INDEPENDENT FRAMEWORK TO ACCESS AND MAINTAIN GRID RESOURCES Harald Gjermundrod, Nicholas Loulloudes, and Marios D. Dikaiakos University of Cyprus PO Box 20537, 75 Kallipoleos Str.
More informationA Simulation Model for Large Scale Distributed Systems
A Simulation Model for Large Scale Distributed Systems Ciprian M. Dobre and Valentin Cristea Politechnica University ofbucharest, Romania, e-mail. **Politechnica University ofbucharest, Romania, e-mail.
More informationLarge scale commissioning and operational experience with tier-2 to tier-2 data transfer links in CMS
Journal of Physics: Conference Series Large scale commissioning and operational experience with tier-2 to tier-2 data transfer links in CMS To cite this article: J Letts and N Magini 2011 J. Phys.: Conf.
More informationOutline. Infrastructure and operations architecture. Operations. Services Monitoring and management tools
EGI-InSPIRE EGI Operations Tiziana Ferrari/EGI.eu EGI Chief Operations Officer 1 Outline Infrastructure and operations architecture Services Monitoring and management tools Operations 2 Installed Capacity
More informationThe LHC Computing Grid
The LHC Computing Grid Gergely Debreczeni (CERN IT/Grid Deployment Group) The data factory of LHC 40 million collisions in each second After on-line triggers and selections, only 100 3-4 MB/event requires
More informationEasy Access to Grid Infrastructures
Easy Access to Grid Infrastructures Dr. Harald Kornmayer (NEC Laboratories Europe) On behalf of the g-eclipse consortium WP11 Grid Workshop Grenoble, France 09 th of December 2008 Background in astro particle
More informationMONitoring Agents using a Large Integrated Services Architecture. Iosif Legrand California Institute of Technology
MONitoring Agents using a Large Integrated s Architecture California Institute of Technology Distributed Dynamic s Architecture Hierarchical structure of loosely coupled services which are independent
More informationTesting an Open Source installation and server provisioning tool for the INFN CNAF Tier1 Storage system
Testing an Open Source installation and server provisioning tool for the INFN CNAF Tier1 Storage system M Pezzi 1, M Favaro 1, D Gregori 1, PP Ricci 1, V Sapunenko 1 1 INFN CNAF Viale Berti Pichat 6/2
More informationIntroduction to GT3. Introduction to GT3. What is a Grid? A Story of Evolution. The Globus Project
Introduction to GT3 The Globus Project Argonne National Laboratory USC Information Sciences Institute Copyright (C) 2003 University of Chicago and The University of Southern California. All Rights Reserved.
More informationImplementing GRID interoperability
AFS & Kerberos Best Practices Workshop University of Michigan, Ann Arbor June 12-16 2006 Implementing GRID interoperability G. Bracco, P. D'Angelo, L. Giammarino*, S.Migliori, A. Quintiliani, C. Scio**,
More informationMulti-threaded, discrete event simulation of distributed computing systems
Multi-threaded, discrete event simulation of distributed computing systems Iosif C. Legrand California Institute of Technology, Pasadena, CA, U.S.A Abstract The LHC experiments have envisaged computing
More informationThe glite middleware. Ariel Garcia KIT
The glite middleware Ariel Garcia KIT Overview Background The glite subsystems overview Security Information system Job management Data management Some (my) answers to your questions and random rumblings
More informationWP3 Final Activity Report
WP3 Final Activity Report Nicholas Loulloudes WP3 Representative On behalf of the g-eclipse Consortium Outline Work Package 3 Final Status Achievements Work Package 3 Goals and Benefits WP3.1 Grid Infrastructure
More informationISTITUTO NAZIONALE DI FISICA NUCLEARE
ISTITUTO NAZIONALE DI FISICA NUCLEARE CNAF Bologna INFN/TC-04/15 6 Settembre 2004 PILOT PRODUCTION GRID INFRASTRUCTURE FOR HIGH ENERGY PHYSICS APPLICATIONS Sergio Andreozzi 1, Daniele Bonaccorsi 2, Vincenzo
More informationResource Allocation in computational Grids
Grid Computing Competence Center Resource Allocation in computational Grids Riccardo Murri Grid Computing Competence Center, Organisch-Chemisches Institut, University of Zurich Nov. 23, 21 Scheduling on
More informationALHAD G. APTE, BARC 2nd GARUDA PARTNERS MEET ON 15th & 16th SEPT. 2006
GRID COMPUTING ACTIVITIES AT BARC ALHAD G. APTE, BARC 2nd GARUDA PARTNERS MEET ON 15th & 16th SEPT. 2006 Computing Grid at BARC Computing Grid system has been set up as a Test-Bed using existing Grid Technology
More informationInformation and monitoring
Information and monitoring Information is essential Application database Certificate Certificate Authorised users directory Certificate Certificate Grid tools Researcher Certificate Policies Information
More informationTable of Contents Chapter 1: Migrating NIMS to OMS... 3 Index... 17
Migrating from NIMS to OMS 17.3.2.0 User Guide 7 Dec 2017 Table of Contents Chapter 1: Migrating NIMS to OMS... 3 Before migrating to OMS... 3 Purpose of this migration guide...3 Name changes from NIMS
More informationIntegration of Cloud and Grid Middleware at DGRZR
D- of International Symposium on Computing 2010 Stefan Freitag Robotics Research Institute Dortmund University of Technology March 12, 2010 Overview D- 1 D- Resource Center Ruhr 2 Clouds in the German
More information( PROPOSAL ) THE AGATA GRID COMPUTING MODEL FOR DATA MANAGEMENT AND DATA PROCESSING. version 0.6. July 2010 Revised January 2011
( PROPOSAL ) THE AGATA GRID COMPUTING MODEL FOR DATA MANAGEMENT AND DATA PROCESSING version 0.6 July 2010 Revised January 2011 Mohammed Kaci 1 and Victor Méndez 1 For the AGATA collaboration 1 IFIC Grid
More informationFormalization of Objectives of Grid Systems Resources Protection against Unauthorized Access
Nonlinear Phenomena in Complex Systems, vol. 17, no. 3 (2014), pp. 272-277 Formalization of Objectives of Grid Systems Resources Protection against Unauthorized Access M. O. Kalinin and A. S. Konoplev
More informationMonitoring the ALICE Grid with MonALISA
Monitoring the ALICE Grid with MonALISA 2008-08-20 Costin Grigoras ALICE Workshop @ Sibiu Monitoring the ALICE Grid with MonALISA MonALISA Framework library Data collection and storage in ALICE Visualization
More informationGlobalWatch: A Distributed Service Grid Monitoring Platform with High Flexibility and Usability*
GlobalWatch: A Distributed Service Grid Monitoring Platform with High Flexibility and Usability* Sheng Di, Hai Jin, Shengli Li, Ling Chen, Chengwei Wang Cluster and Grid Computing Lab Huazhong University
More informationScientific data processing at global scale The LHC Computing Grid. fabio hernandez
Scientific data processing at global scale The LHC Computing Grid Chengdu (China), July 5th 2011 Who I am 2 Computing science background Working in the field of computing for high-energy physics since
More informationvrealize Operations Manager Customization and Administration Guide vrealize Operations Manager 6.4
vrealize Operations Manager Customization and Administration Guide vrealize Operations Manager 6.4 vrealize Operations Manager Customization and Administration Guide You can find the most up-to-date technical
More informationGaruda : The National Grid Computing Initiative Of India. Natraj A.C, CDAC Knowledge Park, Bangalore.
Garuda : The National Grid Computing Initiative Of India Natraj A.C, CDAC Knowledge Park, Bangalore. natraj@cdacb.ernet.in 1 Agenda About CDAC Garuda grid highlights Garuda Foundation Phase EU-India grid
More informationGRIDS INTRODUCTION TO GRID INFRASTRUCTURES. Fabrizio Gagliardi
GRIDS INTRODUCTION TO GRID INFRASTRUCTURES Fabrizio Gagliardi Dr. Fabrizio Gagliardi is the leader of the EU DataGrid project and designated director of the proposed EGEE (Enabling Grids for E-science
More informationWorkload Management. Stefano Lacaprara. CMS Physics Week, FNAL, 12/16 April Department of Physics INFN and University of Padova
Workload Management Stefano Lacaprara Department of Physics INFN and University of Padova CMS Physics Week, FNAL, 12/16 April 2005 Outline 1 Workload Management: the CMS way General Architecture Present
More informationBenchmarking the ATLAS software through the Kit Validation engine
Benchmarking the ATLAS software through the Kit Validation engine Alessandro De Salvo (1), Franco Brasolin (2) (1) Istituto Nazionale di Fisica Nucleare, Sezione di Roma, (2) Istituto Nazionale di Fisica
More informationThe ALICE Glance Shift Accounting Management System (SAMS)
Journal of Physics: Conference Series PAPER OPEN ACCESS The ALICE Glance Shift Accounting Management System (SAMS) To cite this article: H. Martins Silva et al 2015 J. Phys.: Conf. Ser. 664 052037 View
More informationCMS Grid Computing at TAMU Performance, Monitoring and Current Status of the Brazos Cluster
CMS Grid Computing at TAMU Performance, Monitoring and Current Status of the Brazos Cluster Vaikunth Thukral Department of Physics and Astronomy Texas A&M University 1 Outline Grid Computing with CMS:
More informationUnderstanding StoRM: from introduction to internals
Understanding StoRM: from introduction to internals 13 November 2007 Outline Storage Resource Manager The StoRM service StoRM components and internals Deployment configuration Authorization and ACLs Conclusions.
More informationAGIS: The ATLAS Grid Information System
AGIS: The ATLAS Grid Information System Alexey Anisenkov 1, Sergey Belov 2, Alessandro Di Girolamo 3, Stavro Gayazov 1, Alexei Klimentov 4, Danila Oleynik 2, Alexander Senchenko 1 on behalf of the ATLAS
More informationStatus of KISTI Tier2 Center for ALICE
APCTP 2009 LHC Physics Workshop at Korea Status of KISTI Tier2 Center for ALICE August 27, 2009 Soonwook Hwang KISTI e-science Division 1 Outline ALICE Computing Model KISTI ALICE Tier2 Center Future Plan
More informationCernVM-FS beyond LHC computing
CernVM-FS beyond LHC computing C Condurache, I Collier STFC Rutherford Appleton Laboratory, Harwell Oxford, Didcot, OX11 0QX, UK E-mail: catalin.condurache@stfc.ac.uk Abstract. In the last three years
More informationMonitoring System for the GRID Monte Carlo Mass Production in the H1 Experiment at DESY
Journal of Physics: Conference Series OPEN ACCESS Monitoring System for the GRID Monte Carlo Mass Production in the H1 Experiment at DESY To cite this article: Elena Bystritskaya et al 2014 J. Phys.: Conf.
More informationigrid: a Relational Information Service A novel resource & service discovery approach
igrid: a Relational Information Service A novel resource & service discovery approach Italo Epicoco, Ph.D. University of Lecce, Italy Italo.epicoco@unile.it Outline of the talk Requirements & features
More informationLHCb Distributed Conditions Database
LHCb Distributed Conditions Database Marco Clemencic E-mail: marco.clemencic@cern.ch Abstract. The LHCb Conditions Database project provides the necessary tools to handle non-event time-varying data. The
More informationOptimizing Parallel Access to the BaBar Database System Using CORBA Servers
SLAC-PUB-9176 September 2001 Optimizing Parallel Access to the BaBar Database System Using CORBA Servers Jacek Becla 1, Igor Gaponenko 2 1 Stanford Linear Accelerator Center Stanford University, Stanford,
More informationVMs at a Tier-1 site. EGEE 09, Sander Klous, Nikhef
VMs at a Tier-1 site EGEE 09, 21-09-2009 Sander Klous, Nikhef Contents Introduction Who are we? Motivation Why are we interested in VMs? What are we going to do with VMs? Status How do we approach this
More informationHPC Metrics in OSCAR based on Ganglia
HPC Metrics in OSCAR based on Ganglia Google Summer of Code 2006 Report Babu Sundaram, babu@cs.uh.edu Department of Computer Science, University of Houston Mentor: Erich Focht, efocht@hpce.nec.com Open
More informationA short introduction to the Worldwide LHC Computing Grid. Maarten Litmaath (CERN)
A short introduction to the Worldwide LHC Computing Grid Maarten Litmaath (CERN) 10-15 PetaByte/year The LHC challenge Data analysis requires at least ~100k typical PC processor cores Scientists in tens
More informationDIRAC pilot framework and the DIRAC Workload Management System
Journal of Physics: Conference Series DIRAC pilot framework and the DIRAC Workload Management System To cite this article: Adrian Casajus et al 2010 J. Phys.: Conf. Ser. 219 062049 View the article online
More informationYAIM Overview. Bruce Becker Meraka Institute. Co-ordination & Harmonisation of Advanced e-infrastructures for Research and Education Data Sharing
Co-ordination & Harmonisation of Advanced e-infrastructures for Research and Education Data Sharing Research Infrastructures Grant Agreement n. 306819 YAIM Overview Bruce Becker Meraka Institute Outline
More informationCMS users data management service integration and first experiences with its NoSQL data storage
Journal of Physics: Conference Series OPEN ACCESS CMS users data management service integration and first experiences with its NoSQL data storage To cite this article: H Riahi et al 2014 J. Phys.: Conf.
More informationGrid Computing. MCSN - N. Tonellotto - Distributed Enabling Platforms
Grid Computing 1 Resource sharing Elements of Grid Computing - Computers, data, storage, sensors, networks, - Sharing always conditional: issues of trust, policy, negotiation, payment, Coordinated problem
More informationGrid-Based Data Mining and the KNOWLEDGE GRID Framework
Grid-Based Data Mining and the KNOWLEDGE GRID Framework DOMENICO TALIA (joint work with M. Cannataro, A. Congiusta, P. Trunfio) DEIS University of Calabria ITALY talia@deis.unical.it Minneapolis, September
More informationParallel Computing in EGI
Parallel Computing in EGI V. Šipková, M. Dobrucký, and P. Slížik Ústav informatiky, Slovenská akadémia vied 845 07 Bratislava, Dúbravská cesta 9 http://www.ui.sav.sk/ {Viera.Sipkova, Miroslav.Dobrucky,
More informationThe Legnaro-Padova distributed Tier-2: challenges and results
The Legnaro-Padova distributed Tier-2: challenges and results Simone Badoer a, Massimo Biasotto a,fulviacosta b, Alberto Crescente b, Sergio Fantinel a, Roberto Ferrari b, Michele Gulmini a, Gaetano Maron
More information3rd UNICORE Summit, Rennes, Using SAML-based VOMS for Authorization within Web Services-based UNICORE Grids
3rd UNICORE Summit, Rennes, 28.08.2007 Using SAML-based VOMS for Authorization within Web Services-based UNICORE Grids Valerio Venturi, Morris Riedel, Shiraz Memon, Shahbaz Memon, Frederico Stagni, Bernd
More informationBob Jones. EGEE and glite are registered trademarks. egee EGEE-III INFSO-RI
Bob Jones EGEE project director www.eu-egee.org egee EGEE-III INFSO-RI-222667 EGEE and glite are registered trademarks Quality: Enabling Grids for E-sciencE Monitoring via Nagios - distributed via official
More informationChapter 3. Design of Grid Scheduler. 3.1 Introduction
Chapter 3 Design of Grid Scheduler The scheduler component of the grid is responsible to prepare the job ques for grid resources. The research in design of grid schedulers has given various topologies
More informationThe Grid: Processing the Data from the World s Largest Scientific Machine
The Grid: Processing the Data from the World s Largest Scientific Machine 10th Topical Seminar On Innovative Particle and Radiation Detectors Siena, 1-5 October 2006 Patricia Méndez Lorenzo (IT-PSS/ED),
More information