Deployment of e-science Infrastructure and Applications for PNC


1 Deployment of e-science Infrastructure and Applications for PNC Eric Yen Academia Sinica, Taiwan 16 Aug

2 Outline
- The common keywords of PNC and e-science: sharing and collaboration
- How ready are the e-science infrastructure and applications
- What are the core services
- Lessons learned from WLCG/EGEE in Taiwan
- Metrics and operation
- Summary

3 Motivation
- e-science and PNC share the same vision of sharing and collaboration: how should PNC take advantage of the Grid?
- How mature is the e-science infrastructure, and how do we build up or migrate applications onto it?
- Key issues:
  - Infrastructure: gLite + OSG (why not Globus?)
  - Identify core services: data management, resource discovery, security, VO (role-based rights management and collaboration), operations and maintenance
  - Foster user communities, such as ECAI, PRDLA, etc.
  - Identify the common requirements of each application domain
  - Application development services
- How do we measure success?

4 Deployment of Production Middleware
- Infrastructure: gLite + OSG interoperation
- Data management: embedded data-management subsystem in gLite/OSG; SRM interface to integrate with other mass-storage systems, such as SRB, CASTOR and dCache
- Resource discovery: customized to the domain-specific workflow
- Application-specific services: long-term preservation; virtual screening; geospatial data management and hazards mitigation; digital collection federation; and other services
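The point of the SRM interface above is that clients speak one storage-manager protocol while the mass-storage system behind it varies. A minimal sketch of that decoupling, with hypothetical `Castor`/`DCache` classes standing in for the real backends (the method names are illustrative, not the real SRM API):

```python
from abc import ABC, abstractmethod

class StorageBackend(ABC):
    """Hypothetical stand-in for a mass-storage system (SRB, CASTOR, dCache)."""
    @abstractmethod
    def stage(self, path: str) -> str: ...

class Castor(StorageBackend):
    def stage(self, path):
        return f"castor://{path}"

class DCache(StorageBackend):
    def stage(self, path):
        return f"dcap://{path}"

class SRMEndpoint:
    """Clients use one SRM-style interface; the backend is interchangeable."""
    def __init__(self, backend: StorageBackend):
        self.backend = backend

    def prepare_to_get(self, path: str) -> str:
        # In real SRM this negotiates a transfer URL (TURL) for the file.
        return self.backend.stage(path)

print(SRMEndpoint(Castor()).prepare_to_get("/grid/ndap/file1"))
print(SRMEndpoint(DCache()).prepare_to_get("/grid/ndap/file1"))
```

Swapping the backend never changes client code, which is why SRM lets gLite and OSG share storage they did not build themselves.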

5 Enabling Grids for E-sciencE: EGEE Infrastructures
- Production service: scaling up the infrastructure with resource centres around the globe; a stable, well-supported infrastructure running only well-tested and reliable middleware
- Pre-production service: runs in parallel with the production service (restricted number of sites); first deployment of new versions of the gLite middleware; test-bed for applications and other external functionality
- T-Infrastructure (training & education): complete suite of Grid elements and applications (testbed, CA, VO, monitoring, support, ...); everyone can register and use GILDA for training and testing; 20 sites on 3 continents
EGEE-II INFSO-RI EGEE - A Large-scale Production Grid Infrastructure

6 EGEE Operations
- Geographically distributed responsibility for operations: there is no central operation
- Regional Operation Centres (ROCs): responsible for resource centres in their region; tools are developed/hosted at different sites: GOC DB (RAL), SFT (CERN), GStat (Taipei), CIC Portal (Lyon)
- Grid operator on duty: 6 teams working in weekly rotation (CERN, IN2P3, INFN, UK/I, Russia, Taipei); crucial in improving site stability and management; expanding to all ROCs in EGEE-II
- Operations coordination: weekly operations meetings; regular ROC managers meetings; series of EGEE Operations Workshops (Nov 04, May 05, Sep 05, June 06)
- EGEE operations process: procedures described in the Operations Manual (introducing new sites, site downtime scheduling, suspending a site, escalation procedures, etc.)

11 Production Grid Middleware: Key Factors in EGEE Grid Middleware Development
1. Strict software process: use industry-standard software engineering methods (software configuration management, version control, defect tracking, automatic build system, ...)
2. Conservative approach to what software to use:
   - Avoid cutting-edge software: deployment on over 100 sites cannot assume a homogeneous environment, so the middleware needs to work with many underlying software flavors
   - Avoid evolving standards: they change quickly (and sometimes significantly, cf. OGSI vs. WSRF), making it impossible to keep pace on more than 100 sites
The result is a long (and tedious) path from prototypes to production.

12 gLite Grid Middleware Services: overview paper

13 gLite Software Process
A staged flow from development to production: JRA1 development produces the software; SA3 integrates it (integration tests), then tests and certifies it (functional tests); deployment packages go to the SA1 pre-production service (scalability tests) and finally to the SA1 production infrastructure. A failure at any stage routes the candidate back to error fixing, and serious problems are escalated by directive. Each release ships with an installation guide, release notes, etc.
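The staged flow above can be sketched as a small promotion loop; the stage names follow the slide, while the function and its arguments are illustrative:

```python
# Sketch of the gLite staged release flow: each stage runs its tests,
# and a failure routes the candidate back to error fixing.
STAGES = [
    ("integration", "integration tests"),
    ("certification", "functional tests"),
    ("pre-production", "scalability tests"),
]

def promote(release, run_tests):
    """Advance a release candidate stage by stage; stop at the first failure."""
    for stage, tests in STAGES:
        if not run_tests(release, tests):
            return f"back to error fixing (failed {tests} in {stage})"
    return "deployed to production"

# A candidate that passes everything reaches production.
print(promote("glite-3.0", lambda rel, tests: True))
# One that fails scalability testing is sent back before production.
print(promote("glite-3.1rc", lambda rel, tests: tests != "scalability tests"))
```

The key property is that production only ever sees releases that survived every earlier gate, which is how the "well-tested and reliable middleware" guarantee of the production service is kept.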

14 OSG Middleware Layering
- Applications: LIGO Data Grid; CMS services & framework; ATLAS services & framework; CDF, D0 SamGrid & framework
- Infrastructure: OSG release cache: VDT + configuration, validation, VO management
- Virtual Data Toolkit (VDT) common services: NMI + VOMS, CEMon (common EGEE components), MonALISA, Clarens, AuthZ
- NSF Middleware Initiative (NMI): Condor, Globus, MyProxy

15 OSG Middleware Pipeline
Domain science requirements (Condor, Globus, EGEE etc.) feed OSG stakeholder and middleware developer (joint) projects; test on a VO-specific grid; integrate into the VDT release; deploy on the OSG integration grid; test interoperability with EGEE and TeraGrid; provision in the OSG release and deploy to OSG production.

16 CERN + Tier-1 Availability
[Figure: per-site availability charts for CERN and the Tier-1 sites; data from SAM monitoring, printed 10/08/2006.]
Notes: CERN power-off 29 July 08:00-24:00; SAM failed through 31 July. Other sites that were active before and after the power-off are assumed fully available throughout the power-off and SAM failure; availability on 29 July before 08:00 is adjusted pro rata. Site availability as agreed at the WLCG MB on 07mar06 and 04apr06: a service is considered unavailable from the time of first failure to the time of next success, and scheduled interruptions count as unavailable. At some sites SAM tests fail due to a dCache function failure that does not affect CMS jobs; the problem is understood and being worked on.

18 WLCG Availability of CERN & Tier-1s
Target: at least 8 sites reaching 88% availability by the end of September; after that, full WLCG services should be in operation.

Site               May'06   June'06   July'06   Average
CERN-PROD            89%      92%       90%       90%
FZK-LCG2             85%      15%       54%       53%
IN2P3-CC             83%      89%       87%       86%
INFN-T1              89%      62%       31%       61%
RAL-LCG2             68%      76%       73%       72%
SARA-MATRIX          58%      49%       84%       64%
TRIUMF-LCG2          77%      88%       80%       81%
Taiwan-LCG2          87%      75%       98%       87%
USCMS-FNAL-WC1       68%      64%       20%       50%
PIC                  61%      88%       87%       78%
All Tier-1s (avg)    77%      70%       70%       72%

19 EGEE OSG Inter-operability
- Agree on a common Virtual Organization Management System (VOMS)
- Active joint security groups, leading to common policies and procedures
- Condor-G interfaces to multiple remote job execution services (GRAM, Condor-C)
- File transfers using GridFTP
- SRM v1.1 for managed storage access; SRM v2.1 in test
- Publish the OSG BDII to a shared BDII so Resource Brokers can route jobs across the two grids
- Automate ticket routing between GOCs
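The shared-BDII idea above comes down to both grids publishing resources into one index so a broker can match a job against either. A toy sketch, with invented site records (the schema here is illustrative, not the real GLUE schema):

```python
# Hypothetical shared index: OSG and EGEE sites published side by side,
# so a resource broker can match jobs across both grids.
shared_index = [
    {"site": "Taiwan-LCG2", "grid": "EGEE", "free_cpus": 120, "vos": ["cms", "twgrid"]},
    {"site": "UNL",         "grid": "OSG",  "free_cpus": 300, "vos": ["cms"]},
    {"site": "Purdue",      "grid": "OSG",  "free_cpus": 40,  "vos": ["cms"]},
]

def match(vo, min_cpus):
    """Return candidate sites supporting a VO, best-first, from either grid."""
    ok = [s for s in shared_index
          if vo in s["vos"] and s["free_cpus"] >= min_cpus]
    return sorted(ok, key=lambda s: -s["free_cpus"])

print([s["site"] for s in match("cms", 100)])  # ['UNL', 'Taiwan-LCG2']
```

The broker never needs to know which middleware stack runs a site; the common index plus common interfaces (VOMS, GridFTP, SRM) make the grid boundary invisible to the job.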

20 OSG/LCG Resource Integration
- Mature technologies help integrate resources
- GCB introduced to help integrate with OSG computing resources: CDF/OSG users can submit jobs by gliding in to the GCB box
- Access ASGC T1 computing resources from the twgrid VO
- Customized UI to help access back-end storage resources: helps local users not yet ready for the grid; HEP users access T1 resources

21 Data Management

22 Requirements: User's Viewpoint
- Find data: registries & human communication
- Understand data: metadata description; standard/familiar formats & representations; standard value systems & ontologies
- Data access: find how to interact with the data resource; obtain permission (authority); make connection; make selection
- Move data: in bulk or streamed (in increments)
Ischia, Italy July 2006

23 Requirements: User's Viewpoint (2)
- Transform data: to the format, organisation & representation required for computation or integration
- Combine data: standard DB operations + operations relevant to the application model
- Present results: to humans (data movement + transform for viewing); to application code (data movement + transform to the required format); to standard analysis tools, e.g. R; to standard visualisation tools, e.g. Spotfire

24 Metadata Services for the Grid
Efficient management of millions of files spread over several sites: users and applications need an efficient mechanism to find the files of interest, and to discover and query information about their contents. This can be provided by associating descriptive attributes (metadata) with files and exposing this information in catalogues that are accessible and searchable by users and client applications.
Quality of service:
- Hide network latency
- Disconnected computing: local replicas for offline access
- DBMS-independent: the Grid environment is heterogeneous
- Reliability: no single point of failure
- Scalability: support hundreds/thousands of concurrent users

25 Grid Metadata Services
- A Grid Metadata Service (GMS) is not just for data management, but also for job management (e.g., WLCG) and resource management in general
- The GMS is not the same as the (file) catalog service: the file catalog keeps track of which storage element holds a particular object (but provides no way to query its contents), while the GMS provides a repository of detailed information about each object and supports queries on it
- Advanced features (research topics): splitting big files among several SEs (different chunks stored in different SEs), for data-security enforcement and increased upload/download bandwidth; automatic extraction and population of metadata
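The split between the file catalog and the metadata service above can be shown in a few lines. This is a hypothetical in-memory model (the LFNs, attributes and site names are invented), not the API of any real catalogue such as AMGA or LFC:

```python
# File catalogue: logical file name -> storage elements holding replicas.
# It answers "where is it?", never "what is in it?".
file_catalogue = {"lfn:/ndap/scroll-017": ["SE-taipei", "SE-sdsc"]}

# Metadata service: logical file name -> descriptive attributes.
# It answers content queries the file catalogue cannot.
metadata = {"lfn:/ndap/scroll-017": {"dynasty": "Qing", "format": "TIFF"}}

def query(**attrs):
    """Find logical files whose metadata matches all given attributes."""
    return [lfn for lfn, md in metadata.items()
            if all(md.get(k) == v for k, v in attrs.items())]

def locate(lfn):
    """Resolve a logical file name to the storage elements of its replicas."""
    return file_catalogue[lfn]

hits = query(dynasty="Qing")
print(hits, "->", locate(hits[0]))
```

A user query thus runs in two steps: a metadata search yields logical names, and only then does the file catalogue resolve where the replicas physically live.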

26 Integrated service for Data & Metadata

27 Introduction to VOMS
VOMS features:
- Single login (proxy-init) only at the beginning of a session; VOMS extensions are attached to the user proxy
- Expiration time: the authorization information is only valid for a limited period, as is the proxy certificate itself
- Multiple VOs: a user may log in to multiple VOs and create an aggregate proxy certificate, which enables access to resources in any of them
- Support for groups and roles: group membership is inserted automatically when requesting a VOMS proxy; a role has to be requested explicitly
- Backward compatibility: the extra VO-related information is in the user's proxy certificate, which can still be used with non-VOMS-aware services
- Security: all client-server communications are secured and authenticated
INFSO-RI Catania, NA4 Generic Applications Meeting, January 10th

28 VOMS Web Interface
- A VO user can: query membership details; register in the VO (a valid certificate is needed); track his or her requests
- A VO manager can: handle requests from users; administer the VO

29 Groups
The number of users in a VO can be very high: e.g. the ATLAS experiment has 2,000 members. Make the VO manageable by organizing users into groups, which can have a hierarchical structure, indefinitely deep. For example, in the GILDA VO groups may correspond to sites or institutes (e.g. Catania/INFN, Padua/University), and rights can be attached per group: /GILDA/TUTORS can write to normal storage, while /GILDA/STUDENT can only write to volatile space.

30 Roles
Roles are specific capabilities a user has that distinguish him or her from others in the same group, e.g. Software Manager or VO-Administrator. Differences between roles and groups:
- Roles have no hierarchical structure: there is no sub-role
- Roles are not used in normal operation: they are not added to the proxy by default when running voms-proxy-init, but they can be added to the proxy for special purposes
Example: user Emidio has the membership VO=gilda, Group=tutors, Role=SoftwareManager. During normal operation the role is not taken into account and Emidio works as a normal user; for special tasks he can obtain the Software Manager role.
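The group/role semantics above (groups always in the proxy, roles only on request, everything expiring with the proxy) can be sketched with a hypothetical data model; the attribute strings below imitate VOMS FQANs but the functions are invented for illustration:

```python
import time

def make_proxy(user, groups, roles=(), lifetime_s=12 * 3600):
    """Build a toy proxy: group attributes are always present,
    role attributes only when explicitly requested."""
    return {
        "user": user,
        "fqans": [f"/gilda/{g}" for g in groups]          # automatic
               + [f"/gilda/Role={r}" for r in roles],     # explicit request only
        "expires": time.time() + lifetime_s,
    }

def authorized(proxy, fqan):
    """An attribute grants access only while the proxy is still valid."""
    return fqan in proxy["fqans"] and time.time() < proxy["expires"]

normal = make_proxy("emidio", ["tutors"])
print(authorized(normal, "/gilda/Role=SoftwareManager"))   # False: role not requested
special = make_proxy("emidio", ["tutors"], roles=["SoftwareManager"])
print(authorized(special, "/gilda/Role=SoftwareManager"))  # True
```

This mirrors the Emidio example: the same user, with and without the explicitly requested role, gets different authorization decisions from the same check.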

31 Grid Security Fundamentals
Key terms typically associated with security: authentication; authorisation; audit/accounting; integrity; fabric management; confidentiality; privacy; trust. All are important for Grids, but some applications may place more emphasis on certain concepts than others.
ISSGC06, Ischia July 2006

32 Grid Security Infrastructure (GSI)
The standard mechanism for interfacing Grids:
- Supports X.509 proxy certificates for authentication: created with the grid-proxy-init command; the proxy certificate is stored in the /tmp directory
- Establishes connections between services by mutual authentication: preliminary messages are exchanged and encrypted/decrypted, and signatures and CAs are verified; if this checks out, both parties know who they are talking to
- Uses an access control list called a grid-mapfile for authorisation (stored in the /etc/grid-security/ directory), but can also use the GGF SAML callout to other AuthZ functions
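A grid-mapfile is just a list of `"DN" local_account` lines; a minimal parser makes the authorisation step above concrete. The entries below are invented samples, not real credentials:

```python
import re

# Illustrative grid-mapfile content: certificate DN -> local Unix account.
GRID_MAPFILE = '''
"/C=TW/O=AS/OU=GRID/CN=Eric Yen" eyen
"/C=IT/O=INFN/CN=Emidio Giorgio" gilda001
'''

def parse_mapfile(text):
    """Parse grid-mapfile-style lines into a DN -> account mapping."""
    mapping = {}
    for line in text.splitlines():
        m = re.match(r'^\s*"([^"]+)"\s+(\S+)', line)
        if m:
            mapping[m.group(1)] = m.group(2)
    return mapping

def local_account(dn, mapping):
    """Authorise: a DN absent from the map gets no local account (None)."""
    return mapping.get(dn)

accounts = parse_mapfile(GRID_MAPFILE)
print(local_account("/C=TW/O=AS/OU=GRID/CN=Eric Yen", accounts))  # eyen
print(local_account("/C=XX/CN=Stranger", accounts))               # None
```

This is why GSI separates the two A's: mutual authentication proves the DN, and only then does the site's own grid-mapfile decide what, if anything, that DN may do locally.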

33 Federated Trust
- Local authentication infrastructures are vital, e.g. campus student directories. They support existing infrastructures (e.g. registration, human resources); users will normally have enrolled IN PERSON at the institution, with standard identity documents (birth certificate, exam results), and will be reasonably well known by local staff. The same applies to the Regional Operators for a CA: decentralisation of credential verification is required due to travel/time restrictions, and a national CA would be impossible without it.
- Remote authentication information will always be out of date, and nobody wants to have to learn lots of usernames/passwords. The best entity to authenticate a person is their home institution/company: its information is up to date, and it will always know a person better than a remote site does; the remote site may not even know whether the user is still valid.

34 Summary
Security is a combination of technical implementation and sociological behaviour. There can be no single overall security policy for the Grid: existing site policies must be integrated. The establishment of identity on the Grid (authentication) is achieved through the use of PKI certificates and proxies.

35 Application Development

36 CMS Experiment: an Exemplar Community Grid
Spanning OSG and EGEE, with sites at CERN, Taiwan, Italy, the UK, Germany, France, and in the USA (Purdue, UNL, Wisconsin, Caltech, UCSD, Florida, MIT). Data & jobs move locally, regionally & globally within the CMS grid, transparently across grid boundaries from campus to global.

37 Computing Models
Cover:
- Data model: output stages, formats, sizes, rates, distribution
- Analysis model: workflow, streams, (re)processing, data movement, interactivity
- Distributed deployment strategy: computing tier roles; data management & processing across the tiers
- Specifications for capacity (processors, storage, network, etc.) and capability (middleware and other services)

38 Common Needs for a New Community/Application
- Understand what the grid is about: try it out and get a feeling for how it works (temporary grid user access); understand the possible added value for their applications; discover/identify new kinds of applications of value for their community
- Deploy middleware on their machines; join an existing VO/infrastructure
- Gain experience with grid programming: look at and learn from example code of applications that use the middleware; understand what the middleware does and does not do (know about current bugs and their workarounds)
- Be informed about and get in contact with other applications; ask questions of other grid developers
- Set up their own infrastructure (identify the infrastructure requirements); organize their own VO
- Understand project positions with respect to standards; have a plan for what will be available, and when, for adoption
- Provide relevant requirements/feedback in a coordinated way; be constantly informed about relevant events; be involved in the discussions about hot topics (concertation events)
Source: EGEE Application Migration Report, EGEE-DNA v doc

39 e-science Applications in Taiwan
- High-energy physics: WLCG
- Bioinformatics: mpiBLAST-g2
- Biomedicine: distributing AutoDock tasks on the Grid using DIANE
- Digital archives: Data Grid for digital archives and long-term preservation
- Atmospheric science
- Geoscience: GeoGrid for data management and hazards mitigation
- Ecology research and monitoring: EcoGrid BioPortal
- Biodiversity: TaiBIF/GBIF
- e-science application framework development

40 WLCG Services in Taiwan
[Figure: PhEDEx transfer topology — All → ASGC and ASGC → All.]

41 EGEE Biomed DC II: Large-Scale Virtual Screening for Drug Design on the Grid
- Biomedical goal: accelerating the discovery of novel potent inhibitors by minimizing nonproductive trial-and-error approaches and improving the efficiency of high-throughput screening
- Grid goal: massive throughput (reproducing the grid-enabled in silico process exercised in DC I with a shorter preparation time) and interactive feedback (evaluating an alternative lightweight grid application framework, DIANE)
- Grid resources: AuverGrid, BioinfoGrid, EGEE-II, Embrace, & TWGrid
- Problem size: around 300 K compounds from the ZINC database and a chemical combinatorial library, needing ~137 CPU-years in 4 weeks, on a world-wide infrastructure providing more than 5,000 CPUs
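A quick back-of-envelope check of the numbers above shows why a multi-thousand-CPU pool was needed: 137 CPU-years delivered in a 4-week window implies roughly 1,800 CPUs running continuously.

```python
# ~137 CPU-years of docking compressed into a 4-week data challenge:
# how many CPUs must run flat out for the whole window?
cpu_years = 137
window_days = 28
concurrent_cpus = cpu_years * 365.25 / window_days
print(round(concurrent_cpus))  # 1787
```

The pool of more than 5,000 CPUs leaves the necessary headroom for job failures, scheduling gaps, and resources shared with other VOs.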

42 Distributed Data Management & Long-term Preservation of NDAP
Long-term preservation:
- Automatic remote replication, with 3 copies at different sites
- Effective migration based on metadata: not just the digitized contents are archived, but also their metadata, methods/procedures, standard formats, and management information
- Separation of data representation from presentation
- Secure access; reduced total cost of management
- The data-management framework can be shared by content-based applications, e.g. federation
- Sustainable operation and services
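The 3-copy policy above only protects against site loss if the three replicas land at three *different* sites. A hedged sketch of such a placement rule, with invented site names and a deliberately simple deterministic spread (real systems like SRB use their own placement logic):

```python
import zlib

def place_replicas(obj_id, sites, copies=3):
    """Pick `copies` distinct sites for an object; a single site failure
    then never destroys all replicas."""
    if len(sites) < copies:
        raise ValueError("not enough distinct sites for the replication policy")
    # Illustrative deterministic spread: rotate the site list per object,
    # so different objects start at different sites.
    start = zlib.crc32(obj_id.encode()) % len(sites)
    return [sites[(start + i) % len(sites)] for i in range(copies)]

sites = ["ASGC", "NDAP-center", "IDC", "campus-A"]
placement = place_replicas("scroll-017", sites)
print(placement)
assert len(set(placement)) == 3  # three copies, three different sites
```

Anything fancier (site capacity, network distance, archive tier) can be layered on, but the distinct-sites invariant is what makes the preservation guarantee hold.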

43 Architecture of the NDAP LTP Infrastructure
- Storage resource network: 8 storage resource centres + IDC, connected over TWAREN/GSN
- Middleware: replication, fail-over, uniform namespace, metadata catalog, federation
- Security infrastructure
- Operation and management
- Interface to applications: discovery, management

44 SRB-based Data Grid System Architecture for NDAP

45 Taiwan GeoGrid Applications
Grid for geoscience, earth science and environmental research and applications:
- Land use and natural resources planning/management
- Hazards mitigation: typhoon, earthquake, flood, coastline changes, landslide/debris flow
- On-the-fly overlay of base maps and thematic maps from distributed data sources (of varying resolutions, types, and times), based on Grid data management
- WebGIS/Google Earth-based UI
- Integration of applications with the Grid

46 ASGC e-science Application Focus
- Grid portal: a common data-sharing environment, a one-stop shop to search for and access data from different administrative domains on heterogeneous systems in a UNIFORM way
- Content analysis & management: metadata model, content-management framework, data federation
- Security framework: PKI-based authentication, authorization, accounting and encryption
- Storage Resource Broker (in collaboration with SDSC): a purely distributed data-management system; integration with the Grid infrastructure; development of SRB-SRM
- Long-term preservation (LTP) & data curation: persistent archive, mass-storage technology, sustainable operation/business model

47 Common Framework for Application Development
Components: data management; web-based portal user interface; job repository; user/grid proxy manager; DataBank/storage element; virtual queuing system; computing grid agent; grid computing element.

48 How to Measure Success
- Increasing the number of infrastructure users by increasing awareness: dissemination and outreach; training and education
- Increasing the number of applications by improving application support and middleware functionality: improved usability through high-level grid middleware extensions
- Increasing the grid infrastructure: incubating related projects; ensuring interoperability between projects; protecting user investments
- Towards a sustainable grid infrastructure

49 A Service-Oriented Grid
- Grid middleware services: job-submit service, brokering service, registry service (advertise, notify)
- Virtualized resources: CPU/compute resource service, data service, application service, printer service
Hiro Kishimoto: Keynote, GGF17

50 Unreliability Counter-Measures
This requires much R&D: a continuous arms race as the scale of Grids grows. The ideal of a continuously available, stable service is not achievable; recognise that drops in response and local failures must be dealt with:
- Design resilient architectures and resilient algorithms
- Improve the reliability of each component
- Distribute the responsibility, both for failure detection and for recovery action
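"Design resilient algorithms" in the list above usually means: retry, then fail over to a replica, instead of assuming any single service stays up. A minimal sketch with hypothetical flaky services (the names and probabilities are invented):

```python
import random

def resilient_call(replicas, attempts_per_replica=3):
    """Try each replica a few times; only give up when every replica has failed.
    Failure detection and recovery live in the caller, not in a central service."""
    errors = []
    for replica in replicas:
        for _ in range(attempts_per_replica):
            try:
                return replica()
            except ConnectionError as exc:
                errors.append(exc)  # local failure: record it and keep going
    raise RuntimeError(f"all replicas failed ({len(errors)} attempts)")

def flaky(name, fail_prob):
    """Build a toy service that fails with the given probability."""
    def call():
        if random.random() < fail_prob:
            raise ConnectionError(name)
        return f"served by {name}"
    return call

random.seed(1)
# A mostly-broken primary plus a healthy secondary still yields an answer.
print(resilient_call([flaky("SE-primary", 0.9), flaky("SE-backup", 0.1)]))
```

The point is exactly the slide's: individual failures are expected and absorbed locally, and only the exhaustion of every replica surfaces as an error.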

51 Summary
- The e-science infrastructure is available and will become more robust and stable by 2008
- We have to take advantage of the core services provided by the e-infrastructure and develop the best application model in the Grid way, based on domain consensus
- Communication between applications and the e-infrastructure is indispensable: e-science services need to be driven by applications rather than by technology
- We are building a production-quality Grid infrastructure for WLCG and e-science applications in Taiwan, and are happy to work with all PNC members to: have the e-infrastructure in place collaboratively; identify the core services required (collection of use cases); foster grid applications (DAGrid, Geospatial DataGrid, etc.), moving toward semantic content discovery; and run training and workshops


More information

Grid Challenges and Experience

Grid Challenges and Experience Grid Challenges and Experience Heinz Stockinger Outreach & Education Manager EU DataGrid project CERN (European Organization for Nuclear Research) Grid Technology Workshop, Islamabad, Pakistan, 20 October

More information

Data Transfers Between LHC Grid Sites Dorian Kcira

Data Transfers Between LHC Grid Sites Dorian Kcira Data Transfers Between LHC Grid Sites Dorian Kcira dkcira@caltech.edu Caltech High Energy Physics Group hep.caltech.edu/cms CERN Site: LHC and the Experiments Large Hadron Collider 27 km circumference

More information

Bookkeeping and submission tools prototype. L. Tomassetti on behalf of distributed computing group

Bookkeeping and submission tools prototype. L. Tomassetti on behalf of distributed computing group Bookkeeping and submission tools prototype L. Tomassetti on behalf of distributed computing group Outline General Overview Bookkeeping database Submission tools (for simulation productions) Framework Design

More information

Distributed Data Management with Storage Resource Broker in the UK

Distributed Data Management with Storage Resource Broker in the UK Distributed Data Management with Storage Resource Broker in the UK Michael Doherty, Lisa Blanshard, Ananta Manandhar, Rik Tyer, Kerstin Kleese @ CCLRC, UK Abstract The Storage Resource Broker (SRB) is

More information

Mitigating Risk of Data Loss in Preservation Environments

Mitigating Risk of Data Loss in Preservation Environments Storage Resource Broker Mitigating Risk of Data Loss in Preservation Environments Reagan W. Moore San Diego Supercomputer Center Joseph JaJa University of Maryland Robert Chadduck National Archives and

More information

Grid Computing Fall 2005 Lecture 5: Grid Architecture and Globus. Gabrielle Allen

Grid Computing Fall 2005 Lecture 5: Grid Architecture and Globus. Gabrielle Allen Grid Computing 7700 Fall 2005 Lecture 5: Grid Architecture and Globus Gabrielle Allen allen@bit.csc.lsu.edu http://www.cct.lsu.edu/~gallen Concrete Example I have a source file Main.F on machine A, an

More information

ARC integration for CMS

ARC integration for CMS ARC integration for CMS ARC integration for CMS Erik Edelmann 2, Laurence Field 3, Jaime Frey 4, Michael Grønager 2, Kalle Happonen 1, Daniel Johansson 2, Josva Kleist 2, Jukka Klem 1, Jesper Koivumäki

More information

Chapter 4:- Introduction to Grid and its Evolution. Prepared By:- NITIN PANDYA Assistant Professor SVBIT.

Chapter 4:- Introduction to Grid and its Evolution. Prepared By:- NITIN PANDYA Assistant Professor SVBIT. Chapter 4:- Introduction to Grid and its Evolution Prepared By:- Assistant Professor SVBIT. Overview Background: What is the Grid? Related technologies Grid applications Communities Grid Tools Case Studies

More information

HEP Grid Activities in China

HEP Grid Activities in China HEP Grid Activities in China Sun Gongxing Institute of High Energy Physics, Chinese Academy of Sciences CANS Nov. 1-2, 2005, Shen Zhen, China History of IHEP Computing Center Found in 1974 Computing Platform

More information

ALHAD G. APTE, BARC 2nd GARUDA PARTNERS MEET ON 15th & 16th SEPT. 2006

ALHAD G. APTE, BARC 2nd GARUDA PARTNERS MEET ON 15th & 16th SEPT. 2006 GRID COMPUTING ACTIVITIES AT BARC ALHAD G. APTE, BARC 2nd GARUDA PARTNERS MEET ON 15th & 16th SEPT. 2006 Computing Grid at BARC Computing Grid system has been set up as a Test-Bed using existing Grid Technology

More information

Lessons Learned in the NorduGrid Federation

Lessons Learned in the NorduGrid Federation Lessons Learned in the NorduGrid Federation David Cameron University of Oslo With input from Gerd Behrmann, Oxana Smirnova and Mattias Wadenstein Creating Federated Data Stores For The LHC 14.9.12, Lyon,

More information

The glite middleware. Presented by John White EGEE-II JRA1 Dep. Manager On behalf of JRA1 Enabling Grids for E-sciencE

The glite middleware. Presented by John White EGEE-II JRA1 Dep. Manager On behalf of JRA1 Enabling Grids for E-sciencE The glite middleware Presented by John White EGEE-II JRA1 Dep. Manager On behalf of JRA1 John.White@cern.ch www.eu-egee.org EGEE and glite are registered trademarks Outline glite distributions Software

More information

High Throughput WAN Data Transfer with Hadoop-based Storage

High Throughput WAN Data Transfer with Hadoop-based Storage High Throughput WAN Data Transfer with Hadoop-based Storage A Amin 2, B Bockelman 4, J Letts 1, T Levshina 3, T Martin 1, H Pi 1, I Sfiligoi 1, M Thomas 2, F Wuerthwein 1 1 University of California, San

More information

Monitoring tools in EGEE

Monitoring tools in EGEE Monitoring tools in EGEE Piotr Nyczyk CERN IT/GD Joint OSG and EGEE Operations Workshop - 3 Abingdon, 27-29 September 2005 www.eu-egee.org Kaleidoscope of monitoring tools Monitoring for operations Covered

More information

Introduction to Grid Computing

Introduction to Grid Computing Milestone 2 Include the names of the papers You only have a page be selective about what you include Be specific; summarize the authors contributions, not just what the paper is about. You might be able

More information

Deploying the TeraGrid PKI

Deploying the TeraGrid PKI Deploying the TeraGrid PKI Grid Forum Korea Winter Workshop December 1, 2003 Jim Basney Senior Research Scientist National Center for Supercomputing Applications University of Illinois jbasney@ncsa.uiuc.edu

More information

EUROPEAN MIDDLEWARE INITIATIVE

EUROPEAN MIDDLEWARE INITIATIVE EUROPEAN MIDDLEWARE INITIATIVE VOMS CORE AND WMS SECURITY ASSESSMENT EMI DOCUMENT Document identifier: EMI-DOC-SA2- VOMS_WMS_Security_Assessment_v1.0.doc Activity: Lead Partner: Document status: Document

More information

First Experience with LCG. Board of Sponsors 3 rd April 2009

First Experience with LCG. Board of Sponsors 3 rd April 2009 First Experience with LCG Operation and the future... CERN openlab Board of Sponsors 3 rd April 2009 Ian Bird LCG Project Leader The LHC Computing Challenge Signal/Noise: 10-9 Data volume High rate * large

More information

The EGEE-III Project Towards Sustainable e-infrastructures

The EGEE-III Project Towards Sustainable e-infrastructures The EGEE-III Project Towards Sustainable e-infrastructures Erwin Laure EGEE-III Technical Director Erwin.Laure@cern.ch www.eu-egee.org EGEE-II INFSO-RI-031688 EGEE and glite are registered trademarks EGEE

More information

I Tier-3 di CMS-Italia: stato e prospettive. Hassen Riahi Claudio Grandi Workshop CCR GRID 2011

I Tier-3 di CMS-Italia: stato e prospettive. Hassen Riahi Claudio Grandi Workshop CCR GRID 2011 I Tier-3 di CMS-Italia: stato e prospettive Claudio Grandi Workshop CCR GRID 2011 Outline INFN Perugia Tier-3 R&D Computing centre: activities, storage and batch system CMS services: bottlenecks and workarounds

More information

The EU DataGrid Testbed

The EU DataGrid Testbed The EU DataGrid Testbed The European DataGrid Project Team http://www.eudatagrid.org DataGrid is a project funded by the European Union Grid Tutorial 4/3/2004 n 1 Contents User s Perspective of the Grid

More information

Travelling securely on the Grid to the origin of the Universe

Travelling securely on the Grid to the origin of the Universe 1 Travelling securely on the Grid to the origin of the Universe F-Secure SPECIES 2007 conference Wolfgang von Rüden 1 Head, IT Department, CERN, Geneva 24 January 2007 2 CERN stands for over 50 years of

More information

High Performance Computing Course Notes Grid Computing I

High Performance Computing Course Notes Grid Computing I High Performance Computing Course Notes 2008-2009 2009 Grid Computing I Resource Demands Even as computer power, data storage, and communication continue to improve exponentially, resource capacities are

More information

Juliusz Pukacki OGF25 - Grid technologies in e-health Catania, 2-6 March 2009

Juliusz Pukacki OGF25 - Grid technologies in e-health Catania, 2-6 March 2009 Grid Technologies for Cancer Research in the ACGT Project Juliusz Pukacki (pukacki@man.poznan.pl) OGF25 - Grid technologies in e-health Catania, 2-6 March 2009 Outline ACGT project ACGT architecture Layers

More information

European Grid Infrastructure

European Grid Infrastructure EGI-InSPIRE European Grid Infrastructure A pan-european Research Infrastructure supporting the digital European Research Area Michel Drescher Technical Manager, EGI.eu Michel.Drescher@egi.eu TPDL 2013

More information

Worldwide Production Distributed Data Management at the LHC. Brian Bockelman MSST 2010, 4 May 2010

Worldwide Production Distributed Data Management at the LHC. Brian Bockelman MSST 2010, 4 May 2010 Worldwide Production Distributed Data Management at the LHC Brian Bockelman MSST 2010, 4 May 2010 At the LHC http://op-webtools.web.cern.ch/opwebtools/vistar/vistars.php?usr=lhc1 Gratuitous detector pictures:

More information

Grid Infrastructure For Collaborative High Performance Scientific Computing

Grid Infrastructure For Collaborative High Performance Scientific Computing Computing For Nation Development, February 08 09, 2008 Bharati Vidyapeeth s Institute of Computer Applications and Management, New Delhi Grid Infrastructure For Collaborative High Performance Scientific

More information

A Simplified Access to Grid Resources for Virtual Research Communities

A Simplified Access to Grid Resources for Virtual Research Communities Consorzio COMETA - Progetto PI2S2 UNIONE EUROPEA A Simplified Access to Grid Resources for Virtual Research Communities Roberto BARBERA (1-3), Marco FARGETTA (3,*) and Riccardo ROTONDO (2) (1) Department

More information

THE GLOBUS PROJECT. White Paper. GridFTP. Universal Data Transfer for the Grid

THE GLOBUS PROJECT. White Paper. GridFTP. Universal Data Transfer for the Grid THE GLOBUS PROJECT White Paper GridFTP Universal Data Transfer for the Grid WHITE PAPER GridFTP Universal Data Transfer for the Grid September 5, 2000 Copyright 2000, The University of Chicago and The

More information

Grid Programming: Concepts and Challenges. Michael Rokitka CSE510B 10/2007

Grid Programming: Concepts and Challenges. Michael Rokitka CSE510B 10/2007 Grid Programming: Concepts and Challenges Michael Rokitka SUNY@Buffalo CSE510B 10/2007 Issues Due to Heterogeneous Hardware level Environment Different architectures, chipsets, execution speeds Software

More information

Scheduling Computational and Storage Resources on the NRP

Scheduling Computational and Storage Resources on the NRP Scheduling Computational and Storage Resources on the NRP Rob Gardner Dima Mishin University of Chicago UCSD Second NRP Workshop Montana State University August 6-7, 2018 slides: http://bit.ly/nrp-scheduling

More information

DESY. Andreas Gellrich DESY DESY,

DESY. Andreas Gellrich DESY DESY, Grid @ DESY Andreas Gellrich DESY DESY, Legacy Trivially, computing requirements must always be related to the technical abilities at a certain time Until not long ago: (at least in HEP ) Computing was

More information

Introduction Data Management Jan Just Keijser Nikhef Grid Tutorial, November 2008

Introduction Data Management Jan Just Keijser Nikhef Grid Tutorial, November 2008 Introduction Data Management Jan Just Keijser Nikhef Grid Tutorial, 13-14 November 2008 Outline Introduction SRM Storage Elements in glite LCG File Catalog (LFC) Information System Grid Tutorial, 13-14

More information

Distributed Monte Carlo Production for

Distributed Monte Carlo Production for Distributed Monte Carlo Production for Joel Snow Langston University DOE Review March 2011 Outline Introduction FNAL SAM SAMGrid Interoperability with OSG and LCG Production System Production Results LUHEP

More information

A short introduction to the Worldwide LHC Computing Grid. Maarten Litmaath (CERN)

A short introduction to the Worldwide LHC Computing Grid. Maarten Litmaath (CERN) A short introduction to the Worldwide LHC Computing Grid Maarten Litmaath (CERN) 10-15 PetaByte/year The LHC challenge Data analysis requires at least ~100k typical PC processor cores Scientists in tens

More information

The glite middleware. Ariel Garcia KIT

The glite middleware. Ariel Garcia KIT The glite middleware Ariel Garcia KIT Overview Background The glite subsystems overview Security Information system Job management Data management Some (my) answers to your questions and random rumblings

More information

THEBES: THE GRID MIDDLEWARE PROJECT Project Overview, Status Report and Roadmap

THEBES: THE GRID MIDDLEWARE PROJECT Project Overview, Status Report and Roadmap THEBES: THE GRID MIDDLEWARE PROJECT Project Overview, Status Report and Roadmap Arnie Miles Georgetown University adm35@georgetown.edu http://thebes.arc.georgetown.edu The Thebes middleware project was

More information

Enabling Grids for E-sciencE. EGEE security pitch. Olle Mulmo. EGEE Chief Security Architect KTH, Sweden. INFSO-RI

Enabling Grids for E-sciencE. EGEE security pitch. Olle Mulmo. EGEE Chief Security Architect KTH, Sweden.  INFSO-RI EGEE security pitch Olle Mulmo EGEE Chief Security Architect KTH, Sweden www.eu-egee.org Project PR www.eu-egee.org EGEE EGEE is the largest Grid infrastructure project in the World? : 70 leading institutions

More information

The glite File Transfer Service

The glite File Transfer Service The glite File Transfer Service Peter Kunszt Paolo Badino Ricardo Brito da Rocha James Casey Ákos Frohner Gavin McCance CERN, IT Department 1211 Geneva 23, Switzerland Abstract Transferring data reliably

More information

Operating the Distributed NDGF Tier-1

Operating the Distributed NDGF Tier-1 Operating the Distributed NDGF Tier-1 Michael Grønager Technical Coordinator, NDGF International Symposium on Grid Computing 08 Taipei, April 10th 2008 Talk Outline What is NDGF? Why a distributed Tier-1?

More information

The EPIKH, GILDA and GISELA Projects

The EPIKH, GILDA and GISELA Projects The EPIKH Project (Exchange Programme to advance e-infrastructure Know-How) The EPIKH, GILDA and GISELA Projects Antonio Calanducci INFN Catania (Consorzio COMETA) - UniCT Joint GISELA/EPIKH School for

More information

On the employment of LCG GRID middleware

On the employment of LCG GRID middleware On the employment of LCG GRID middleware Luben Boyanov, Plamena Nenkova Abstract: This paper describes the functionalities and operation of the LCG GRID middleware. An overview of the development of GRID

More information

The PanDA System in the ATLAS Experiment

The PanDA System in the ATLAS Experiment 1a, Jose Caballero b, Kaushik De a, Tadashi Maeno b, Maxim Potekhin b, Torre Wenaus b on behalf of the ATLAS collaboration a University of Texas at Arlington, Science Hall, PO Box 19059, Arlington, TX

More information

Distributed Data Management on the Grid. Mario Lassnig

Distributed Data Management on the Grid. Mario Lassnig Distributed Data Management on the Grid Mario Lassnig Who am I? Mario Lassnig Computer scientist main field of study was theoretical (algorithm design) working on/with distributed and embedded systems

More information

Promoting Open Standards for Digital Repository. case study examples and challenges

Promoting Open Standards for Digital Repository. case study examples and challenges Promoting Open Standards for Digital Repository Infrastructures: case study examples and challenges Flavia Donno CERN P. Fuhrmann, DESY, E. Ronchieri, INFN-CNAF OGF-Europe Community Outreach Seminar Digital

More information

Scientific data management

Scientific data management Scientific data management Storage and data management components Application database Certificate Certificate Authorised users directory Certificate Certificate Researcher Certificate Policies Information

More information

Data Replication: Automated move and copy of data. PRACE Advanced Training Course on Data Staging and Data Movement Helsinki, September 10 th 2013

Data Replication: Automated move and copy of data. PRACE Advanced Training Course on Data Staging and Data Movement Helsinki, September 10 th 2013 Data Replication: Automated move and copy of data PRACE Advanced Training Course on Data Staging and Data Movement Helsinki, September 10 th 2013 Claudio Cacciari c.cacciari@cineca.it Outline The issue

More information

The Virtual Observatory and the IVOA

The Virtual Observatory and the IVOA The Virtual Observatory and the IVOA The Virtual Observatory Emergence of the Virtual Observatory concept by 2000 Concerns about the data avalanche, with in mind in particular very large surveys such as

More information

Compact Muon Solenoid: Cyberinfrastructure Solutions. Ken Bloom UNL Cyberinfrastructure Workshop -- August 15, 2005

Compact Muon Solenoid: Cyberinfrastructure Solutions. Ken Bloom UNL Cyberinfrastructure Workshop -- August 15, 2005 Compact Muon Solenoid: Cyberinfrastructure Solutions Ken Bloom UNL Cyberinfrastructure Workshop -- August 15, 2005 Computing Demands CMS must provide computing to handle huge data rates and sizes, and

More information

Comparative evaluation of software tools accessing relational databases from a (real) grid environments

Comparative evaluation of software tools accessing relational databases from a (real) grid environments Comparative evaluation of software tools accessing relational databases from a (real) grid environments Giacinto Donvito, Guido Cuscela, Massimiliano Missiato, Vicenzo Spinoso, Giorgio Maggi INFN-Bari

More information

Grid Middleware and Globus Toolkit Architecture

Grid Middleware and Globus Toolkit Architecture Grid Middleware and Globus Toolkit Architecture Lisa Childers Argonne National Laboratory University of Chicago 2 Overview Grid Middleware The problem: supporting Virtual Organizations equirements Capabilities

More information

Knowledge Discovery Services and Tools on Grids

Knowledge Discovery Services and Tools on Grids Knowledge Discovery Services and Tools on Grids DOMENICO TALIA DEIS University of Calabria ITALY talia@deis.unical.it Symposium ISMIS 2003, Maebashi City, Japan, Oct. 29, 2003 OUTLINE Introduction Grid

More information

Introduction to GT3. Introduction to GT3. What is a Grid? A Story of Evolution. The Globus Project

Introduction to GT3. Introduction to GT3. What is a Grid? A Story of Evolution. The Globus Project Introduction to GT3 The Globus Project Argonne National Laboratory USC Information Sciences Institute Copyright (C) 2003 University of Chicago and The University of Southern California. All Rights Reserved.

More information

Production Grids. Outline

Production Grids. Outline Production Grids Last Time» Administrative Info» Coursework» Signup for Topical Reports! (signup immediately if you haven t)» Vision of Grids Today» Reality of High Performance Distributed Computing» Example

More information

The LHC Computing Grid

The LHC Computing Grid The LHC Computing Grid Gergely Debreczeni (CERN IT/Grid Deployment Group) The data factory of LHC 40 million collisions in each second After on-line triggers and selections, only 100 3-4 MB/event requires

More information

Sphinx: A Scheduling Middleware for Data Intensive Applications on a Grid

Sphinx: A Scheduling Middleware for Data Intensive Applications on a Grid Sphinx: A Scheduling Middleware for Data Intensive Applications on a Grid Richard Cavanaugh University of Florida Collaborators: Janguk In, Sanjay Ranka, Paul Avery, Laukik Chitnis, Gregory Graham (FNAL),

More information

Online data storage service strategy for the CERN computer Centre G. Cancio, D. Duellmann, M. Lamanna, A. Pace CERN, Geneva, Switzerland

Online data storage service strategy for the CERN computer Centre G. Cancio, D. Duellmann, M. Lamanna, A. Pace CERN, Geneva, Switzerland Online data storage service strategy for the CERN computer Centre G. Cancio, D. Duellmann, M. Lamanna, A. Pace CERN, Geneva, Switzerland Abstract. The Data and Storage Services group at CERN is conducting

More information

HEP replica management

HEP replica management Primary actor Goal in context Scope Level Stakeholders and interests Precondition Minimal guarantees Success guarantees Trigger Technology and data variations Priority Releases Response time Frequency

More information

SDS: A Scalable Data Services System in Data Grid

SDS: A Scalable Data Services System in Data Grid SDS: A Scalable Data s System in Data Grid Xiaoning Peng School of Information Science & Engineering, Central South University Changsha 410083, China Department of Computer Science and Technology, Huaihua

More information

The University of Oxford campus grid, expansion and integrating new partners. Dr. David Wallom Technical Manager

The University of Oxford campus grid, expansion and integrating new partners. Dr. David Wallom Technical Manager The University of Oxford campus grid, expansion and integrating new partners Dr. David Wallom Technical Manager Outline Overview of OxGrid Self designed components Users Resources, adding new local or

More information

Presentation Title. Grid Computing Project Officer / Research Assistance. InfoComm Development Center (idec) & Department of Communication

Presentation Title. Grid Computing Project Officer / Research Assistance. InfoComm Development Center (idec) & Department of Communication BIRUNI Grid glite Middleware Deployment : From Zero to Hero, towards a certified EGEE site Presentation Title M. Farhan Sjaugi,, Mohamed Othman, Mohd. Zul Yusoff, Speaker Mohd. Rafizan Ramly and Suhaimi

More information

EUDAT - Open Data Services for Research

EUDAT - Open Data Services for Research EUDAT - Open Data Services for Research Johannes Reetz EUDAT operations Max Planck Computing & Data Centre Science Operations Workshop 2015 ESO, Garching 24-27th November 2015 EUDAT receives funding from

More information

EGEE - providing a production quality Grid for e-science

EGEE - providing a production quality Grid for e-science EGEE - providing a production quality Grid for e-science Fabrizio Gagliardi EGEE Project Director CERN Fabrizio. Gagliardi@cern.ch Marc-Elian Begin CERN Marc-Elian.Begin@cern. ch On behalfofthe EGEE Collaboration

More information

Service Mesh and Microservices Networking

Service Mesh and Microservices Networking Service Mesh and Microservices Networking WHITEPAPER Service mesh and microservice networking As organizations adopt cloud infrastructure, there is a concurrent change in application architectures towards

More information

DSpace Fedora. Eprints Greenstone. Handle System

DSpace Fedora. Eprints Greenstone. Handle System Enabling Inter-repository repository Access Management between irods and Fedora Bing Zhu, Uni. of California: San Diego Richard Marciano Reagan Moore University of North Carolina at Chapel Hill May 18,

More information

Credential Management in the Grid Security Infrastructure. GlobusWorld Security Workshop January 16, 2003

Credential Management in the Grid Security Infrastructure. GlobusWorld Security Workshop January 16, 2003 Credential Management in the Grid Security Infrastructure GlobusWorld Security Workshop January 16, 2003 Jim Basney jbasney@ncsa.uiuc.edu http://www.ncsa.uiuc.edu/~jbasney/ Credential Management Enrollment:

More information

Evolution of the ATLAS PanDA Workload Management System for Exascale Computational Science

Evolution of the ATLAS PanDA Workload Management System for Exascale Computational Science Evolution of the ATLAS PanDA Workload Management System for Exascale Computational Science T. Maeno, K. De, A. Klimentov, P. Nilsson, D. Oleynik, S. Panitkin, A. Petrosyan, J. Schovancova, A. Vaniachine,

More information

ALICE Grid Activities in US

ALICE Grid Activities in US ALICE Grid Activities in US 1 ALICE-USA Computing Project ALICE-USA Collaboration formed to focus on the ALICE EMCal project Construction, installation, testing and integration participating institutions

More information

Globus GTK and Grid Services

Globus GTK and Grid Services Globus GTK and Grid Services Michael Rokitka SUNY@Buffalo CSE510B 9/2007 OGSA The Open Grid Services Architecture What are some key requirements of Grid computing? Interoperability: Critical due to nature

More information

EarthCube and Cyberinfrastructure for the Earth Sciences: Lessons and Perspective from OpenTopography

EarthCube and Cyberinfrastructure for the Earth Sciences: Lessons and Perspective from OpenTopography EarthCube and Cyberinfrastructure for the Earth Sciences: Lessons and Perspective from OpenTopography Christopher Crosby, San Diego Supercomputer Center J Ramon Arrowsmith, Arizona State University Chaitan

More information

R-GMA (Relational Grid Monitoring Architecture) for monitoring applications

R-GMA (Relational Grid Monitoring Architecture) for monitoring applications R-GMA (Relational Grid Monitoring Architecture) for monitoring applications www.eu-egee.org egee EGEE-II INFSO-RI-031688 Acknowledgements Slides are taken/derived from the GILDA team Steve Fisher (RAL,

More information