NAREGI Middleware Overview V 1.1 March, 2011 National Institute of Informatics



Copyright 2004 National Institute of Informatics, Japan. All rights reserved. This file or a portion of this file is licensed under the terms of the NAREGI Public License. If you redistribute this file, with or without modifications, you must include this notice in the file.

Third-party software notices
- J2SSH Library: This product includes software developed by SSHTools.
- OpenSSL: This product includes software developed by the OpenSSL Project for use in the OpenSSL Toolkit. This product includes cryptographic software written by Eric Young. This product includes software written by Tim Hudson.
- MyProxy: This product includes software developed by Computing Services at Carnegie Mellon University. This product includes software developed by the NetBSD Foundation, Inc. and its contributors.
- mod_ssl: This product includes software developed by Ralf S. Engelschall for use in the mod_ssl project.
- NetBSD libnbcompat: This product includes software developed by the NetBSD Foundation, Inc. and its contributors.
- gLite/gLite Security Utilities: This product includes software developed by The EU EGEE Project.
- VOMS/VOMS C API: This product includes software developed by the EU DataGrid.
- aica: This product includes software developed by Akira Iwata Laboratory, Nagoya Institute of Technology in Japan.
- Xerces Java Parser, XML Security: This product includes software developed by the Apache Software Foundation.

Trademarks and Registered Trademarks
Linux is a registered trademark of Linus Torvalds in the United States and other countries. AIX is a trademark of International Business Machines Corporation in the United States. Solaris is a trademark or registered trademark of Sun Microsystems, Inc. in the United States and other countries. GGF and Global Grid Forum are trademarks of GGF. PBS Professional is a trademark of Altair Grid Technologies, LLC in the United States. Other company and product names mentioned in this document are trademarks or registered trademarks of their respective companies.

Introduction
This document is the overview of NAREGI Middleware. NAREGI Middleware is software that has been developed by the National Research Grid Initiative (NAREGI). NAREGI Middleware not only manages distributed computing resources, but also provides an integrated grid environment, including environments for usage and programming. Many of the components comprising NAREGI Middleware are implemented as Web services compliant with the Web Services Resource Framework (WSRF). NAREGI Middleware coordinates these components to realize a more user-friendly and advanced grid environment. For details of NAREGI Middleware, see the NAREGI project website.

Target readers of this document
This document is intended for individuals who install, maintain, or use NAREGI Middleware.

Document configuration
The NAREGI Middleware documentation consists of the following separate volumes:
- NAREGI Middleware Overview (this document): provides a functional overview of the entire NAREGI Middleware.
- NAREGI Middleware Installation Guide: describes the operating conditions for the entire NAREGI Middleware and how to install it using the rpm package.
- NAREGI Middleware Administrator's Guide: describes the requirements necessary to run the installed NAREGI Middleware as a whole. For details on the operation of each service of NAREGI Middleware, refer to the NAREGI Middleware AG and UG below.
- NAREGI Middleware User's Guide: describes how to operate each service of the installed NAREGI Middleware, focusing on services with a GUI. For details on how to operate each service of NAREGI Middleware, refer to the NAREGI Middleware UG below.
- NAREGI Middleware AG (Administrator Guide): an instruction manual for an administrator of each service of NAREGI Middleware. It consists of separate volumes by service component.
- NAREGI Middleware UG (User Guide): an instruction manual for a user of each service of NAREGI Middleware. It consists of separate volumes by service component with user functions.

Important
For the latest information on NAREGI Middleware V1.1 (including the supported OS versions), refer to the Readme file. If you have any questions about NAREGI Middleware, please visit the NAREGI Middleware FAQ page (in Japanese).

Table of Contents

Introduction
1. Overview
    1.1 What is NAREGI?
        Project Overview
        Next-generation Research Environment
        Virtual Organization (VO)
        Cyber Science Infrastructure (CSI)
        Future Trend and Science Grid Development
    1.2 What is NAREGI Middleware?
        Elementary Technologies for NAREGI Middleware
        System Configuration
        NAREGI Middleware Processing Flow
        Job Types
        Operation Model
2. NAREGI Grid Middleware Functions
    2.1 Security
        NAREGI-CA
        VOMS
        UMS (User Management Server)
    2.2 User Environment
        NAREGI Portal
        Registration, Search and Update of Applications
        Application Compilation
        Application Deployment
        Workflow
        Import Function
        Status Acquisition
        Job Execution
        Command Line Tool
        GVS (Grid Visualization System)
    2.3 Grid Programming Environment
        GridMPI
        GridRPC
    2.4 Grid Application Support
        Process Management
        Semantics Conversion of Different Kinds of Data
        Synchronous Data Transfer (SBC)
    2.5 Data Grid
        Data Grid Resource Management System
        Data Grid Access Management System
    2.6 Resource Management
        Resource Information Collection and Storage
        Computing Resource Search and Reservation
        Job Reservation and Management
        Access Control
        Job Monitoring
        Information Display Function

1. Overview

1.1 What is NAREGI?

Project Overview
NAREGI is a research and development project for grid middleware which aims to realize a science grid that networks the supercomputers installed in universities and research institutes. This science grid will be a core technology in the next-generation research environment for supporting academic research, taking full advantage of the supercomputers in universities and research institutes. For more information concerning the NAREGI project, please visit the NAREGI project website.

Next-generation Research Environment
Large-scale, high-speed processing by supercomputers is essential for emerging studies: fields such as nanotechnology and biotechnology require computational science, while elementary particle physics and astronomy must handle huge amounts of measured or observed data. For this purpose, supercomputers are used not only by a conventionally restricted set of researchers but by a wide variety of researchers. In addition, these emerging studies are advanced and complex, with many mutually correlated elements, so researchers must cooperate with each other more closely. Because the need for a research environment supporting this kind of research is increasing, the NAREGI project plans to build a next-generation research environment, such as a virtual organization (VO), which not only enables the necessary computing resources and experiment facilities to be used by anybody, anywhere, but also allows researchers studying related fields to do research together. The National Institute of Informatics is making efforts to build a cyber science infrastructure (CSI), comprising a network, authentication, and a science grid, as a next-generation research infrastructure.

Virtual Organization (VO)
A VO (Virtual Organization) shares computing resources, data and applications among multiple real organizations and employs them collaboratively. The NAREGI Middleware environment builds a VO from a set of computing resource nodes provided to the VO and a set of management nodes for managing the VO. Users can share the computing resources and use the applications without being aware of which specific organization provides the VO's computing resources. VO operations can be performed securely across VOs having different authentication policies, so long as the VOs are mutually authenticated. In addition, accounting can be conducted based on the user's VO information and the amount of resources used.

Cyber Science Infrastructure (CSI)
CSI is primarily intended to build a next-generation research infrastructure for efficiently

conducting advanced research by organizing research communities, storing knowledge, studying collaboratively and sharing research resources. For security, CSI builds a university public key infrastructure (UPKI) based on the Science Information Network 3 (SINET3), taking collaboration among universities into account. In this science grid, researchers form VO-based research communities that differ from the actual organizations (the universities and research institutes they belong to), according to the purposes of their research. They may belong to several research communities as needed and manage the VOs dynamically; for example, they may dissolve a VO that is no longer needed or build an organization appropriate to the research to be conducted. Within a VO, its members can share computer resources, experiment facilities, data and programs to conduct efficient large-scale simulations, promote advanced research such as coupled-analysis simulation, and use large-scale data efficiently. Furthermore, they can collaborate to accumulate intellectual property through CSI operations.

Figure: Cyber science infrastructure (CSI) diagram — research communities (virtual organizations VO:1, VO:2, VO:3) built with NAREGI middleware and operated by the Grid Operation and Coordination team (GOC), layered on the university public key infrastructure (UPKI) and the Science Information Network (SINET3), connecting real organizations (colleges and institutes).

Future Trend and Science Grid Development
Computational science is essential for future research and development (R&D) activities in the conventional electrical and automobile industries and in the new nanotechnology and biotechnology fields, as well as for academic research. In the future, supercomputers will be used not only by specific researchers but also by general researchers. For this reason, supercomputers will no longer be used in a narrow, limited environment; they will need to be scalable to the size of the research and available anywhere, as needed, according to the style of research. In addition, as the number of supercomputer users and application fields increases, it will become evident that computation cost and turnaround time must decrease, while the need for multi-physics and multi-scale computation increases.

A science grid principally meets new needs for various uses, such as selecting and using a supercomputer of the scale best suited to the research, linking and using various types of supercomputers, and using supercomputers from portal sites without being aware of the supercomputers themselves. To make the science grid truly useful for researchers, however, various kinds of support must be provided. To cope with this, the Grid Operation and Coordination team (GOC) is planning an organization to provide certificate authority (CA) operations, grid certificate issuance, cooperation with the academic authentication infrastructure, and user support and user training. The science grid is also expected to extend its coverage from supercomputers in universities and research institutes to next-generation supercomputers and to systems installed in laboratories, to establish an environment in which the supercomputer resources in Japan can be used effectively. New studies cannot overcome these issues in the global research and development environment as long as they are conducted only in the closed environment of Japan; it is essential that the science grid environment be open to the world. To build such a research environment, the NAREGI Grid Middleware Ver.1.0 complies with global standards.

Figure: Grid development for the next-generation research environment — NAREGI Middleware linking a peta-scale computing environment, information infrastructure centers, research institutes and research communities by field, laboratory-level cluster computers, university (internal) computing resources, and international collaborations (GIN, EGEE, TeraGrid, etc.).

1.2 What is NAREGI Middleware?
Grid middleware comprises elementary technologies for virtualizing network resources (computing resources of different architectures and a wide range of data), and combines the software implemented with these technologies so that it runs in collaboration. One purpose of this grid middleware is to ensure that users can use network resources easily and seamlessly. It is generally said that grid middleware components require the following:
- Authentication technology to permit users to use a grid environment with single sign-on
- Resource management technology for collecting and managing the computing resources connected to the grid environment
- Technology for searching for and reserving computing resources suitable for users' requests
- Technology for sharing distributed data and programs on grids

Elementary Technologies for NAREGI Middleware
The NAREGI middleware works as fully WSRF-compliant Web-based middleware in which technologies for realizing application use environments (to improve user friendliness) and grid-related programming environments are combined so as to collaborate with each other. The NAREGI middleware comprises the following elementary technologies (components), shown in the figure below.

Figure: NAREGI middleware components — (1) Security; (2) Use environment: NAREGI Portal (Portal), GridPSE (PSE), WorkFlow Tool (WFT), Grid Visualization System (GVS); (3) Grid programming environment: GridMPI, GridRPC; (4) Grid Application; (5) DG: DataGrid; (6) Resource Management: IS (Information Service), SS (Super Scheduler), GridVM (Grid Virtual Machine).

(1) Security
The security component provides authentication and authorization functions to implement GSI (Grid Security Infrastructure). The authentication function is a scheme for issuing and managing various kinds of grid certificates via a PKI (Public Key Infrastructure). The NAREGI-provided NAREGI-CA (Certificate Authority software) meets the operation standard defined by the IGTF (International Grid Trust Federation) and enables grid certificate authorities all over the world to cooperate with each other. By defining a VO (Virtual Organization), the authorization functions allow the user to share resources across multiple organizations having organization-dependent resource access policies. For details on security, refer to 2.1 Security.

(2) User environment
The NAREGI middleware user environment comprises the PSE (Grid Problem Solving Environment), WFT (WorkFlow Tool) and GVS (Grid Visualization System) components, for simply and efficiently running applications in the grid environment, and the NAREGI Portal, which serves as the entry point for using the NAREGI middleware. PSE places applications developed by researchers in the grid environment and helps research communities share the applications. WFT allows job execution control code to be created easily via a user-friendly GUI. GVS visualizes the computing results in the grid environment. For details of the user environment, refer to 2.2 User Environment.

(3) Grid programming environment
To simplify programming for concurrent processing in an environment where a variety of resources of different architectures (different models) are distributed far and wide, GridRPC (Grid Remote Procedure Call) and GridMPI (Grid Message Passing Interface) are provided. GridMPI ensures highly interoperable, high-performance communication, taking communication delays on the grid into account at the TCP/IP and MPI library levels. GridRPC not only simplifies the development of grid applications that have large fault tolerance and can use resources dynamically, but also ensures high execution efficiency. For details of the grid programming environment, refer to 2.3 Grid Programming Environment.
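As a rough illustration of the GridRPC programming model just described — look up a remote function by name, issue asynchronous calls, then wait on the returned handles — the following Python sketch mimics the call/handle pattern with the standard library. The class and method names are invented stand-ins; this is not the actual GridRPC API, and the "remote" function runs in a local thread pool rather than on a grid server.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-in for a function hosted on a remote grid server.
def matrix_trace(n):
    return sum(i * 2 for i in range(n))

class GridRPCClient:
    """Toy client mimicking the GridRPC call/handle pattern (names invented)."""
    def __init__(self, registry):
        self.registry = registry           # name -> callable ("remote" functions)
        self.pool = ThreadPoolExecutor(4)  # stands in for remote servers

    def call_async(self, name, *args):
        # Conceptually: initialize a function handle, then make an async call.
        return self.pool.submit(self.registry[name], *args)

client = GridRPCClient({"matrix_trace": matrix_trace})
# Fire several independent "remote" calls, then wait on the handles.
handles = [client.call_async("matrix_trace", n) for n in (10, 100, 1000)]
results = [h.result() for h in handles]    # wait for all handles to complete
print(results)  # prints [90, 9900, 999000]
```

The point of the pattern is that the caller overlaps several remote computations and collects results only when needed, which is what gives GridRPC-style applications their dynamic use of resources.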

(4) Grid application support
To support the execution of coupled computation programs in a grid environment, grid coupling middleware (Mediator), which supports advanced semantics conversion among multiple applications, and SBC, which supports synchronous file-based data transfer among multiple applications, are provided. For details of grid application support, refer to 2.4 Grid Application Support.

(5) Data grid
To use globally distributed data resources in the grid environment, a data grid is provided. The data grid comprises a data grid resource management system and a data grid access management system. The data grid resource management system accumulates the large amounts of data acquired through large-scale simulations in shared files and allows the files to be used from distributed computing environments on the grid. The data grid access management system efficiently transfers the large amounts of data in the shared files to a specific computing environment. These functions are used together with SS according to the settings made in WFT. For details of the data grid, refer to 2.5 Data Grid.

(6) Resource management
Resource management is primarily designed to integrate and manage globally distributed computing resource environments comprising different models as a single operational entity, and comprises the SS (Super Scheduler), GridVM (Grid Virtual Machine) and distributed resource information service (IS) components. SS has a resource brokering function and a job workflow engine for managing resources and jobs. GridVM virtualizes computing resources and jobs across different models and provides management services for the resources and jobs via standard interfaces. IS provides integrated management of information concerning computing resources, networks, software and accounting in the grid environment. These three components cooperate with each other to implement a Co-allocation function for computing resources of different architectures and a bulk job submission function for parameter survey jobs, as well as a regular job entry function. For details of resource management, refer to 2.6 Resource Management.

System Configuration
This section describes the VO configurations available in the NAREGI middleware and then describes the respective NAREGI middleware components implemented in the computers (nodes) that make up a VO.

(1) VO configuration
A VO is comprised of human resources and computing resources. The computing resources consist of administration nodes and computation nodes. An administration node is a computer (or group of computers) for NAREGI middleware resource management, information service provision and data management. A computation node is a computing resource for job execution and resource information collection. In the NAREGI middleware, each VO is recommended to have its own administration node (see (a) and (b) in Figure 1.2-2).

Figure: Configurations containing administration nodes in units of VOs — (a) configuration of a single VO; (b) configuration of multiple VOs, each with its own administration node and computation nodes.

However, an administration node may be shared by multiple VOs so long as they are under the control of the same administrator (Figure 1.2-3). In this case, special care should be taken when designing the administration node, because load may be concentrated on it.

Figure: Example configuration of multiple VOs sharing an administration node.

The administration and computation nodes comprise the following nodes respectively. For details of the respective nodes and the NAREGI middleware components installed in those nodes, refer to the next section, (2) VO configuration node.

Administration node: UMS node, Portal node, SS node and IS-NAS node
Computation node: GridVM administration node, GridVM computation node and IS-CDAS node

(2) VO configuration node
The following description covers the VO component nodes and the NAREGI Middleware components running on each node. The figure below shows an example of the computers making up a VO served by one group of administration nodes ((a) in Figure 1.2-2).

Figure: Node configuration example. Administration nodes: CA node (Certificate Authority), RA node (Registration Authority), UMS node (UMS, gLite VOMS *1), Portal node (Portal, MyProxy *2, Renewal Service, GridPSE, WFT, GVS SP, IS, SS Client, DataGrid UI *3), SS node (SS, Renewal Service), IS-NAS node (IS-NAS, IS-MCDAS, IS-SCDAS, LRPS, SBC), DG-UFT node (DG-DGUFT) *3, DG-DRMS-SV node (DG-Gfarm-VO) *3. Computation nodes, grouped into cell domains: GridVM Scheduler nodes (GridVM Scheduler, LRPS), IS-CDAS node (IS-MCDAS, IS-RUS, IS-SCDAS), and GridVM clusters of GridVM Engine nodes (GridVM Engine, GVS PV, SS Client, SBC, Mediator, GridRPC, GridMPI).
*1 gLite VOMS is not included in the NAREGI middleware.
*2 MyProxy is not included in the NAREGI middleware.
*3 When a DataGrid is used, this node or component is mandatory.

In this figure, a GridVM cluster refers to one or more GridVM computation nodes under the control of one GridVM administration node. A cell domain is a unit of information collection by IS and comprises one or more GridVM clusters, one IS-CDAS node or a set of IS-MCDAS nodes, and an IS-SCDAS node (multiple sets of IS-CDAS nodes can be placed within the same cell domain so long as the IS-CDAS redundancy function is used).

(2-1) Flagship node
This node is necessary for installing NAREGI Middleware using the rpm package.

(2-2) CA node and RA node
These nodes are provided with Certificate Authority (CA) software and Registration Authority (RA) software, respectively. When NAREGI-CA, included in the rpm package of the NAREGI middleware, is used, it is possible to issue a simple certificate for assessment and research. For details of NAREGI-CA, refer to 2.1 Security.

(2-3) UMS node
This node provides a VOMS (VO Membership Service) function for managing VOs and a UMS (User Management Server) function for managing users. VOMS creates and deletes VOs and registers VO managers and users; these operations are controlled by the VO managers and are transparent to the users. UMS has user management functions, including making user certificate issue requests and depositing the certificates. For details of VOMS and UMS, refer to 2.1 Security.

(2-4) Portal node
To sign on to the NAREGI middleware via a Web browser, users access a Web server in this node. This node contains not only user environment components such as Portal, PSE, WFT and GVS SP (GVS Service Provider), but also the client functions of the resource management components IS and SS and the security component Renewal Service. This node also comprises the Data Grid UI. MyProxy, from the Globus Toolkit, is installed in this node. MyProxy creates a short-term proxy certificate from the public key and private key contained in a user certificate issued by a CA, within the validity period of that user certificate. This proxy certificate enables single sign-on and access to nodes demanding authentication. For more information on the user environment, resource management and security, refer to 2.2 User Environment, 2.6 Resource Management and 2.1 Security respectively.

(2-5) IS-NAS node
IS-NAS (Node Aggregator Service), IS-MCDAS (Monitoring Celldomain Aggregator Service), IS-SCDAS and LRPS (Local Resource Provider Service), which are subcomponents of the resource management component IS, and SBC, a component of the grid application environment, are installed in this node. IS-NAS manages information in units of VOs. It receives information from SS and PSE by means of IS-MCDAS, IS-SCDAS and LRPS and saves it in IS-NAS collectively. It also performs job monitoring and collects status information. LRPS in this node has a role different from that of LRPS in the GridVM administration node (described later); it collects the status information of jobs registered from SS. SBC provides a synchronous data transfer function. For details on resource management and the grid application environment, refer to 2.6 Resource Management and 2.4 Grid Application Support.

(2-6) IS-CDAS node
IS-CDAS (IS-MCDAS and IS-SCDAS) and RUS, subcomponents of the resource management component IS, are installed in this node.

The IS-MCDAS subcomponent collects information in units of cell domains and handles information that can be referenced currently, while the IS-SCDAS subcomponent can reference history information. The RUS subcomponent handles resource usage records. For more information concerning resource management, refer to 2.6 Resource Management.

(2-7) SS node
The server functions of the resource management component SS and the security component Renewal Service are both installed in this node. SS provides functions for securing computing resources according to job requests, brokering them among other components, reserving and managing jobs, and demanding multiple machine resources for a single job. For details on resource management and security, refer to 2.6 Resource Management and 2.1 Security respectively.

(2-8) GridVM node (GridVM administration node and GridVM computation node)
The GridVM Scheduler is installed in the GridVM administration node. The GridVM Scheduler controls the GridVM computation nodes (described later). In addition, the IS subcomponent LRPS is installed in this node. LRPS in this node has a role different from that of LRPS in the IS-NAS node; it collects information concerning jobs at the computation node (GridVM computation node) site as well as CPU information. For details on resource management, refer to 2.6 Resource Management. Installing the GridVM Engine subcomponent in this node enables it to act as a GridVM computation node as well as a GridVM administration node; in this case, note that the node load itself increases. In addition, when a local scheduler other than GridVM (PBS Professional or Sun Grid Engine) is used together with GridVM, schedulers other than Sun Grid Engine must be installed before installing the NAREGI Middleware.

In addition to the GridVM subcomponent GridVM Engine, the GVS client function GVS PV (Parallel Viewer), the SS Client, and the Mediator, SBC, GridMPI and GridRPC components are all installed in each GridVM computation node. This node is managed by the GridVM Scheduler and operated as a computing resource for the NAREGI Middleware. For more information concerning resource management, refer to 2.6 Resource Management. This node allows a reserved job (a job for which computing resource usage time has been reserved) and a non-reserved job to be submitted together. In this case, the running non-reserved job is cancelled, because the reserved job takes precedence over the non-reserved job (when the local job scheduler is PBS Professional). To assure an environment for running non-reserved jobs, you are recommended to prepare a GridVM computation node for which reservation is disabled in the GridVM Scheduler setting. For details of the job types supported by the NAREGI middleware, refer to Job Types.
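The IS components named in the node descriptions above (LRPS collectors on GridVM administration nodes, IS-CDAS aggregating per cell domain, and IS-NAS aggregating per VO) form an information aggregation hierarchy. The following Python sketch illustrates that two-level merge conceptually; all function and field names here are invented and do not reflect the actual IS interfaces or schema.

```python
# Toy two-level aggregation: node collectors -> cell-domain aggregator -> VO-level view.
# All names are illustrative stand-ins for LRPS, IS-CDAS and IS-NAS.

def lrps_collect(node):
    """Stand-in for LRPS: report one node's resource information."""
    return {"node": node["name"], "cpus": node["cpus"], "free_cpus": node["free_cpus"]}

def cdas_aggregate(cell_domain, nodes):
    """Stand-in for IS-CDAS: merge node reports within one cell domain."""
    reports = [lrps_collect(n) for n in nodes]
    return {
        "cell_domain": cell_domain,
        "total_cpus": sum(r["cpus"] for r in reports),
        "free_cpus": sum(r["free_cpus"] for r in reports),
    }

def nas_aggregate(vo, cell_reports):
    """Stand-in for IS-NAS: merge cell-domain summaries for the whole VO."""
    return {
        "vo": vo,
        "total_cpus": sum(c["total_cpus"] for c in cell_reports),
        "free_cpus": sum(c["free_cpus"] for c in cell_reports),
    }

cell_a = cdas_aggregate("cell-A", [{"name": "n1", "cpus": 8, "free_cpus": 2},
                                   {"name": "n2", "cpus": 16, "free_cpus": 16}])
cell_b = cdas_aggregate("cell-B", [{"name": "n3", "cpus": 32, "free_cpus": 8}])
vo_view = nas_aggregate("nano-1", [cell_a, cell_b])
print(vo_view)  # {'vo': 'nano-1', 'total_cpus': 56, 'free_cpus': 26}
```

The benefit of this shape is that the VO-level service never talks to individual nodes directly; each level only consumes the summaries produced by the level below it.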

(2-9) DG-UFT node and DG-DRMS-SV node
The DataGrid subcomponents DG-DGUFT and DG-Gfarm-VO are installed in these nodes respectively. You can select whether or not to use the DataGrid. DG-DGUFT has an access management function for registering, updating and deleting shared files, while DG-Gfarm-VO has a resource management function for managing the allocation, arrangement and used capacity of the shared files in a unified manner. The DataGrid makes it possible to reserve a large amount of disk capacity as common space in the grid environment, so that data can be registered as metadata (data with added information). In addition, the shared files can be staged onto a job allocation node (data transfer) according to the workflow. For more details on the DataGrid, refer to 2.5 Data Grid.

(2-10) GridRPC frontend node
This node is necessary for using GridRPC from the NAREGI Middleware. The GridRPC and GridMPI components are built into this node.

NAREGI Middleware Processing Flow
The following describes a flow of processing using the NAREGI middleware, referring to the figure below.

Figure: NAREGI Middleware processing flow — single sign-on via MyProxy and VOMS/UMS to the Portal (WFT, PSE, GVS); application registration and deployment via PSE and ACS; data registration, search and staging via the Data Grid (DG-DGUFT, DG-Gfarm-VO, Gfarm Metadata Server, file servers, shared file system); job entry and monitoring via the SS client; co-allocation and non-reserved job entry through GRAM/GridVM and local schedulers on computing resources A and B; resource and accounting information (including VO and CIM information) collected in IS.

(1) User certificate acquisition and sign-on
To use the NAREGI middleware, access and log in to the NAREGI Portal via a Web browser and acquire a user certificate. A proxy certificate is created by adding the attributes of the user's VO to the retrieved certificate, and is stored on the MyProxy server. Then, sign on to the NAREGI Portal. For more information concerning the user certificate acquisition and sign-on procedures, refer to the NAREGI Middleware User's Guide.

(2) Application registration and deployment
Users register the applications and programs to be used on the NAREGI middleware in the application repository on the PSE, and then compile and deploy them to an available computation site. At this time, if the necessary data and files are imported into the DataGrid shared file system in advance, it is possible to specify that they be called at job execution. For details on application registration and deployment using PSE, refer to the descriptions given in Registration, Search and Update of Applications through Application Deployment.

(3) Job execution
(3-1) Workflow definition to job entry
Define a workflow (the job execution procedure and its relationships with I/O files) and its job execution requirements using WFT. Then, submit the job from WFT. At this time, the proxy certificate of the submitting user is passed to SS and GridVM. The job is executed on a computation node under the control of GridVM after the VO attributes and user are authorized successfully. For more information concerning WFT, refer to Job Execution.

(3-2) Search and reservation of job execution resources
Information concerning computation node VOs and computing resource conditions, such as the number of CPUs, is stored in the database. SS compares the execution requirements of a submitted job with the VO-related information and resource information collected in IS to find a computing resource meeting the execution requirements (resource brokering).
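The resource brokering step can be pictured as a filter over the resource information collected in IS, matching a job's requirements against candidate resources. The sketch below is purely illustrative: the requirement fields (vo, os, cpus) and the matching rule are assumptions for the example, not SS's actual brokering logic.

```python
# Toy resource broker: pick computing resources that satisfy a job's requirements.
# Field names (vo, os, free_cpus) are illustrative, not the real IS schema.

resources = [
    {"name": "clusterA", "vo": "nano-1", "free_cpus": 64,  "os": "linux"},
    {"name": "clusterB", "vo": "bio-2",  "free_cpus": 128, "os": "linux"},
    {"name": "clusterC", "vo": "nano-1", "free_cpus": 8,   "os": "aix"},
]

def broker(job, catalog):
    """Return the names of resources whose VO, OS and free CPUs satisfy the job."""
    return [r["name"] for r in catalog
            if r["vo"] == job["vo"]
            and r["os"] == job["os"]
            and r["free_cpus"] >= job["cpus"]]

job = {"vo": "nano-1", "os": "linux", "cpus": 32}
print(broker(job, resources))  # ['clusterA']
```

Note that VO membership filters the catalog before capacity is even considered, which mirrors the description above: jobs only ever land on resources provided to the user's VO.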
For jobs demanding a resource usage time reservation (reserved jobs), such as Co-allocation jobs, if an adequate computing resource is found, SS makes a time-specified resource usage reservation with the computing resource's local scheduler. GridVM on each computing resource checks the execution rights of the user to judge whether to execute the job. In addition, it

compares the policy defined for each job with the job requirements to determine the allowable amount of each resource in units of VOs. In addition, jobs demanding no reservation (non-reserved jobs) can be used together with jobs directly submitted to the local scheduler, according to the defined resource property. For more details on the job types supported by the NAREGI middleware, refer to Job Types.

(3-3) Job execution
A job reserved in GridVM is started at its reservation time. A non-reserved job is placed in the queue of the local scheduler and started according to the scheduling policy of the local scheduler.

(4) Visualization of computation results
The status and results of a computation in progress are visualized by GVS for confirmation.

(5) Collection of accounting information
Accounting information on job execution, such as computing resource information and usage rate, is registered in IS from GridVM. This information can be referenced via the GUI of IS.

Job Types
The NAREGI middleware supports the following types of jobs:

Reserved job
The NAREGI middleware handles both MPI jobs and Co-allocation jobs as reserved jobs. MPI jobs demanding no Co-allocation are to be changed to non-reserved jobs in the future. Co-allocation jobs are jobs that run on multiple computing resources concurrently to obtain solutions. To ensure their operation, reserved jobs are assigned a higher priority than non-reserved ones. While a reserved job is running, the NAREGI middleware occupies the corresponding computing resources (nodes). The reservation time is automatically set by SS.

Non-reserved job
Non-reserved jobs are run without a usage time reservation, according to the operation policy (First Come Fair Share, etc.). Jobs other than MPI and Co-allocation jobs are submitted as non-reserved jobs. These non-reserved jobs may coexist with reserved jobs and with jobs submitted directly to the local scheduler. In this case, however, the running non-reserved jobs are canceled when a reserved job is submitted or run

(depending on the local scheduler in use). When the local scheduler is PBS Professional, a running non-reserved job is canceled when a reserved job is submitted. For this reason, it is recommended to prepare a computing resource which cannot be reserved, to assure an environment for running non-reserved jobs (Table Job types and computing resource properties). The computing resource property can be set in units of GridVM clusters.

Table Job types and computing resource properties

  Computing resource property | Reserved job (via NAREGI) | Non-reserved job (via NAREGI) | Job directly submitted to local scheduler
  Reservation available       | ○ (node occupied)         | Δ (node not occupied)         | Δ (node not occupied)
  Reservation unavailable     | ×                         | ○ (node not occupied)         | ○ (node not occupied)

  ○ indicates that the job can be submitted; Δ indicates that the job may be cancelled; × indicates that the job cannot be submitted.

Operation Model
The following describes the operation model of VO-based research communities, referring to Figure VO creation example.

[Figure VO creation example: two real organizations, M Center and N Center, each run Portal, VOMS, DataGrid, SS and IS nodes in front of GridVM clusters managed by local schedulers (LS). Two VOs, Nano-1 and Bio-2, span the centers; research team A (M Center) and research team B (N Center) belong to the Nano-1 VO, whose members hold local accounts in accounting groups nano1-m and nano1-n, with members of the other center registered as additional accounts.]

The example shown in the figure has M Center and N Center as real organizations (ROs) and Nano-1 and Bio-2 as virtual organizations (VOs). The Nano-1 VO has two research teams: team A in M Center and team B in N Center.
The following descriptions assume a model where a new member in the Nano-1 VO uses computing resources belonging to M Center and N Center.

(1) Preparations
The administrator of the Nano-1 VO (simply called the VO administrator) registers a new VO member in VOMS. At this time, a user certificate for the new user is required. This user certificate shall be obtained from an IGTF-accredited certificate authority (CA) (the certificate identifies the user and is used after UPKI operation begins).

(2) User account preparation
The VO administrator asks the center administrators (administrators working for M Center and N Center) to create a user account (an account for accessing computing resources) for the new VO member (simply called the user).

(3) grid-mapfile preparation
The VO administrator describes the created user account in grid-mapfile and submits it to the center administrators concerned. The file grid-mapfile is a map describing, in pairs, a user name in the user certificate and a user name on the computing resource concerned. Each center administrator places the received grid-mapfile in the computing resources of his or her center.

(4) NAREGI middleware usage
The user accesses the NAREGI Portal via a Web browser and selects and signs on to his or her VO to use the various functions of the NAREGI middleware. For details of the procedure for using the NAREGI middleware, refer to NAREGI Middleware Processing Flow.
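A grid-mapfile, as prepared in step (3), conventionally holds one quoted certificate subject DN and one local account name per line. The DNs and account names below are invented for illustration; a minimal sketch of reading such a file:

```python
# Illustrative parser for a grid-mapfile, which maps certificate
# subject DNs to local account names, one quoted DN and one account
# per line. All DNs and account names here are made up.
import shlex

SAMPLE = '''
"/C=JP/O=NII/OU=NAREGI/CN=Member One" member1
"/C=JP/O=NII/OU=NAREGI/CN=Member Three" member3
'''

def parse_grid_mapfile(text):
    mapping = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        dn, account = shlex.split(line)   # shlex honours the quoted DN
        mapping[dn] = account
    return mapping

accounts = parse_grid_mapfile(SAMPLE)
print(accounts["/C=JP/O=NII/OU=NAREGI/CN=Member One"])   # member1
```

When a job arrives, the computing resource looks up the subject DN of the presented certificate in this map to find the local account to run under.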

2. NAREGI Grid Middleware Functions
This chapter deals with the functions of the respective NAREGI middleware services.

2.1 Security
The NAREGI middleware security and authentication functions comprise NAREGI-CA, acting as certificate authority (CA) software, VOMS, acting as a VO member management system, and a user management server (UMS). They provide security services based on Globus GSI (a security system which delegates privileges using X.509 user certificates and proxy certificates). Owing to these security and authentication functions, each user can use the NAREGI middleware with a single sign-on. In addition, in order to establish linkages among VOs, a Renewal Service function is provided for automatically updating a proxy certificate in conjunction with SS during job execution.

2.1.1 NAREGI-CA
NAREGI-CA serves as the NAREGI middleware certificate authority software. A certificate authority established with this software is called a NAREGI CA. To use the NAREGI middleware environment, it is essential to have a user certificate issued by an officially authorized certificate authority (CA); such a certificate can also be obtained via the NAREGI-CA included in the rpm package of the NAREGI middleware. The CA and RA can be installed on the same node or on different nodes.

(1) NAREGI-CA main functions
The following lists the main functions of NAREGI-CA.

Table NAREGI-CA main functions

Certificate request creation: Generates a key pair and a certificate-signing request (CSR).

Issue of certificate for a certificate-signing request: Signs a certificate-signing request and issues a certificate.

Registration, verification and issue of certificate-signing requests: Accumulates certificate-signing requests (CSRs) in the CA CSR queue and issues certificates from the CSRs in that queue. Note that when a certificate-signing request comes in from a remote machine, its certificate is not issued immediately; a certificate issue operation is allowed only after verification by the operator.

Renewal of certificate: Changes only the expiration date and extended information of a certificate, without changing the serial number and public key.

Batch issue of certificates: Issues certificates at one time by using a CSV file conforming to a designated format.

Revocation of certificate: Revokes a certificate issued to a user who has lost the private key or had it stolen.

Cancellation of certificate revocation: Restores a revoked certificate to a normal certificate (revocation-canceled certificate).

Issue of CRL: Issues a certificate revocation list (CRL: Certificate Revocation List). The CRL is issued so that the revocation status can be checked at the time of user certificate or signature verification.

Export of certificate and private key: The CA deposits certificates and private keys so that a user certificate or user private key can be exported by designating a serial number. In addition, a certificate-signing request deposited in the CA CSR queue can be exported.

Import of private key: The CA has a function for depositing a user private key and allows the user to import a user private key so long as an issued certificate and a private key are provided as a pair.

Deletion of private key: The CA has a function for depositing issued certificates and user private keys; a user private key can be deleted from the key store of the CA. In addition, a certificate-signing request placed in the CA CSR queue can be deleted.

Display of profile settings: Displays the profile settings held in the CA. When the user runs this command, the Issuer and Subject of the CA and the current serial number, valid days and extended information of the designated profile are displayed.

Renewal of profile settings: Renews the profile settings held in the CA. The user can run this command to set the current serial number and valid days of the designated profile.

Renewal of profile extension information: Sets the extended information of certificates in units of profiles. Because the extended information set in the designated profile is added to the certificate at the time of issuance, information set here is reflected as it is when certificates are issued. Profile extended information can be set step by step according to the setting information displayed on the screen.

Addition, deletion and renaming of profiles: Each CA can hold multiple profiles, so that profiles can be created in units of the groups for which certificates are issued, and unnecessary profiles can be deleted. In addition, profile names can be changed as necessary.

Addition and deletion of operators: When a CA is started on a local machine for direct operations, it can be operated by inputting the CA master password. However, when the CA server is connected from a remote console, the operator must be authenticated. This function makes it possible to configure the certificates, IDs, passwords and access privileges of operators who may connect to this CA server.
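The revocation-related functions in the table above amount to maintaining a set of revoked serial numbers that is published as the CRL. This toy model is an illustration of that bookkeeping, not NAREGI-CA's actual implementation; all names are invented:

```python
# Toy model of certificate revocation: a CRL is in effect the set of
# serial numbers of revoked certificates. Class and method names are
# invented for illustration.
class ToyCA:
    def __init__(self):
        self.issued = {}      # serial -> subject
        self.revoked = set()  # serials that appear on the CRL
        self._next = 1

    def issue(self, subject):
        serial = self._next
        self._next += 1
        self.issued[serial] = subject
        return serial

    def revoke(self, serial):      # "Revocation of certificate"
        self.revoked.add(serial)

    def unrevoke(self, serial):    # "Cancellation of certificate revocation"
        self.revoked.discard(serial)

    def crl(self):                 # "Issue of CRL"
        return sorted(self.revoked)

ca = ToyCA()
s = ca.issue("/CN=Member One")
ca.revoke(s)
print(ca.crl())    # [1]
ca.unrevoke(s)
print(ca.crl())    # []
```

A verifier consults the CRL at certificate or signature verification time, rejecting any certificate whose serial number appears on it.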
(2) Flow from certificate registration to authentication The following figure shows a processing flow from certificate registration to authentication through CA (Certificate Authority) and RA (Registration Authority).

[Figure Processing flow from certificate registration to authentication: the end user (or host administrator), the user administrator (Local RA), the RA (Registration Authority), the CA (Certificate Authority) and the site administrator exchange the license ID, key pair, CSR, certificate and grid-mapfile in the numbered steps below.]

The following describes the processing flow according to the numbers in the figure.

(1) The user administrator (Local RA) acquires an end-user license ID from the RA (Registration Authority).
(2) The user administrator (Local RA) examines and confirms the end user's identity and approves the license ID acquired in (1) only when identification succeeds.
(3) The end user (or host administrator) acquires the license ID approved by the user administrator (Local RA).
(4) The end user (or host administrator) creates a key pair (private key and public key) and passes the public key to the RA (Registration Authority) together with the license ID.
(5) The RA (Registration Authority) requests a certificate from the CA (Certificate Authority).
(6) The CA (Certificate Authority) issues and manages the certificate.
(7) The end user (or host administrator) acquires the issued certificate from the RA (Registration Authority).
(8) The site administrator acquires a grid-mapfile from the RA (Registration Authority) for management and operation.

2.1.2 VOMS
The NAREGI middleware embeds VO information in certificates based on gLite VOMS, developed by EGEE, to ensure security among VOs. VOMS has functions for creating, adding and deleting VOs, registering VO administrators, registering and deregistering users in VOs, and managing the approved attributes (Group, Role and Capability) of users registered in VOs. In addition, it issues a proxy certificate with a VO attribute to each user registered in a VO. This proxy certificate contains information (the approved attributes) concerning the VO to which the user belongs.
When a job is executed by the user, the computing resource uses this approved attribute for authorization.
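VOMS attributes are usually expressed as FQAN strings of the form /vo/group/Role=role. The policy representation below is invented for illustration; a minimal sketch of the authorization step:

```python
# Sketch of attribute-based authorization: the proxy carries VOMS
# attributes (VO, group, role) as FQAN strings, and the computing
# resource checks them against a local policy. The policy format
# (a plain set of accepted FQANs) is an invented simplification.
def authorized(fqans, allowed):
    """Accept the job if any proxy attribute appears in the policy."""
    return any(f in allowed for f in fqans)

proxy_fqans = ["/nano-1/GroupA/Role=NULL", "/nano-1/Role=NULL"]
resource_policy = {"/nano-1/GroupA/Role=NULL"}

print(authorized(proxy_fqans, resource_policy))   # True
```

A real resource would typically also map the matched attribute to a local account or quota, per VO.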

When UMS (described later) is used, the setting file (vomses file) of each user is placed in UMS. This vomses file describes the VOMS server managing the corresponding VO, and allows the proxy certificate lifetime, the Portal's automatic time-out interval and the NAREGI middleware session time-out interval to be set.

2.1.3 UMS (User Management Server)
UMS provides integrated certificate management by issuing user certificate requests, depositing the certificates, acquiring proxy certificates with VO attributes from VOMS and depositing the proxy certificates in a certificate repository (MyProxy server). This integrated management means that users do not need to manage their user certificates and private keys themselves.
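The repository role UMS delegates to MyProxy can be pictured as a per-user store of short-lived proxies with an expiry check. This is a conceptual sketch only (the real MyProxy is a network service, and all names here are invented):

```python
# Conceptual sketch of a proxy-certificate repository: UMS deposits a
# short-lived proxy per user, and services fetch it while it is valid.
# This models only the behaviour described in the text.
import time

class ProxyRepository:
    def __init__(self):
        self._store = {}   # user -> (proxy, expiry time)

    def deposit(self, user, proxy, lifetime_s):
        self._store[user] = (proxy, time.time() + lifetime_s)

    def fetch(self, user):
        proxy, expiry = self._store[user]
        if time.time() >= expiry:
            raise RuntimeError("proxy expired; renewal needed")
        return proxy

repo = ProxyRepository()
repo.deposit("member1", "<proxy with VO attribute>", lifetime_s=12 * 3600)
print(repo.fetch("member1"))
```

The Renewal Service mentioned in 2.1 corresponds to refreshing such an entry before it expires while a long-running job still needs it.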

2.2 User Environment
NAREGI Portal is provided as the interface through which the user uses the NAREGI middleware. It also provides a grid application use environment (a group of functions) supported by an intuitive GUI, various display functions integrated into IS, and a data grid start screen.

2.2.1 NAREGI Portal
NAREGI Portal functions as the interface between the user and the NAREGI middleware. The user can call and use the various functions (grid tools) seamlessly with a single sign-on to the NAREGI middleware. In addition, the registration of the user certificate required for single sign-on, as well as proxy certificate registration, can be performed via the Portal screen. The following describes the screens and functions provided by NAREGI Portal.

(1) Sign-on/sign-out screen
This screen is used by the user to sign on to or sign out from the NAREGI middleware.

(2) Grid tool start screen
This screen is the initial screen for starting the various NAREGI middleware functions (Grid Tools). By default, in addition to grid application environments including PSE, WFT and GVS, the resource management component IS and the data grid are also available.

Figure Grid Tools screen

(3) Login/logout screen
This screen is used by the user to log in to or log out from UMS (refer to 2.1.3).

(4) Certificate acquisition/registration screen
To ensure that the user can use the NAREGI middleware functions seamlessly, a user certificate for user identification and a proxy certificate allowing usage over multiple VOs must be provided. NAREGI Portal provides a screen to acquire and register these certificates. The user password saved in the UMS server can also be changed via this screen.

(5) Customization
The Portal screen configuration can be customized in units of VOs by editing the setting file. The following table summarizes the customizable items.

Grid tools: Adds or deletes an available grid tool or specifies its opening method.

Service setting: Specifies the following items as the information on the available services. For more details, refer to the Administrator's Guide of the Portal and the Renewal Service.
- Host name and port number of the SS node (<SS node's FQDN>:8080 as a default setting)
- Host name and port number of the node running the Renewal Service (the SS node)
- URL of the IS node (as a default setting, <IS node's FQDN>:8443/wsrf/services/org/naregi/infoservice/aggregator/node/factory/nafs)

Title screen: Specifies the title image of the grid tools startup screen and sign-out screen.

Display color: Specifies the display color of the grid tools startup screen and sign-out screen.

2.2.2 Registration, Search and Update of Applications
A user-created application can be registered in the NAREGI middleware or deleted from it. In addition, it is possible to search for applications registered by other users.

(1) Application registration
The user can register applications in an application repository (ACS: Application Contents Service) on the NAREGI middleware by means of PSE, one of the use environment components.
At the time of application registration, the user can also input the program name, the input files required for application execution, an application description, the requirements of resources needed for application execution, the user name to be registered, and the name of the VO to which the user belongs. In addition, it is possible to register resource requirements after narrowing them down by referencing the computing resource information collected in advance by IS. This approach is effective when the application to be registered sets conditions for resources (OS, processor type and amount of memory).
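Narrowing candidate resources by the requirements recorded at registration is essentially a filter over the information collected by IS. The resource records and field names below are invented stand-ins for that information:

```python
# Sketch of narrowing candidate resources by registered requirements
# (OS, processor type, amount of memory). The resource records stand
# in for information collected by IS; hosts and fields are invented.
resources = [
    {"host": "clusterA", "os": "Linux", "cpu": "x86_64", "memory_gb": 16},
    {"host": "clusterB", "os": "Linux", "cpu": "x86_64", "memory_gb": 4},
    {"host": "clusterC", "os": "AIX",   "cpu": "power",  "memory_gb": 32},
]

def matching_resources(req, resources):
    return [r["host"] for r in resources
            if r["os"] == req["os"]
            and r["cpu"] == req["cpu"]
            and r["memory_gb"] >= req["min_memory_gb"]]

req = {"os": "Linux", "cpu": "x86_64", "min_memory_gb": 8}
print(matching_resources(req, resources))   # ['clusterA']
```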

In addition, it is possible to load an application binary into computers and register a script to be run after binary loading, or to register a keyword so that the application can be retrieved later. The input information is reflected in the JSDL (Job Submission Description Language) document required for execution and then saved in PSE. If necessary, scripts can be registered for compilation and test execution. Application registration can also be performed with a command line tool stored on a client PC in conjunction with NAREGI Portal.

(1-1) Format of applications to be registered
Applications to be registered must be prepared in ZIP file format together with input data and parameter data and uploaded into a node (the Portal node in the sample node configuration diagram). At this time, the applications can be uploaded either from a client PC (local environment) or from a remote environment. Note that three kinds of applications can be registered: source files, executable modules (binary format), and pre-installed applications (Pre-installed Application Binary).

(1-2) Cooperation with WFT
A procedure (workflow) describing the I/O files for application execution and the flow of parameters required for it can be prepared in advance with WFT and then registered in PSE. This function is primarily designed to make applications easy to execute even when the application developers are different from the application users (application practitioners).

(1-3) Cooperation with DataGrid
When an application is registered, its relationship with a data file required for application execution can be registered as part of the application information. This function can be used together with DataGrid not only to register the associated data file in DataGrid but also to associate it with the application information.
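For reference, a JSDL document is an XML JobDefinition. The namespace below is the one defined by the JSDL 1.0 specification, but the element subset is an illustrative minimum, not the exact document PSE generates:

```python
# Minimal JSDL-like document built from registered application
# information. The namespace is from the JSDL 1.0 specification;
# the element subset and the job name are illustrative only.
import xml.etree.ElementTree as ET

JSDL = "http://schemas.ggf.org/jsdl/2005/11/jsdl"
ET.register_namespace("jsdl", JSDL)

def q(tag):
    return "{%s}%s" % (JSDL, tag)

job = ET.Element(q("JobDefinition"))
desc = ET.SubElement(job, q("JobDescription"))
ident = ET.SubElement(desc, q("JobIdentification"))
ET.SubElement(ident, q("JobName")).text = "sample-app"
app = ET.SubElement(desc, q("Application"))
ET.SubElement(app, q("ApplicationName")).text = "sample-app"

doc = ET.tostring(job, encoding="unicode")
print(doc)
```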
(1-4) Application replica registration
A new application can be registered using previously registered application information as a template. This eliminates the need to input the common items. When an application is selected as a template, the application best suited to the VO to which the user belongs can be selected via the application search interface.

(1-5) Batch registration of multiple applications
When multiple executable modules are generated from a source tree, they can be registered

at a time by a single registration operation. They can also be compiled concurrently (refer to Application Compilation).

(1-6) Registration of private or VO-shared applications
Private applications are applications which can be referenced, updated, compiled, deployed and deleted only by users who have been registered in advance. This category of application can be registered for private use by the application developers who prepared them. VO-shared applications are applications which can be referenced, compiled, deployed and undeployed in common by users who belong to the same VO.

(2) Application search
Applications can be searched for using an application name, user name, group name or overview as a search item (key). Multiple search items (keys) can be specified by separating them with commas, and they can be combined as either AND or OR conditions. In addition, computing resources can be referenced in accordance with resource requirements. Furthermore, application search, like application registration, can be performed via the Portal command line interface.

(3) Application update
When previously registered application information is updated, it is possible to specify and update an application component source file or binary file. When the target source file or binary file is selected, the application best suited to the VO to which the user belongs can be selected via the application search interface.

2.2.3 Application Compilation
A registered application is converted into a binary meeting the resource requirements (OS, processor type, etc.). In addition, a compile script to run can be registered, and its execution can be confirmed.

(1) Compilation of multiple applications
When an application is compiled, the batch-registered applications associated with it are compiled together at one time. At this time, the sequence of compilation needs to be determined.
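Determining a compilation sequence from registered inter-application dependencies amounts to a topological sort. The application names and dependencies below are invented, and graphlib is the Python standard library, not a NAREGI component:

```python
# Determining a compilation sequence from registered dependencies is
# a topological sort. Application names are invented for illustration.
from graphlib import TopologicalSorter

# progb depends on proga (proga must be compiled first), and so on.
deps = {"progb": {"proga"}, "progc": {"proga", "progb"}}
order = list(TopologicalSorter(deps).static_order())
print(order)   # proga appears before progb, which appears before progc
```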

(2) Automatic compilation of related applications
When an application is registered, its references to and from other applications (for example, calling and execution of the other applications) can also be registered (Figure 2.2-2). If such registration takes place, the referenced applications are all loaded and executed automatically, so users need not be conscious of which applications are compiled and run. This approach is effective when the application registrant is different from the application practitioner.

[Figure Automatic compilation of related applications: an application registrant registers proga (prog.c, compile.sh, execute.sh, lib/foo.c producing libfoo.a) and progb in PSE together with the relationship "proga is required for progb compilation"; at compilation and deployment time, PSE compiles both with the registered references and deploys the binaries (a.out) to Server#1, Server#2 and Server#3, so that the application user can easily use progb.]

(3) Test execution
This function is used to check that the application operates normally, or to conduct operation confirmation tests on environment-specific processing, in advance.

2.2.4 Application Deployment
An application registered in the application repository is deployed to computing resources. There are two deployment methods: non-shared deployment into the user's home directory, and VO-shared deployment into a shared directory. VO-shared deployment can be used only when it is permitted by the policies of the VO and of the computing resource (refer to the following figure). Every user who belongs to the same VO can use an application deployed in the shared directory for job execution.

[Figure Sharing deployed applications in VO: hosts with a shared directory (e.g., /share/vo1) and VOs that permit shared deployment receive the application once in the shared directory, while VOs or hosts without sharing receive a copy in each deploying user's home directory.]

As in compilation, users need not be conscious of applications whose references to or from other applications have been registered; all the referenced applications are deployed (loaded) automatically.

(1) Deployment via multiple repositories
Any number of application repositories (ACS) can be created, depending on the number of computers or their deployment on the network. In this way, even when applications need to be deployed over multiple sites, they can be deployed efficiently on many computers. When computers are located in different sites, applications can be deployed efficiently, with a minimal number of file transfers between the sites, so long as the ACS service and the application deployment service are both provided at each site.
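The shared versus non-shared deployment choice described above can be sketched as a simple decision: deploy to the VO shared directory only when both the VO policy and the host provide for it, otherwise fall back to the user's home directory. Paths, VO names and policy flags here are illustrative:

```python
# Sketch of the deployment-target decision: VO-shared deployment is
# used only when the VO policy permits sharing AND the host provides
# a shared directory for that VO; otherwise deploy per user. All
# paths and names are invented.
def deploy_target(vo, user, vo_allows_sharing, host_shared_dirs):
    if vo_allows_sharing and vo in host_shared_dirs:
        return "%s/app" % host_shared_dirs[vo]   # VO-shared deployment
    return "/home/%s/app" % user                 # non-shared deployment

shared = {"vo1": "/share/vo1"}
print(deploy_target("vo1", "user1", True, shared))    # /share/vo1/app
print(deploy_target("vo2", "user4", False, shared))   # /home/user4/app
```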

[Figure Deployment from multiple repositories: a user's registration via the PSE UI stores the application information on the PSE server and duplicates the application archive from one application repository (ACS) to another; the application deployment function, in cooperation with IS-CDAS and SS for the VO, transfers files to the computation servers (each running IS-LRPS), registers the deployment information and executes post-processing, after which the duplicate is deleted.]

(2) Test execution
As in compilation, this function is used to check that the application operates normally, or to conduct operation confirmation tests on environment-specific processing, in advance.

2.2.5 Workflow Import Function
This function cooperates with WFT to import an application workflow (described later).

2.2.6 Status Acquisition
This function makes it possible to confirm on the screen the status of application registration, compilation or deployment in progress.

2.2.7 Job Execution
There are two methods of executing a job. The first uses a GUI representing programs, data and I/O relations as icons, and the second uses a command line tool for command-based job execution. The following describes them in detail.

(1) Job execution from a workflow
Programs, data and I/O relations are represented as icons and then executed via the GUI. Because the execution sequence and relationships (the workflow) are also represented with icons, programs can be executed intuitively. Icons such as the GridMPI, Co-Allocation and coupled-job icons are prepared in advance for

large-scale science, in which multiple jobs need to be executed concurrently via one icon. When a created job workflow is submitted via the NAREGI middleware, its execution status is monitored via the workflow GUI as follows: the indicator is yellow in the initial state; it turns blue at the beginning of computing, green at the end of computing, and red on abnormal end.

(1-1) Icon types
The NAREGI middleware provides the following icons, which are used for workflow definition.

Workflow icon: An icon for a workflow defined in such a way that multiple programs and data are associated via program and data icons and can be run collaboratively on multiple systems. The workflow itself can be saved in the workflow folder.

Program icon: An icon created in units of programs to run; it defines execution requirements and I/O information. A multi-computing icon supports definitions for programs to be run concurrently with varied parameters for the purpose of parameter study.

Data icon: An icon representing a data file on a computer.

Array job icon: Supports the execution of a bulk job (a set of programs to be run concurrently) using the array job function of the local scheduler by means of a workflow using this icon.

GridMPI icon: Allows the user to define a set of programs using GridMPI. This icon can be called from a workflow icon or Co-Allocation icon, and supports concurrent execution of jobs with varied parameters for the purpose of parameter study.

Co-Allocation icon: Provides for starting the set of programs constituting a workflow while allocating resources to them; that is, this icon starts all the associated programs at the same time. Like the other icons, it can be called via the workflow icon for workflow definition.

Flow control icons: Two icons, a conditional branch icon and a loop execution icon. These icons can control the flow of programs to run, like conditional and loop statements in a regular program.

(1-2) File manipulation
When a workflow is defined, this function allows file transfer and editing between a client PC and an administrative node computer.

File upload/download: Uploads data files created on a client machine to an administrative node computer, or downloads them. An uploaded file can be defined as a data icon at workflow creation so that the file can be used via the defined icon.

Editing of administrative node computer data files: Edits a data file on an administrative node which has been defined as a data icon on the workflow edit screen or Co-Allocation job edit screen.

(2) Job execution
Job execution can also be triggered by a command line tool installed on a client PC, in addition to the method using the workflow. In this case, it is necessary to sign on to the NAREGI middleware by means of the sign-on tool. The status of a submitted job can be confirmed on the job list screen. In addition, its execution status can be confirmed via the job constitution icons, and its log can be confirmed via the Monitor screen. These screens are refreshed periodically.

(3) Debug functions
WFT provides the following debug functions.

Step execution: Selects one of the program icons on the debug execution screen and runs the icons one by one. Once execution of the selected program icon terminates, execution stops and execution of the subsequent icons is suspended.

Next execution: Selects one of the workflow icons or control icons on the debug execution screen and runs all its nested workflows. Once execution of the selected icon terminates, execution stops and execution of the subsequent icons is suspended. The breakpoints set in the nested workflows are all discarded.

Breakpoint setting/clearing: Sets one or more breakpoints in an icon on the debug execution screen, or clears the breakpoints set in the icon.

Partial execution: Executes processing from the current execution stop point to the next breakpoint. When the nested workflows of a workflow icon contain one or more breakpoints, processing suspends at the breakpoint set in the nested workflow.

Continue execution: Executes all the subsequent icons from the current execution stop position. The breakpoints set in those icons are all discarded.

Debug execution cancel: Cancels the job being debugged.

Debug execution screen job save: Saves a job on the debug execution screen. When the debug execution screen is closed, a save confirmation dialog box is displayed. Clicking the Yes button saves the job being debugged and closes the debug execution screen. Clicking the No button cancels the job debugging and closes the debug execution screen. Clicking the [Cancel] button neither saves the debugging job nor closes the debug execution screen. Once a job is saved, it can be called again by selecting the job whose execution status character string displayed in [Status] on the job list screen has (Debug) appended.

Rescue execution: Opens, on the execution status screen, a workflow whose regular job execution (not debug execution) has terminated abnormally, and reruns it from the abnormally terminated icon after the program or input data associated with that icon has been corrected. This execution is performed by the execution administrator for regular execution, rather than by the simplified WF engine.
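The status colours described in (1) (yellow while waiting, blue while running, green on normal end, red on abnormal end) can be modeled as a small state machine. The state names and class are invented for illustration:

```python
# Sketch of the workflow status display: each icon's state maps to an
# indicator colour (yellow = initial, blue = running, green = done,
# red = abnormal end). State names are invented.
COLOURS = {"waiting": "yellow", "running": "blue",
           "done": "green", "error": "red"}

class IconStatus:
    def __init__(self):
        self.state = "waiting"

    def start(self):
        self.state = "running"

    def finish(self, ok=True):
        self.state = "done" if ok else "error"

    @property
    def colour(self):
        return COLOURS[self.state]

icon = IconStatus()
print(icon.colour)     # yellow
icon.start()
print(icon.colour)     # blue
icon.finish(ok=False)
print(icon.colour)     # red
```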

(4) Saving and restoring the workflow environment
The user can save the set of workflow icons created in a folder other than the one called Folder on the main screen to any client machine, or restore a user environment file saved on the client machine into WFT from the folder.

- Export UserEnv: The workflow icons in the folder are saved on a client machine as a tar file.
- Import UserEnv: A user environment saved on the client machine is reloaded into a workflow.
- Delete UserEnv: A workflow icon is deleted.

Figure Operations via a workflow environment folder

2.2.8 Command Line Tool
Some of the job operation functions among the user environment functions discussed so far can be run via a command line tool (CLT) installed on the client PC. The following lists the main operations that can be run via the command line tool.

- Registration of applications and workflows from a client PC
- Search and acquisition of registered applications and workflows
- Search for computing resources to which application deployment can take place
- Deployment of registered applications onto computing resources
- Definition of job execution requirements and job execution
- Listing of jobs under WFT control and display of job execution status
- Cancellation of running jobs

2.2.9 GVS (Grid Visualization System)
This section describes the execution model of GVS. The Client is a GUI started on the user's PC, and the Visualizer is the main body of the visualization processing, started on a computation server. Acting as an access point from users' PCs, the Service Provider relays communications with the Visualizer. By linking with other Service Providers, it also coordinates multiple Visualizers in parallel.

In a coupled or parallel computation across multiple sites, calculated data are distributed and stored at multiple sites and visualized independently on each computation server. The multiple visualization images are then transferred under compression and superimposed into one consistent image, which appears on the user's PC. This eliminates the need to transfer massive amounts of calculated data, not only between a user's PC and a site but also among multiple sites; only small data, such as visualization parameters and compressed images, are transferred. GVS also overcomes memory and performance constraints through parallel visualization on computation servers, enabling users to visualize calculated data of any size, up to the limit of the number of computation nodes.

(1) Dynamic switching of target data to visualize
During a visualization session, you can switch the target data to visualize to another piece of calculated data via the GUI, without restarting the remote visualization process in which the Parallel Visualization Module is incorporated. The following features are also provided:

- Calculated data to visualize can be specified and switched any number of times during a visualization session, not only at the start of a session.
- To visualize coupled simulations, multiple paths of calculated data files can be specified.
- For each of the above calculated data sets, the number of parallel processes to be assigned to the visualization processing can be specified.
At the start of a visualization session, the maximum number of parallel processes must be specified. If the total number of assigned parallel processes exceeds this maximum, the user is prompted to correct the number.

(2) Video Creation Scenario Editor
You can create a video creation scenario by specifying key frames (frames that define the shape of an object and changes in its position) and their timing via the GUI.

(3) Display Functions
Table 2.2-1, List of the GVS display functions, lists the major display functions.

Table 2.2-1 List of the GVS display functions

Molecule model display function:
  Based on molecule data defined in a three-dimensional physical space, the Parallel Visualization Module draws a molecule model in real time on a computation server:
  - Draws a molecule model with shading
  - Switches between the display of a space-filling model, stick model, and ball-and-stick model at any time

Translucent isosurface display function:
  Based on scalar data defined in a three-dimensional physical space, the Parallel Visualization Module draws a translucent isosurface in real time on a computation server:
  - Draws a translucent isosurface with shading based on the scalar physical quantity defined in the three-dimensional physical space
  - Two or more isosurfaces can be drawn simultaneously, and the transparency of each isosurface can be specified

Contour plane display function:
  Displays the distribution of a three-dimensional scalar field by color classification of a color map on an arbitrarily placed parallelogram plane (hereinafter called a contour plane):
  - Two or more contour planes can be set and overlaid simultaneously for display
  - The position, direction, and shape of the parallelogram plane of each contour plane can be set arbitrarily

Contour isosurface display function:
  Displays, on an isosurface of a three-dimensional scalar field, the distribution of another three-dimensional scalar field by color classification of a color map:
  - Two or more contour isosurfaces, including normal isosurfaces, can be set and overlaid simultaneously for display
  - The color map used for color classification of each contour isosurface can be set arbitrarily

Tube model display function:
  Based on the structural data (atomic coordinates and hierarchical structure information) of a biopolymer such as a protein, displays the principal chains as tubes:
  - For each of multiple principal chains, whether or not to apply a tube model can be selected; multiple tube models can be displayed simultaneously
  - The radius and color of each tube model can be set

Ribbon model display function:
  Based on the structural data (atomic coordinates and hierarchical structure information) of a biopolymer such as a protein, displays the secondary structures as ribbons:
  - For each of multiple secondary structures, whether or not to apply a ribbon model can be selected; multiple ribbon models can be displayed simultaneously
  - The width, thickness, and color of each ribbon model can be set

(4) Pause Function for Target Data Update
When a pause message is received from a Client, the Parallel Visualization Module moves into Pause mode, which stops the visualization function. During time-series data visualization, the time step stops. Visualization images are updated only when the Client updates the visualization parameters.

(5) Structural Lattice Auto Setting Function for Isosurface Creation
To reduce the user's load when setting parameters for displaying isosurfaces, a function is provided that automatically creates the coordinate values (the position and size of a rectangular parallelepiped) of physical-quantity lattice data based on the shape of a molecule.

(6) Time-Series Data Display Function
Any time-step data included in time-series data can be visualized and displayed as images. Details on this function are described below.

(7) Image Recording Operation Function
After specifying the destination directory in which images for the video are stored, the start and end step numbers, and the increment in steps, click the Start button. The Client moves into image-recording mode and starts creating the images for the video. If the specified directory does not exist, it is created, provided that its parent directory exists. During image-recording mode, the main window displays the latest image as it is created, and the status bar indicates the number of images created so far. The session and the Client cannot be terminated until image creation has finished; to stop image creation partway, click the Stop button. These menus are enabled for time-series data. If the steps specified via the Step menu are inconsistent with the actual visualization processing of the Visualizer session, an error message appears.

(8) Control Step Operation Function
After a session is started, when the target data to visualize is time-series data, the time steps of the target data can be changed without stopping the visualization operation. This menu is disabled for non-time-series data.

(9) Dynamic Data Screen (Target Data Dynamic Switching Function)
During a visualization session, you can switch the target data to another set of calculated data via the GUI without restarting the remote visualization process in which the Parallel Visualization Module is incorporated.

(10) Image Viewer Screen (Time-Series Image Replay Function)
When replaying visualized time-series data continuously, various settings are possible, such as the increment of time steps per frame and the replay speed.

2.3 Grid Programming Environment
To develop a large-scale application program in a grid environment, it is essential to have programming techniques for starting programs on computing resources that are geographically far apart, performing data transfers among concurrently running programs, and synchronizing those data transfers. The NAREGI middleware provides GridMPI and GridRPC as these programming techniques.

2.3.1 GridMPI
GridMPI is an MPI communication library that takes communication delays into account and assures high interoperability, so that MPI applications can run under the NAREGI middleware with high performance. GridMPI has the following features:
- Compliance with standards: GridMPI complies with the MPI-1.2 and MPI-2.0 standards, which enables existing MPI applications to run in a grid environment. In addition, the IMPI (Interoperable MPI) standard is employed for inter-cluster communication.
- Heterogeneous environment support: A mixed environment of different computer architectures (Linux/IA32 and IA64, AIX/POWER, Solaris/SPARC, SX-8) is supported to ensure operation in a grid environment.
- Improvement of inter-cluster communication performance: A group communication algorithm suitable for high-bandwidth wide-area networks reduces round-trip communications between clusters, resulting in effective use of bandwidth. In addition, the packet-pacing module PSPacer*, which improves congestion behavior of the TCP/IP protocol, is developed and provided.
* PSPacer:

The figure below shows an outline of GridMPI execution. To exchange cluster information for inter-cluster communication (via TCP/IP connection), the IMPI server process (IMPI server) is started first. Then the mpirun command is executed in each cluster, and each GridMPI process establishes a communication path to processes in the other cluster via the IMPI server. Furthermore, each GridMPI process is assigned a rank number that is unique across all clusters, and then job execution starts. The original protocol (YAMPI) or a vendor-provided MPI is used for communication within each cluster.
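The rank renumbering described above can be illustrated with a small sketch (a simulation in plain Python, not the GridMPI implementation): each cluster starts its own set of local ranks via mpirun, and the IMPI-style startup maps them to ranks that are unique across the combined MPI_COMM_WORLD. The cluster ordering is assumed to follow the order in which clusters connect to the IMPI server.

```python
def assign_global_ranks(cluster_sizes):
    """Map (cluster, local_rank) pairs to ranks unique across all clusters.

    cluster_sizes: number of processes started by mpirun in each cluster,
    in the order the clusters connect to the IMPI server.
    """
    mapping = {}
    next_rank = 0
    for cluster_id, size in enumerate(cluster_sizes):
        for local_rank in range(size):
            mapping[(cluster_id, local_rank)] = next_rank
            next_rank += 1
    return mapping

# Two clusters, each started with "mpirun -np 4 a.out" (as in the figure).
ranks = assign_global_ranks([4, 4])
print(ranks[(0, 0)], ranks[(1, 0)])  # 0 4
```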

[Figure: Overview of GridMPI execution. 1. The IMPI server is started (impi-server -server 2). 2. Each cluster connects to the IMPI server over a TCP port and exchanges cluster information. 3. mpirun -np 4 a.out is executed on Cluster A (@naregi.org) and Cluster B (@ims.ac.jp). 4. TCP connections form a single MPI_COMM_WORLD across both clusters.]

2.3.2 GridRPC
GridRPC provides a means of easily developing grid applications that use tens or hundreds of CPUs, based on a model of calling library functions on remote computers, and enables the developed applications to run with high execution efficiency. GridRPC is primarily designed to perform a specific part of a computation on a number of remote computing resources and is effective for describing relatively simple parallelism that conforms to the RPC model.

Figure: GridRPC

GridRPC has the following advantages:
- A particular computing resource on a grid can be used.
- Offloading to a wide-area computing server can shorten the time required for program execution.
- Parameter sweeps can be performed by multiple servers on the grid. A parameter sweep runs a particular computation on multiple servers concurrently, with each server independently computing with a different set of parameters.
- A concurrent task program can be created easily: an API is available to synchronize a variety of concurrent tasks involving interactions between clients and servers.
- Mathematical computations can be coded easily, and processing can scale with the grid, whose size changes dynamically.
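The parameter-sweep pattern above can be sketched in plain Python (standing in for the GridRPC API, whose actual calls are not shown here): a client issues asynchronous calls, one parameter set per server, and then waits for all results.

```python
from concurrent.futures import ThreadPoolExecutor

def remote_task(param):
    """Stand-in for a library function invoked on a remote server via RPC."""
    return param * param  # each server computes independently with its own parameter

def parameter_sweep(params, max_workers=4):
    # Asynchronous calls, one per parameter set; gather results when all complete.
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        futures = [pool.submit(remote_task, p) for p in params]  # non-blocking "calls"
        return [f.result() for f in futures]                     # wait / synchronize

print(parameter_sweep([1, 2, 3, 4]))  # [1, 4, 9, 16]
```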

2.4 Grid Application Support
The NAREGI middleware provides the coupling middleware Mediator, which runs as an application on a grid. For coupled simulations, Mediator provides data exchange functions to handle computations that use different representations of the same physical quantity in simulation programs with different physical models or computation algorithms, whether the coupling is loosely coupled over a wide-area grid or tightly coupled within a local-area grid.
The main data exchange functions comprise a process management function, a semantics conversion function for different kinds of data, and a synchronous data transfer function, provided as a common library (API). These APIs are incorporated into the coupled simulation programs to achieve efficient coupling with minimal program changes.

[Figure: Mediator basic configuration and functions. Simulations A and B each call the API functions; Mediator A and Mediator B cooperate to provide process management, semantics conversion of different types of data, and synchronous data transfer.]

The following describes the main data exchange functions.

2.4.1 Process Management
This function primarily supports simulation processes, group division among Mediator processes, and Mediator process support for data exchange in units of simulation processes. It is also available for synchronization management of file transfers between multiple simulations.

2.4.2 Semantics Conversion of Different Kinds of Data
This function accomplishes high-speed searches for the positional relationship between different kinds of discrete points that require data exchange, such as atomic coordinates defined in the molecular dynamics and molecular orbital methods, and mesh points defined in the finite difference and finite element methods.

This positional relationship between different kinds of discrete points involved in data exchange is defined as a correlation between them. For example, if the correlation between atomic coordinates and mesh points is defined as nearest neighbor, data exchange takes place between each atomic coordinate and its nearest-neighbor mesh point. Data at correlated discrete points are exchanged using a built-in or user-defined conversion function (semantics conversion).

2.4.3 Synchronous Data Transfer (SBC)
Data transfer takes place in two ways: a message-passing method uses the MPI library, while a synchronous file transfer method uses GridFTP. This synchronous data transfer function is also referred to as the Synchronous File Transfer Library (SBC) function. The method to be used is selected flexibly depending on how tightly the coupled simulation is linked.
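The nearest-neighbor correlation described above can be sketched as follows. This is a minimal illustration in plain Python, not the Mediator API; the point sets and the trivial conversion function are made up.

```python
def nearest_neighbor_correlation(atoms, mesh):
    """For each atomic coordinate, find the index of the nearest mesh point."""
    def dist2(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q))
    return [min(range(len(mesh)), key=lambda j: dist2(a, mesh[j])) for a in atoms]

def exchange(atom_values, correlation, n_mesh, convert=lambda v: v):
    """Transfer data from atoms to their correlated mesh points,
    applying a (here trivial) semantics-conversion function."""
    mesh_values = [0.0] * n_mesh
    for value, j in zip(atom_values, correlation):
        mesh_values[j] += convert(value)
    return mesh_values

atoms = [(0.1, 0.1), (0.9, 0.8)]
mesh = [(0.0, 0.0), (1.0, 1.0)]
corr = nearest_neighbor_correlation(atoms, mesh)
print(corr)                               # [0, 1]
print(exchange([2.0, 3.0], corr, len(mesh)))  # [2.0, 3.0]
```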

2.5 Data Grid
The amount of research data handled by grids has grown from terabytes to petabytes, making it increasingly difficult to accumulate and use it on a single file server. For this reason, an extendable, shared file system function has been implemented that makes it possible to use distributed data resources as one large virtual file system, exploiting the advantages of grids. The data grid components provided by the NAREGI middleware not only support advanced, high-speed computation and simulation but also link widely distributed computing resources to enable large-scale computation and the storage and use of large amounts of data.
The data grid comprises a data grid resource management system and a data grid access management system. The data grid resource management system realizes a shared file space and manages which data resources belong to that shared space. The data grid access management system provides file management functions to register, update and delete shared files. The following describes the respective systems.

2.5.1 Data Grid Resource Management System
The data grid resource management system (DGRMS) conforms to the GGF recommendations and integrally manages the allocation and usage of the extendable shared file space based on WSRF (Web Services Resource Framework). The main functions of this system are as follows:
- Information on the number of files and usage (capacity) per file system node, and the number of files and usage (size) per user, is displayed in tabular form according to categories such as total, particular user, particular node and particular directory (Figure 2.5-2).
- The file name, file size (min or max), file owner and update date can be specified as search conditions.

[Figure: Number of files displayed by node (Gfarm nodes such as xxx.naregi.org, yyy.naregi.org and zzz.naregi.org, shown in a web browser)]
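The per-node and per-user tabulation described above can be sketched like this (plain Python over a made-up file catalog; this is not the DGRMS interface):

```python
from collections import defaultdict

# Hypothetical shared-file catalog entries: (node, owner, size_in_bytes).
catalog = [
    ("xxx.naregi.org", "alice", 1024),
    ("xxx.naregi.org", "bob",   2048),
    ("yyy.naregi.org", "alice",  512),
]

def usage_by(catalog, key_index):
    """Tabulate file count and total size, grouped by node (0) or owner (1)."""
    table = defaultdict(lambda: [0, 0])
    for entry in catalog:
        row = table[entry[key_index]]
        row[0] += 1         # number of files
        row[1] += entry[2]  # usage (size)
    return dict(table)

print(usage_by(catalog, 0))  # per node
print(usage_by(catalog, 1))  # per user
```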

2.5.2 Data Grid Access Management System
The data grid access management system (DGAMS) performs file management by registering, updating and deleting shared files. The main functions of this system are as follows:
- A local host file can be imported into Gfarm.
- A list of file transfer jobs can be displayed, and transfer jobs can be deleted.
- As the proxy certificate required for data access, a certificate with the VO attribute may be used; glite saves VO information in the extended information part of the proxy certificate, so access can be made via the proxy certificate.
- File attribute information, such as a URL owner, can be passed to WFT.
- Files placed in Gfarm, GridFTP and SRM can be operated on (import, export, file/directory deletion, directory creation, etc.).
- File transfers can be suspended temporarily or restarted.
- The number of files and usage (size) per file system node, and the number of files and usage (capacity) per user, can be displayed.
- File division information and the storage destination can be displayed.
- Files on Gfarm can be annotated with comment information.
- The file name, file size (min or max), file owner, update date and comment can be specified as search conditions.

Figure: Data grid access management system top screen
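The search conditions listed above (name, size range, owner, comment) can be sketched as a simple metadata filter. This is plain Python over made-up records, not the DGAMS interface; the field names are illustrative.

```python
def search_files(files, name=None, min_size=None, max_size=None,
                 owner=None, comment=None):
    """Filter file records by DGAMS-style search conditions; None means 'any'."""
    result = []
    for f in files:
        if name is not None and name not in f["name"]:
            continue
        if min_size is not None and f["size"] < min_size:
            continue
        if max_size is not None and f["size"] > max_size:
            continue
        if owner is not None and f["owner"] != owner:
            continue
        if comment is not None and comment not in f.get("comment", ""):
            continue
        result.append(f["name"])
    return result

files = [
    {"name": "run1.dat", "size": 100, "owner": "alice", "comment": "test run"},
    {"name": "run2.dat", "size": 900, "owner": "bob",   "comment": "production"},
]
print(search_files(files, min_size=500))   # ['run2.dat']
print(search_files(files, owner="alice"))  # ['run1.dat']
```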

2.6 Resource Management
The NAREGI middleware resource management function integrally manipulates widely distributed computing resources in a grid environment in a consistent manner and comprises three components: SS, GridVM and IS. SS consists of the job brokering function and the job flow engine and performs resource and job management. GridVM absorbs differences among local schedulers so that users can submit jobs without being conscious of differences among job schedulers. IS comprehensively manages information concerning computing resources, networks, software, accounting and other aspects of the grid environment. These three components collaborate to realize not only the general job submission function but also a co-allocation function for computing resources with different computer architectures and a bulk job submission function for parameter survey jobs. The following describes the three components in more detail.

2.6.1 Resource Information Collection and Storage
The NAREGI middleware provides a dynamic computing environment for information resources spanning multiple computer centers and a wide variety of VOs in a grid environment. It ensures that the information required for management under each VO's policy, that is, resource usage records such as time, place, person concerned, job type and job execution count, is retrieved, collected, registered and notified. Information is collected hierarchically and stored and managed according to the NAREGI Schema, which is based on CIM (Common Information Model) (Figure 2.6-1). In addition, a function to display the collected resource information on a GUI is provided to support configuration management, user management and resource usage recording (accounting information) by operation administrators (see "Information Display Function").
[Figure: Diagram of hierarchical information collection. Site administrators and VO administrators search, set and are notified of resource information; area, site and VO information is aggregated from cell domains managed locally at each computer center.]

2.6.2 Computing Resource Search and Reservation
The NAREGI middleware provides functions for confirming resource restrictions and consistency with the VO policy and for finding, selecting, reserving and submitting jobs to available resources. Multiple computing resources can be reserved concurrently. The resource information collection and storage function shown in the above figure is used to find available computing resources. In addition, this function mediates job requests (job submission, job status acquisition and job deletion) and provides job management and reservation services.

2.6.3 Job Reservation and Management
A temporary reservation is made for a reserved job based on the starting time, duration and number of nodes. If the reservation is accepted successfully, it is established as a normal reservation. No resources are reserved for non-reserved jobs; they are submitted directly to the local job scheduler. Once a resource reservation is established for a reserved job, the job is started at the specified time and is allowed to use the required resources for the specified duration without interference from subsequent reserved or non-reserved jobs. Meanwhile, non-reserved jobs are scheduled to run using resources and time slots left free by established reservations. When a reserved job is submitted before a non-reserved job is scheduled (that is, before its resources and starting time are established), the reserved job takes precedence. If a reserved or non-reserved job attempts to run beyond the specified duration, it is forcibly terminated. For details of the job types supported by the NAREGI middleware, refer to "Job Types".

2.6.4 Access Control
The NAREGI middleware resource management system provides access control functions to protect local resources against illegal resource accesses.

(1) File access protection
UNIX-based systems handle resources as files, so resource protection is implemented through file access control. Because file access takes place via system calls executed by a job, the system calls are monitored and controlled based on a policy. The system administrator specifies a person (subject), a resource (target), an access type (action) and an access permission (permit/deny) in an XML access protection policy. In this policy, the subject is a global user ID or VO name rather than a local account. The subject (or its attributes), the accessed file (and its directory), the access operation (action) and other conditions are evaluated to determine whether to allow or reject the execution of a trapped system call.

(2) Resource usage control
Based on resource usage conditions specified as a policy, jobs are controlled automatically to prevent excessive resource use. The site administrator defines the amount of resources available to users or VOs, in units of jobs, in the site policy. The policy file is an XML file in which a set of actors (subject), monitoring objects (metric) and controls is defined. The execution time, CPU time and disk usage can be specified as monitored items, and resource usage limits are defined in advance so that job control (job deletion) is applied when resource usage becomes excessive.

2.6.5 Job Monitoring
To manage computing resources and the jobs that run on them, job management information is provided to system administrators and users.

(1) Resource information provider
This function registers the resource information owned by each site in the NAREGI middleware resource management function. The system administrator of each site can autonomously designate, according to the site policy, which site resources are allocated to the grid environment.
Specifically, the system administrator need not allocate all the physical resources to the grid environment and can restrict the users and jobs that may use them.
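The access control policies described above (a subject, a target, an action and a permit/deny decision) can be sketched as a first-match rule evaluation. This is a minimal illustration in plain Python; the rule format is made up and is not the actual XML policy schema.

```python
def evaluate(policy, subject, target, action):
    """Return True (permit) or False (deny) for a trapped access.

    policy: ordered list of rules; the first matching rule wins.
    Each rule matches on subject (global user ID or VO name),
    target (file path prefix) and action (e.g. 'read', 'write').
    """
    for rule in policy:
        if (rule["subject"] in (subject, "*")
                and target.startswith(rule["target"])
                and rule["action"] in (action, "*")):
            return rule["permission"] == "permit"
    return False  # default deny

policy = [
    {"subject": "bio-vo", "target": "/data/bio/", "action": "read",  "permission": "permit"},
    {"subject": "*",      "target": "/data/",     "action": "write", "permission": "deny"},
]
print(evaluate(policy, "bio-vo", "/data/bio/genome.dat", "read"))   # True
print(evaluate(policy, "bio-vo", "/data/bio/genome.dat", "write"))  # False
```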

(2) Job tracking
Knowing which computing resources are used to run each job helps the system administrator or user locate a fault. This function is also a useful utility in application development when the peer job in inter-site communication is determined dynamically after brokering. The group of computing resources determined at job execution is registered in the NAREGI middleware resource management function; the registered information can be referenced from an operation management tool or an application, using the job ID as a key.

(3) Accounting information report
After a job ends, this function registers the resources consumed by the job in the information service. Accounting information is registered according to the Usage Record (UR) and Resource Usage Service (RUS) specifications standardized by OGF (Open Grid Forum). UR is an XML schema that defines the registration information, and the defined registration data is registered as XML data.

2.6.6 Information Display Function
The resource management function can display the respective activities of the NAREGI middleware via a GUI. The following describes the main information display functions in more detail.

(1) Entrance screen
When the Grid Tools startup screen of NAREGI Portal is selected, this screen appears. It displays the identification name, VO name and remaining validity time (in seconds) acquired from the user's proxy certificate. Clicking the Next button switches to the main screen.

Figure: Entrance screen

(2) Main screen
The upper part of the main screen displays the contents inherited from the entrance screen, the URL of the distributed resource information service to be connected for searches, and the selected cell domain name. The pull-down menu displays a list of selectable functions, while the lower part of the screen displays a tree of the cell domains accessible (searchable) from the connected distributed resource information service.

Figure: Main screen

The following functions are selectable on the main screen:
- CPULoad Graph function
- UsageRecordUser function
- JobMonitorUser function
- Job execution result summary screen
- VomsQueueUser function
- Resource function
- LogBrowser function
- UsageRecordAdmin function
- JobMonitorAdmin function
- Interoperation common attribute display screen
- Service activity list screen

The following gives a brief description of each function or screen. For details of each function and screen, refer to the User Guide, NAREGI Middleware IS (Distributed Information Service).

(3) CPULoad Graph function
This function displays the CPU activity of a specified target host. The rightmost position of the graph indicates the current state, and the graph represents time-series values up to that point; the vertical axis represents the CPU load as a percentage. The screen data is refreshed automatically at one-minute intervals.

Figure: CPULoadGraph function screen

(4) UsageRecordUser function
This function displays a summary of records on resources used by the user, as a table of global job IDs, job execution queue names and job start times only.

Figure: UsageRecordUser function screen

(5) JobMonitorUser function
This function displays summary information on workflows, job activity and resource reservations recorded in the cell domains under the connected node, as a list of global job IDs, user identification names, execution start times and states.

Figure: JobMonitorUser function screen

(6) Job execution result summary screen
This screen displays the total numbers of successfully executed jobs and error jobs over a certain period, per VO and resource provider or for the entire grid.

Figure: VomsQueueUser function screen (at summation in units of VO)

(7) VomsQueueUser function
This function displays information on the queues available to the user.

[Figure: VomsQueueUser function screen (queues on hosts such as xxx.naregi.org and yyy.naregi.org)]

(8) Resource function
This function selects a CIM class to search for and displays the instance information of the corresponding class in the target cell domain.

Figure: Resource function screen (detailed display)

(9) LogBrowser function
Log information search begins at the root node, Logs, and the range of target data is narrowed step by step down a hierarchical tree until the required log information is located.

Figure: LogBrowser function screen (detailed display)

(10) UsageRecordAdmin function
This function displays resource usage records for a target cell domain.

Figure: UsageRecordAdmin function screen (queue usage record display)

(11) JobMonitorAdmin function
This function displays a summary of the workflow information recorded in the cell domains under the connected node, such as job activity information and resource reservation information, as a list of global job IDs, user identification names and states.

Figure: JobMonitorAdmin function screen

(12) Interoperation common attribute display function
This function selects a target search class and displays the instance information of the corresponding class in the target cell domain.

Figure: Interoperation common attribute display screen (detailed display)

(13) Service activity list screen
This screen displays the status of the service activities recorded in the selected cell domain as a list of service names (ServiceName), service URIs (URI) and service activity statuses (Status). The listed services are displayed in different colors depending on their operating state.

Figure: Service activity status list screen


More information

Grid services. Enabling Grids for E-sciencE. Dusan Vudragovic Scientific Computing Laboratory Institute of Physics Belgrade, Serbia

Grid services. Enabling Grids for E-sciencE. Dusan Vudragovic Scientific Computing Laboratory Institute of Physics Belgrade, Serbia Grid services Dusan Vudragovic dusan@phy.bg.ac.yu Scientific Computing Laboratory Institute of Physics Belgrade, Serbia Sep. 19, 2008 www.eu-egee.org Set of basic Grid services Job submission/management

More information

Grids and Security. Ian Neilson Grid Deployment Group CERN. TF-CSIRT London 27 Jan

Grids and Security. Ian Neilson Grid Deployment Group CERN. TF-CSIRT London 27 Jan Grids and Security Ian Neilson Grid Deployment Group CERN TF-CSIRT London 27 Jan 2004-1 TOC Background Grids Grid Projects Some Technical Aspects The three or four A s Some Operational Aspects Security

More information

Grid Scheduling Architectures with Globus

Grid Scheduling Architectures with Globus Grid Scheduling Architectures with Workshop on Scheduling WS 07 Cetraro, Italy July 28, 2007 Ignacio Martin Llorente Distributed Systems Architecture Group Universidad Complutense de Madrid 1/38 Contents

More information

EGEE and Interoperation

EGEE and Interoperation EGEE and Interoperation Laurence Field CERN-IT-GD ISGC 2008 www.eu-egee.org EGEE and glite are registered trademarks Overview The grid problem definition GLite and EGEE The interoperability problem The

More information

Public. Atos Trustcenter. Server Certificates + Codesigning Certificates. Version 1.2

Public. Atos Trustcenter. Server Certificates + Codesigning Certificates. Version 1.2 Atos Trustcenter Server Certificates + Codesigning Certificates Version 1.2 20.11.2015 Content 1 Introduction... 3 2 The Atos Trustcenter Portfolio... 3 3 TrustedRoot PKI... 4 3.1 TrustedRoot Hierarchy...

More information

The University of Oxford campus grid, expansion and integrating new partners. Dr. David Wallom Technical Manager

The University of Oxford campus grid, expansion and integrating new partners. Dr. David Wallom Technical Manager The University of Oxford campus grid, expansion and integrating new partners Dr. David Wallom Technical Manager Outline Overview of OxGrid Self designed components Users Resources, adding new local or

More information

This help covers the ordering, download and installation procedure for Odette Digital Certificates.

This help covers the ordering, download and installation procedure for Odette Digital Certificates. This help covers the ordering, download and installation procedure for Odette Digital Certificates. Answers to Frequently Asked Questions are available online CONTENTS Preparation for Ordering an Odette

More information

Credential Management in the Grid Security Infrastructure. GlobusWorld Security Workshop January 16, 2003

Credential Management in the Grid Security Infrastructure. GlobusWorld Security Workshop January 16, 2003 Credential Management in the Grid Security Infrastructure GlobusWorld Security Workshop January 16, 2003 Jim Basney jbasney@ncsa.uiuc.edu http://www.ncsa.uiuc.edu/~jbasney/ Credential Management Enrollment:

More information

SSH Communications Tectia SSH

SSH Communications Tectia SSH Secured by RSA Implementation Guide for 3rd Party PKI Applications Last Modified: December 8, 2014 Partner Information Product Information Partner Name Web Site Product Name Version & Platform Product

More information

Security Digital Certificate Manager

Security Digital Certificate Manager System i Security Digital Certificate Manager Version 6 Release 1 System i Security Digital Certificate Manager Version 6 Release 1 Note Before using this information and the product it supports, be sure

More information

Systemwalker Service Quality Coordinator. Technical Guide. Windows/Solaris/Linux

Systemwalker Service Quality Coordinator. Technical Guide. Windows/Solaris/Linux Systemwalker Service Quality Coordinator Technical Guide Windows/Solaris/Linux J2X1-6800-02ENZ0(00) November 2010 Preface Purpose of this manual This manual explains the functions and usage of Systemwalker

More information

Configure the IM and Presence Service to Integrate with the Microsoft Exchange Server

Configure the IM and Presence Service to Integrate with the Microsoft Exchange Server Configure the IM and Presence Service to Integrate with the Microsoft Exchange Server Configure a Presence Gateway for Microsoft Exchange Integration, page 1 SAN and Wildcard Certificate Support, page

More information

A VO-friendly, Community-based Authorization Framework

A VO-friendly, Community-based Authorization Framework A VO-friendly, Community-based Authorization Framework Part 1: Use Cases, Requirements, and Approach Ray Plante and Bruce Loftis NCSA Version 0.1 (February 11, 2005) Abstract The era of massive surveys

More information

SLCS and VASH Service Interoperability of Shibboleth and glite

SLCS and VASH Service Interoperability of Shibboleth and glite SLCS and VASH Service Interoperability of Shibboleth and glite Christoph Witzig, SWITCH (witzig@switch.ch) www.eu-egee.org NREN Grid Workshop Nov 30th, 2007 - Malaga EGEE and glite are registered trademarks

More information

Certification Practice Statement of the Federal Reserve Banks Services Public Key Infrastructure

Certification Practice Statement of the Federal Reserve Banks Services Public Key Infrastructure Certification Practice Statement of the Federal Reserve Banks Services Public Key Infrastructure 1.0 INTRODUCTION 1.1 Overview The Federal Reserve Banks operate a public key infrastructure (PKI) that manages

More information

SSL Certificates Certificate Policy (CP)

SSL Certificates Certificate Policy (CP) SSL Certificates Last Revision Date: February 26, 2015 Version 1.0 Revisions Version Date Description of changes Author s Name Draft 17 Jan 2011 Initial Release (Draft) Ivo Vitorino 1.0 26 Feb 2015 Full

More information

Setup Desktop Grids and Bridges. Tutorial. Robert Lovas, MTA SZTAKI

Setup Desktop Grids and Bridges. Tutorial. Robert Lovas, MTA SZTAKI Setup Desktop Grids and Bridges Tutorial Robert Lovas, MTA SZTAKI Outline of the SZDG installation process 1. Installing the base operating system 2. Basic configuration of the operating system 3. Installing

More information

Grid Computing with Voyager

Grid Computing with Voyager Grid Computing with Voyager By Saikumar Dubugunta Recursion Software, Inc. September 28, 2005 TABLE OF CONTENTS Introduction... 1 Using Voyager for Grid Computing... 2 Voyager Core Components... 3 Code

More information

DIRAC Distributed Secure Framework

DIRAC Distributed Secure Framework DIRAC Distributed Secure Framework A Casajus Universitat de Barcelona E-mail: adria@ecm.ub.es R Graciani Universitat de Barcelona E-mail: graciani@ecm.ub.es on behalf of the LHCb DIRAC Team Abstract. DIRAC,

More information

Federated XDMoD Requirements

Federated XDMoD Requirements Federated XDMoD Requirements Date Version Person Change 2016-04-08 1.0 draft XMS Team Initial version Summary Definitions Assumptions Data Collection Local XDMoD Installation Module Support Data Federation

More information

Technical Trust Policy

Technical Trust Policy Technical Trust Policy Version 1.2 Last Updated: May 20, 2016 Introduction Carequality creates a community of trusted exchange partners who rely on each organization s adherence to the terms of the Carequality

More information

Veeam Cloud Connect. Version 8.0. Administrator Guide

Veeam Cloud Connect. Version 8.0. Administrator Guide Veeam Cloud Connect Version 8.0 Administrator Guide June, 2015 2015 Veeam Software. All rights reserved. All trademarks are the property of their respective owners. No part of this publication may be reproduced,

More information

Odette CA Help File and User Manual

Odette CA Help File and User Manual How to Order and Install Odette Certificates For a German version of this file please follow this link. Odette CA Help File and User Manual 1 Release date 31.05.2016 Contents Preparation for Ordering an

More information

(1) Jisc (Company Registration Number ) whose registered office is at One Castlepark, Tower Hill, Bristol, BS2 0JA ( JISC ); and

(1) Jisc (Company Registration Number ) whose registered office is at One Castlepark, Tower Hill, Bristol, BS2 0JA ( JISC ); and SUB-LRA AGREEMENT BETWEEN: (1) Jisc (Company Registration Number 05747339) whose registered office is at One Castlepark, Tower Hill, Bristol, BS2 0JA ( JISC ); and (2) You, the Organisation using the Jisc

More information

Grid Security Infrastructure

Grid Security Infrastructure Grid Computing Competence Center Grid Security Infrastructure Riccardo Murri Grid Computing Competence Center, Organisch-Chemisches Institut, University of Zurich Oct. 12, 2011 Facets of security Authentication

More information

IBM. Security Digital Certificate Manager. IBM i 7.1

IBM. Security Digital Certificate Manager. IBM i 7.1 IBM IBM i Security Digital Certificate Manager 7.1 IBM IBM i Security Digital Certificate Manager 7.1 Note Before using this information and the product it supports, be sure to read the information in

More information

VSP16. Venafi Security Professional 16 Course 04 April 2016

VSP16. Venafi Security Professional 16 Course 04 April 2016 VSP16 Venafi Security Professional 16 Course 04 April 2016 VSP16 Prerequisites Course intended for: IT Professionals who interact with Digital Certificates Also appropriate for: Enterprise Security Officers

More information

CA GovernanceMinder. CA IdentityMinder Integration Guide

CA GovernanceMinder. CA IdentityMinder Integration Guide CA GovernanceMinder CA IdentityMinder Integration Guide 12.6.00 This Documentation, which includes embedded help systems and electronically distributed materials, (hereinafter referred to as the Documentation

More information

OGCE User Guide for OGCE Release 1

OGCE User Guide for OGCE Release 1 OGCE User Guide for OGCE Release 1 1 Publisher s Note Release 2 begins the migration to open standards portlets. The following has been published by the Open Grids Computing Environment: OGCE Release 2

More information

Hardware Tokens in META Centre

Hardware Tokens in META Centre MWSG meeting, CERN, September 15, 2005 Hardware Tokens in META Centre Daniel Kouřil kouril@ics.muni.cz CESNET Project META Centre One of the basic activities of CESNET (Czech NREN operator); started in

More information

IBM Tivoli Directory Server

IBM Tivoli Directory Server Build a powerful, security-rich data foundation for enterprise identity management IBM Tivoli Directory Server Highlights Support hundreds of millions of entries by leveraging advanced reliability and

More information

Gergely Sipos MTA SZTAKI

Gergely Sipos MTA SZTAKI Application development on EGEE with P-GRADE Portal Gergely Sipos MTA SZTAKI sipos@sztaki.hu EGEE Training and Induction EGEE Application Porting Support www.lpds.sztaki.hu/gasuc www.portal.p-grade.hu

More information

IBM SecureWay On-Demand Server Version 2.0

IBM SecureWay On-Demand Server Version 2.0 Securely delivering personalized Web applications IBM On-Demand Server Version 2.0 Highlights Delivers personalized Web solutions on demand to anyone, anywhere using profile serving Provides industry-leading,

More information

Configuring Certificate Authorities and Digital Certificates

Configuring Certificate Authorities and Digital Certificates CHAPTER 43 Configuring Certificate Authorities and Digital Certificates Public Key Infrastructure (PKI) support provides the means for the Cisco MDS 9000 Family switches to obtain and use digital certificates

More information

Structure and Overview of Manuals

Structure and Overview of Manuals FUJITSU Software Systemwalker Operation Manager Structure and Overview of Manuals UNIX/Windows(R) J2X1-6900-08ENZ0(00) May 2015 Introduction Purpose of This Document Please ensure that you read this document

More information

Inca as Monitoring. Kavin Kumar Palanisamy Indiana University Bloomington

Inca as Monitoring. Kavin Kumar Palanisamy Indiana University Bloomington Inca as Monitoring Kavin Kumar Palanisamy Indiana University Bloomington Abstract Grids are built with multiple complex and interdependent systems to provide better resources. It is necessary that the

More information

VSP18 Venafi Security Professional

VSP18 Venafi Security Professional VSP18 Venafi Security Professional 13 April 2018 2018 Venafi. All Rights Reserved. 1 VSP18 Prerequisites Course intended for: IT Professionals who interact with Digital Certificates Also appropriate for:

More information

Entrust. Discovery 2.4. Administration Guide. Document issue: 3.0. Date of issue: June 2014

Entrust. Discovery 2.4. Administration Guide. Document issue: 3.0. Date of issue: June 2014 Entrust Discovery 2.4 Administration Guide Document issue: 3.0 Date of issue: June 2014 Copyright 2010-2014 Entrust. All rights reserved. Entrust is a trademark or a registered trademark of Entrust, Inc.

More information

Interconnect EGEE and CNGRID e-infrastructures

Interconnect EGEE and CNGRID e-infrastructures Interconnect EGEE and CNGRID e-infrastructures Giuseppe Andronico Interoperability and Interoperation between Europe, India and Asia Workshop Barcelona - Spain, June 2 2007 FP6 2004 Infrastructures 6-SSA-026634

More information

Send documentation comments to

Send documentation comments to CHAPTER 6 Configuring Certificate Authorities and Digital Certificates This chapter includes the following topics: Information About Certificate Authorities and Digital Certificates, page 6-1 Default Settings,

More information

Systemwalker Software Configuration Manager. Technical Guide. Windows/Linux

Systemwalker Software Configuration Manager. Technical Guide. Windows/Linux Systemwalker Software Configuration Manager Technical Guide Windows/Linux B1X1-0126-04ENZ0(00) January 2013 Preface Purpose of this Document This document explains the functions of Systemwalker Software

More information

Systemwalker Service Quality Coordinator. Technical Guide. Windows/Solaris/Linux

Systemwalker Service Quality Coordinator. Technical Guide. Windows/Solaris/Linux Systemwalker Service Quality Coordinator Technical Guide Windows/Solaris/Linux J2X1-6800-03ENZ0(00) May 2011 Preface Purpose of this manual This manual explains the functions and usage of Systemwalker

More information

Interfacing Operational Grid Security to Site Security. Eileen Berman Fermi National Accelerator Laboratory

Interfacing Operational Grid Security to Site Security. Eileen Berman Fermi National Accelerator Laboratory Interfacing Operational Grid Security to Site Security Eileen Berman Fermi National Accelerator Laboratory Introduction Computing systems at Fermilab belong to one of two large enclaves The General Computing

More information

Juliusz Pukacki OGF25 - Grid technologies in e-health Catania, 2-6 March 2009

Juliusz Pukacki OGF25 - Grid technologies in e-health Catania, 2-6 March 2009 Grid Technologies for Cancer Research in the ACGT Project Juliusz Pukacki (pukacki@man.poznan.pl) OGF25 - Grid technologies in e-health Catania, 2-6 March 2009 Outline ACGT project ACGT architecture Layers

More information

SAML-Based SSO Configuration

SAML-Based SSO Configuration Prerequisites, page 1 SAML SSO Configuration Task Flow, page 5 Reconfigure OpenAM SSO to SAML SSO Following an Upgrade, page 9 SAML SSO Deployment Interactions and Restrictions, page 9 Prerequisites NTP

More information

Configuring SSL CHAPTER

Configuring SSL CHAPTER 7 CHAPTER This chapter describes the steps required to configure your ACE appliance as a virtual Secure Sockets Layer (SSL) server for SSL initiation or termination. The topics included in this section

More information

DAP Controller FCO

DAP Controller FCO Release Note DAP Controller 6.61.0790 System : Business Mobility IP DECT Date : 20 December 2017 Category : General Release Product Identity : DAP Controller 6.61.0790 Queries concerning this document

More information

Blue Coat ProxySG First Steps Solution for Controlling HTTPS SGOS 6.7

Blue Coat ProxySG First Steps Solution for Controlling HTTPS SGOS 6.7 Blue Coat ProxySG First Steps Solution for Controlling HTTPS SGOS 6.7 Legal Notice Copyright 2018 Symantec Corp. All rights reserved. Symantec, the Symantec Logo, the Checkmark Logo, Blue Coat, and the

More information

Apple Inc. Certification Authority Certification Practice Statement Worldwide Developer Relations

Apple Inc. Certification Authority Certification Practice Statement Worldwide Developer Relations Apple Inc. Certification Authority Certification Practice Statement Worldwide Developer Relations Version 1.18 Effective Date: August 16, 2017 Table of Contents 1. Introduction... 5 1.1. Trademarks...

More information

Michigan Grid Research and Infrastructure Development (MGRID)

Michigan Grid Research and Infrastructure Development (MGRID) Michigan Grid Research and Infrastructure Development (MGRID) Abhijit Bose MGRID and Dept. of Electrical Engineering and Computer Science The University of Michigan Ann Arbor, MI 48109 abose@umich.edu

More information

Using the MyProxy Online Credential Repository

Using the MyProxy Online Credential Repository Using the MyProxy Online Credential Repository Jim Basney National Center for Supercomputing Applications University of Illinois jbasney@ncsa.uiuc.edu What is MyProxy? Independent Globus Toolkit add-on

More information

Deliverable D3.5 Harmonised e-authentication architecture in collaboration with STORK platform (M40) ATTPS. Achieving The Trust Paradigm Shift

Deliverable D3.5 Harmonised e-authentication architecture in collaboration with STORK platform (M40) ATTPS. Achieving The Trust Paradigm Shift Deliverable D3.5 Harmonised e-authentication architecture in collaboration with STORK platform (M40) Version 1.0 Author: Bharadwaj Pulugundla (Verizon) 25.10.2015 Table of content 1. Introduction... 3

More information

MasterScope Virtual DataCenter Automation Media v3.0

MasterScope Virtual DataCenter Automation Media v3.0 MasterScope Virtual DataCenter Automation Media v3.0 Release Memo 1st Edition June, 2016 NEC Corporation Disclaimer The copyrighted information noted in this document shall belong to NEC Corporation. Copying

More information

PageScope Box Operator Ver. 3.2 User s Guide

PageScope Box Operator Ver. 3.2 User s Guide PageScope Box Operator Ver. 3.2 User s Guide Box Operator Contents 1 Introduction 1.1 System requirements...1-1 1.2 Restrictions...1-1 2 Installing Box Operator 2.1 Installation procedure...2-1 To install

More information

Oracle Utilities Meter Data Management Integration to SAP for Meter Data Unification and Synchronization

Oracle Utilities Meter Data Management Integration to SAP for Meter Data Unification and Synchronization Oracle Utilities Meter Data Management Integration to SAP for Meter Data Unification and Synchronization Release 11.1 Media Pack Release Notes Oracle Utilities Meter Data Management v2.1.0.0 SAP for Meter

More information

NEC ExpressUpdate Functions and Features. September 20, 2012 Rev. 4.0

NEC ExpressUpdate Functions and Features. September 20, 2012 Rev. 4.0 September 20, 2012 Rev. 4.0 Table of Contents Table of Contents...- 2 - Table of Figures...- 5 - Trademarks...- 9 - Notes...- 9 - About this Document...- 9 - Symbols in this Document...- 9 - Terminology...-

More information

Documentation Roadmap for Cisco Prime LAN Management Solution 4.2

Documentation Roadmap for Cisco Prime LAN Management Solution 4.2 Documentation Roadmap for Cisco Prime LAN Thank you for purchasing Cisco Prime LAN Management Solution (LMS) 4.2. This document provides an introduction to the Cisco Prime LMS and lists the contents of

More information

Grid Computing Fall 2005 Lecture 5: Grid Architecture and Globus. Gabrielle Allen

Grid Computing Fall 2005 Lecture 5: Grid Architecture and Globus. Gabrielle Allen Grid Computing 7700 Fall 2005 Lecture 5: Grid Architecture and Globus Gabrielle Allen allen@bit.csc.lsu.edu http://www.cct.lsu.edu/~gallen Concrete Example I have a source file Main.F on machine A, an

More information

IT Security Evaluation and Certification Scheme Document

IT Security Evaluation and Certification Scheme Document IT Security Evaluation and Certification Scheme Document June 2015 CCS-01 Information-technology Promotion Agency, Japan (IPA) IT Security Evaluation and Certification Scheme (CCS-01) i / ii Table of Contents

More information

Apple Inc. Certification Authority Certification Practice Statement

Apple Inc. Certification Authority Certification Practice Statement Apple Inc. Certification Authority Certification Practice Statement Apple Application Integration Sub-CA Apple Application Integration 2 Sub-CA Apple Application Integration - G3 Sub-CA Version 6.2 Effective

More information

Genesys Security Deployment Guide. What You Need

Genesys Security Deployment Guide. What You Need Genesys Security Deployment Guide What You Need 12/27/2017 Contents 1 What You Need 1.1 TLS Certificates 1.2 Generating Certificates using OpenSSL and Genesys Security Pack 1.3 Generating Certificates

More information

UNIT IV PROGRAMMING MODEL. Open source grid middleware packages - Globus Toolkit (GT4) Architecture, Configuration - Usage of Globus

UNIT IV PROGRAMMING MODEL. Open source grid middleware packages - Globus Toolkit (GT4) Architecture, Configuration - Usage of Globus UNIT IV PROGRAMMING MODEL Open source grid middleware packages - Globus Toolkit (GT4) Architecture, Configuration - Usage of Globus Globus: One of the most influential Grid middleware projects is the Globus

More information

THE WIDE AREA GRID. Architecture

THE WIDE AREA GRID. Architecture THE WIDE AREA GRID Architecture Context The Wide Area Grid concept was discussed during several WGISS meetings The idea was to imagine and experiment an infrastructure that could be used by agencies to

More information

JP1/Integrated Management - Manager Configuration Guide

JP1/Integrated Management - Manager Configuration Guide JP1 Version 11 JP1/Integrated Management - Manager Configuration Guide 3021-3-A08-30(E) Notices Relevant program products For details about the supported OS versions, and about the OS service packs and

More information

McAfee Security Management Center

McAfee Security Management Center Data Sheet McAfee Security Management Center Unified management for next-generation devices Key advantages: Single pane of glass across the management lifecycle for McAfee next generation devices. Scalability

More information

FUJITSU Cloud Service S5. Introduction Guide. Ver. 1.3 FUJITSU AMERICA, INC.

FUJITSU Cloud Service S5. Introduction Guide. Ver. 1.3 FUJITSU AMERICA, INC. FUJITSU Cloud Service S5 Introduction Guide Ver. 1.3 FUJITSU AMERICA, INC. 1 FUJITSU Cloud Service S5 Introduction Guide Ver. 1.3 Date of publish: September, 2011 All Rights Reserved, Copyright FUJITSU

More information

VMware AirWatch Integration with OpenTrust CMS Mobile 2.0

VMware AirWatch Integration with OpenTrust CMS Mobile 2.0 VMware AirWatch Integration with OpenTrust CMS Mobile 2.0 For VMware AirWatch Have documentation feedback? Submit a Documentation Feedback support ticket using the Support Wizard on support.air-watch.com.

More information

Knowledge Discovery Services and Tools on Grids

Knowledge Discovery Services and Tools on Grids Knowledge Discovery Services and Tools on Grids DOMENICO TALIA DEIS University of Calabria ITALY talia@deis.unical.it Symposium ISMIS 2003, Maebashi City, Japan, Oct. 29, 2003 OUTLINE Introduction Grid

More information

Installing and Configuring vcloud Connector

Installing and Configuring vcloud Connector Installing and Configuring vcloud Connector vcloud Connector 2.6.0 This document supports the version of each product listed and supports all subsequent versions until the document is replaced by a new

More information

MasterScope Virtual DataCenter Automation Media v5.0

MasterScope Virtual DataCenter Automation Media v5.0 MasterScope Virtual DataCenter Automation Media v5.0 Release Memo 1st Edition April, 2018 NEC Corporation Disclaimer The copyrighted information noted in this document shall belong to NEC Corporation.

More information

Index Introduction Setting up an account Searching and accessing Download Advanced features

Index Introduction Setting up an account Searching and accessing Download Advanced features ESGF Earth System Grid Federation Tutorial Index Introduction Setting up an account Searching and accessing Download Advanced features Index Introduction IT Challenges of Climate Change Research ESGF Introduction

More information

Apple Inc. Certification Authority Certification Practice Statement

Apple Inc. Certification Authority Certification Practice Statement Apple Inc. Certification Authority Certification Practice Statement Apple Application Integration Sub-CA Apple Application Integration 2 Sub-CA Apple Application Integration - G3 Sub-CA Version 6.3 Effective

More information

The most common type of certificates are public key certificates. Such server has a certificate is a common shorthand for: there exists a certificate

The most common type of certificates are public key certificates. Such server has a certificate is a common shorthand for: there exists a certificate 1 2 The most common type of certificates are public key certificates. Such server has a certificate is a common shorthand for: there exists a certificate signed by some certification authority, which certifies

More information

Installing and Configuring VMware Identity Manager Connector (Windows) OCT 2018 VMware Identity Manager VMware Identity Manager 3.

Installing and Configuring VMware Identity Manager Connector (Windows) OCT 2018 VMware Identity Manager VMware Identity Manager 3. Installing and Configuring VMware Identity Manager Connector 2018.8.1.0 (Windows) OCT 2018 VMware Identity Manager VMware Identity Manager 3.3 You can find the most up-to-date technical documentation on

More information

How to build Scientific Gateways with Vine Toolkit and Liferay/GridSphere framework

How to build Scientific Gateways with Vine Toolkit and Liferay/GridSphere framework How to build Scientific Gateways with Vine Toolkit and Liferay/GridSphere framework Piotr Dziubecki, Piotr Grabowski, Michał Krysiński, Tomasz Kuczyński, Dawid Szejnfeld, Dominik Tarnawczyk, Gosia Wolniewicz

More information

Grid Computing. MCSN - N. Tonellotto - Distributed Enabling Platforms

Grid Computing. MCSN - N. Tonellotto - Distributed Enabling Platforms Grid Computing 1 Resource sharing Elements of Grid Computing - Computers, data, storage, sensors, networks, - Sharing always conditional: issues of trust, policy, negotiation, payment, Coordinated problem

More information

Introduction. Overview of HCM. HCM Dashboard CHAPTER

Introduction. Overview of HCM. HCM Dashboard CHAPTER CHAPTER 1 This chapter describes the Hosted Collaboration Mediation (HCM) software. It includes: Overview of HCM, page 1-1 Terminology Used in HCM, page 1-2 HCM Dashboard Architecture, page 1-3 Starting

More information

Contents Overview... 5 Upgrading Primavera Gateway... 7 Using Gateway Configuration Utilities... 9

Contents Overview... 5 Upgrading Primavera Gateway... 7 Using Gateway Configuration Utilities... 9 Gateway Upgrade Guide for On-Premises Version 17 August 2017 Contents Overview... 5 Downloading Primavera Gateway... 5 Upgrading Primavera Gateway... 7 Prerequisites... 7 Upgrading Existing Gateway Database...

More information

AirWatch Mobile Device Management

AirWatch Mobile Device Management RSA Ready Implementation Guide for 3rd Party PKI Applications Last Modified: November 26 th, 2014 Partner Information Product Information Partner Name Web Site Product Name Version & Platform Product Description

More information

VMware AirWatch Integration with RSA PKI Guide

VMware AirWatch Integration with RSA PKI Guide VMware AirWatch Integration with RSA PKI Guide For VMware AirWatch Have documentation feedback? Submit a Documentation Feedback support ticket using the Support Wizard on support.air-watch.com. This product

More information

ShibVomGSite: A Framework for Providing Username and Password Support to GridSite with Attribute based Authorization using Shibboleth and VOMS

ShibVomGSite: A Framework for Providing Username and Password Support to GridSite with Attribute based Authorization using Shibboleth and VOMS ShibVomGSite: A Framework for Providing Username and Password Support to GridSite with Attribute based Authorization using Shibboleth and VOMS Joseph Olufemi Dada & Andrew McNab School of Physics and Astronomy,

More information

Boundary control : Access Controls: An access control mechanism processes users request for resources in three steps: Identification:

Boundary control : Access Controls: An access control mechanism processes users request for resources in three steps: Identification: Application control : Boundary control : Access Controls: These controls restrict use of computer system resources to authorized users, limit the actions authorized users can taker with these resources,

More information
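The last entry above describes the three-step access-control flow. Assuming the standard breakdown of those steps (identification, then authentication, then authorization — the remaining two steps are inferred from common usage, not stated in the snippet), a minimal self-contained Python sketch with hypothetical names might look like this; the plain SHA-256 credential check is for illustration only, not a production password scheme:

```python
import hashlib


class AccessController:
    """Toy access-control mechanism illustrating the three-step flow."""

    def __init__(self):
        self._users = {}   # username -> credential hash (identity store)
        self._grants = {}  # username -> set of permitted resources

    def register(self, user, password, resources):
        # Illustration only: real systems use salted, slow password hashing.
        self._users[user] = hashlib.sha256(password.encode()).hexdigest()
        self._grants[user] = set(resources)

    def request(self, user, password, resource):
        # Step 1 - Identification: is the claimed identity known at all?
        if user not in self._users:
            return "unknown user"
        # Step 2 - Authentication: does the supplied credential match?
        if hashlib.sha256(password.encode()).hexdigest() != self._users[user]:
            return "authentication failed"
        # Step 3 - Authorization: may this user act on this resource?
        if resource not in self._grants[user]:
            return "not authorized"
        return "granted"
```

Each request passes through the three checks in order, so a failure at an earlier step short-circuits the later ones.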