Installing OSG in a VirtualBox Machine


SPRACE-Brazil December 10, 2008

VirtualBox

Using Sun xVM VirtualBox version 2.0.2. The guest operating system was installed from CentOS-5.2-x86_64-bin-DVD.iso into a 12 GB image called testserver1.

To work only from your terminal over ssh, forward the guest sshd port:

# set the guest port (port 22 for sshd)
VBoxManage setextradata "testserver1" \
    "VBoxInternal/Devices/pcnet/0/LUN#0/Config/sshd/GuestPort" 22
# set the host port (the port where the VirtualBox process listens on behalf of the VM)
VBoxManage setextradata "testserver1" \
    "VBoxInternal/Devices/pcnet/0/LUN#0/Config/sshd/HostPort" 2222
# set the protocol
VBoxManage setextradata "testserver1" \
    "VBoxInternal/Devices/pcnet/0/LUN#0/Config/sshd/Protocol" TCP

Adding a secondary network interface:

VBoxManage modifyvm "testserver1" -nic2 intnet
VBoxManage modifyvm "testserver1" -intnet2 intnet

You can boot and access your virtual machine in text mode with:

VBoxVRDP -startvm "testserver1" &
ssh -p 2222 root@localhost

Network on VirtualBox

Some words about networking; here is what was customized:

[root@testserver1 ~]# more /etc/hosts
127.0.0.1       localhost.localdomain localhost
::1             localhost6.localdomain6 localhost6
192.168.1.152   testserver1.sprace.org.br

[root@testserver1 ~]# more /etc/sysconfig/network
NETWORKING=yes
NETWORKING_IPV6=no
HOSTNAME=testserver1.sprace.org.br

The secondary network card:

[root@testserver1 ~]# more /etc/sysconfig/network-scripts/ifcfg-eth1
DEVICE=eth1
ONBOOT=yes
BOOTPROTO=none
NETMASK=255.255.255.0
IPADDR=192.168.1.152
GATEWAY=10.0.2.2
TYPE=Ethernet
IPV6INIT=no
PEERDNS=yes
USERCTL=no
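As a quick sanity check, the static-IP settings above can be verified programmatically. A minimal sketch, assuming the file path and address used on testserver1; the helper name is ours, not part of any OSG tooling:

```shell
# check_ifcfg: confirm an ifcfg file carries the expected static-IP settings.
# Illustrative helper only.
check_ifcfg() {
    local file=$1 ip=$2
    grep -q "^IPADDR=$ip\$" "$file" \
        && grep -q '^ONBOOT=yes$' "$file" \
        && grep -q '^BOOTPROTO=none$' "$file"
}

# Example: check_ifcfg /etc/sysconfig/network-scripts/ifcfg-eth1 192.168.1.152
```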

Pacman and Condor

Pacman: you will need it to download and install all VDT components.

[root@testserver1 ~]# cd /opt/
[root@testserver1 opt]# wget \
    http://physics.bu.edu/pacman/sample_cache/tarballs/pacman-3.26.tar.gz
[root@testserver1 opt]# tar --no-same-owner -xzvf pacman-3.26.tar.gz
[root@testserver1 opt]# cd pacman-3.26
[root@testserver1 pacman-3.26]# source setup.sh

Condor: we installed it detached from the VDT in order to be able to upgrade it independently.

[root@testserver1 pacman-3.26]# cd /opt
[root@testserver1 opt]# mkdir condor
[root@testserver1 opt]# cd /tmp/
[root@testserver1 tmp]# tar -xvzf condor-7.0.5-linux-x86_64-rhel5-dynamic.tar.gz
[root@testserver1 tmp]# cd condor-7.0.5
[root@testserver1 condor-7.0.5]# groupadd condor
[root@testserver1 condor-7.0.5]# adduser condor -g condor -d /home/condor
[root@testserver1 condor-7.0.5]# ./condor_configure --install --maybe-daemon-owner \
    --make-personal-condor --install-log /opt/condor/post_install --install-dir /opt/condor/

Condor Configuration

Some work on the Condor configuration:

[root@testserver1 condor-7.0.5]# vim /opt/condor/etc/condor_config
RELEASE_DIR = /opt/condor
LOCAL_DIR = $(RELEASE_DIR)/hosts/$(HOSTNAME)
LOCAL_CONFIG_FILE = $(LOCAL_DIR)/condor_config.local
CONDOR_HOST = 192.168.1.152
FILESYSTEM_DOMAIN = grid
COLLECTOR_NAME = GRIDUNESP
HOSTALLOW_WRITE = *.sprace.org.br *.grid

[root@testserver1 condor-7.0.5]# cd /opt/condor/
[root@testserver1 condor]# mkdir hosts
[root@testserver1 condor]# mkdir hosts/`hostname -s`
[root@testserver1 condor]# mkdir hosts/testserver1/{log,execute,spool}
[root@testserver1 condor]# vim hosts/testserver1/condor_config.local
NETWORK_INTERFACE=192.168.1.152
DAEMON_LIST = MASTER, STARTD, SCHEDD, COLLECTOR, NEGOTIATOR
[root@testserver1 condor]# chown condor: hosts/testserver1/*

On each node repeat the last five steps, but put only the following line in its condor_config.local:

NETWORK_INTERFACE=192.168.1.XX

where 192.168.1.XX is the node's internal IP address.
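The per-node repetition can be scripted. A minimal sketch, assuming the /opt/condor layout from the slides; the helper name and the node name/IP pairs you pass are hypothetical and site-specific:

```shell
# setup_node_dirs: create hosts/<node>/{log,execute,spool} plus a one-line
# condor_config.local for each "name:ip" argument. Sketch, not an OSG tool.
setup_node_dirs() {
    local condor_dir=$1; shift
    local node name ip
    for node in "$@"; do
        name=${node%%:*}
        ip=${node##*:}
        mkdir -p "$condor_dir/hosts/$name/log" \
                 "$condor_dir/hosts/$name/execute" \
                 "$condor_dir/hosts/$name/spool"
        echo "NETWORK_INTERFACE=$ip" > "$condor_dir/hosts/$name/condor_config.local"
    done
}

# Example (hypothetical nodes):
# setup_node_dirs /opt/condor node01:192.168.1.11 node02:192.168.1.12
```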

Condor Initialization

Now prepare the initialization service using this script:

[root@testserver1 condor]# vim /etc/init.d/condor
#!/bin/sh
# chkconfig: 345 99 99
# description: Condor batch system
### BEGIN INIT INFO
# Provides: condor
# Required-Start: $network
# Required-Stop:
# Default-Start: 3 4 5
# Default-Stop: 1 2 6
# Description: Condor batch system
### END INIT INFO

# Determine if we're superuser
case `id` in
    "uid=0("* ) vdt_is_superuser=y ;;
    * )         vdt_is_superuser=n ;;
esac

CONDOR_SBIN=/opt/condor/sbin
MASTER=$CONDOR_SBIN/condor_master
CONDOR_OFF=$CONDOR_SBIN/condor_off
PS="/bin/ps auwx"

case $1 in
    start )
        if [ -x $MASTER ]; then
            echo "Starting up Condor"
            $MASTER

Condor Initialization

        else
            echo "$MASTER is not executable. Skipping Condor startup."
            exit 1
        fi
        ;;
    stop )
        pid=`$PS | grep $MASTER | grep -v grep | awk '{print $2}'`
        if [ -n "$pid" ]; then
            echo "Shutting down Condor"
            $CONDOR_OFF -master
        else
            echo "Condor not running"
        fi
        ;;
    * )
        echo "Usage: condor {start|stop}"
        ;;
esac

Then add it to your initialization sequence:

[root@testserver1 condor]# chmod a+x /etc/init.d/condor; /sbin/chkconfig --add condor
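The stop clause finds the condor_master PID with a classic ps/grep/awk pipeline. Isolated as a standalone helper (the function name is ours; on modern systems pgrep -f does the same job) it looks like this:

```shell
# find_pids: print the PIDs of processes whose ps line matches a pattern,
# excluding the grep process itself -- the same idiom the init script uses.
find_pids() {
    /bin/ps auwx | grep "$1" | grep -v grep | awk '{print $2}'
}
```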

Condor Initialization

Our test server is a standalone machine, so it will authenticate by itself using GUMS [3]. We will also install the OSG-CE core packages:

[root@testserver1 condor]# cd /shared
[root@testserver1 shared]# mkdir osg-1.0.0
[root@testserver1 shared]# cd /shared/osg-1.0.0
[root@testserver1 osg-1.0.0]# export VDTSETUP_CONDOR_LOCATION=/opt/condor/
[root@testserver1 osg-1.0.0]# pacman -get OSG:ce
[root@testserver1 osg-1.0.0]# source setup.sh
[root@testserver1 osg-1.0.0]# pacman -get OSG:ManagedFork
[root@testserver1 osg-1.0.0]# $VDT_LOCATION/vdt/setup/configure_globus_gatekeeper \
    --managed-fork n --server y
[root@testserver1 osg-1.0.0]# pacman -get OSG:gums

Our compute element will also handle managed-fork jobs!

Obtain Certificates for our Machines

In order to request a certificate you need a .globus directory in your home directory. Using the new procedure you can request it automatically with cert-gridadmin [1, 2] (please read [1] first):

[root@testserver1 osg-1.0.0]# su - mdias
[mdias@testserver1 mdias]$ mkdir .globus
[mdias@testserver1 mdias]$ cd .globus
[mdias@testserver1 .globus]$ scp shell.ift.unesp.br:/users/mdias/user*.pem .
[mdias@testserver1 .globus]$ source /shared/osg-1.0.0/setup.sh
[mdias@testserver1 .globus]$ su - mdias
[mdias@testserver1 mdias]$ cert-gridadmin -host testserver1.sprace.org.br \
    -prefix testserver1 -ca doegrids -affiliation osg -vo dosar -show \
    -email mdias@if.unesp.br
[mdias@testserver1 mdias]$ exit
[root@testserver1 ~]# mv /home/mdias/testserver1cert.pem /etc/grid-security/hostcert.pem
[root@testserver1 ~]# mv /home/mdias/testserver1key.pem /etc/grid-security/hostkey.pem
[root@testserver1 ~]# chown root: /etc/grid-security/host*
[root@testserver1 ~]# chmod 400 /etc/grid-security/hostkey.pem
[root@testserver1 ~]# chmod 444 /etc/grid-security/hostcert.pem
[root@testserver1 ~]# mkdir /etc/grid-security/http
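Once the host credentials are in place, it is worth sanity-checking them. A small sketch using standard openssl and stat calls; the function name is ours and the paths default to the ones used above:

```shell
# check_creds: print a certificate's subject and validity window, then the
# permissions/ownership of the cert/key pair. Sketch only, not an OSG tool.
check_creds() {
    local cert=${1:-/etc/grid-security/hostcert.pem}
    local key=${2:-/etc/grid-security/hostkey.pem}
    openssl x509 -in "$cert" -noout -subject -dates || return 1
    stat -c '%a %U %n' "$cert" "$key"   # expect 444/400 and owner root
}
```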

Obtain Certificates for Services

Requesting the Apache certificate. The procedure is almost the same as for a machine certificate:

[mdias@testserver1 ~]$ cert-gridadmin -host testserver1.sprace.org.br -service http \
    -prefix testserver1 -ca doegrids -affiliation osg -vo dosar -show \
    -email mdias@ift.unesp.br
[mdias@testserver1 ~]$ su -
[root@testserver1 mdias]# mkdir /etc/grid-security/http
[root@testserver1 mdias]# mv testserver1cert.pem /etc/grid-security/http/httpcert.pem
[root@testserver1 mdias]# mv testserver1key.pem /etc/grid-security/http/httpkey.pem
[root@testserver1 mdias]# chmod 444 /etc/grid-security/http/httpcert.pem
[root@testserver1 mdias]# chmod 400 /etc/grid-security/http/httpkey.pem
[root@testserver1 mdias]# chown -R daemon.daemon /etc/grid-security/http

Install CA Certificates

In OSG 1.0.0 this package is not installed by default:

[root@testserver1 mdias]# cd /shared/osg-1.0.0/
[root@testserver1 osg-1.0.0]# vim $VDT_LOCATION/vdt/etc/vdt-update-certs.conf
# Uncomment the following line to get the CAs from the OSG
cacerts_url = http://software.grid.iu.edu/pacman/cadist/ca-certs-version
[root@testserver1 osg-1.0.0]# . $VDT_LOCATION/vdt-questions.sh
[root@testserver1 osg-1.0.0]# $VDT_LOCATION/vdt/sbin/vdt-setup-ca-certificates
[root@testserver1 osg-1.0.0]# vdt-control --enable vdt-update-certs
[root@testserver1 osg-1.0.0]# vdt-control --on vdt-update-certs

We also create a symbolic link to our certificates:

[root@testserver1 osg-1.0.0]# cd /etc/grid-security/
[root@testserver1 grid-security]# ln -s $VDT_LOCATION/globus/TRUSTED_CA certificates

GUMS Server Configuration

We are ready to configure our GUMS server now:

[root@testserver1 osg-1.0.0]# $VDT_LOCATION/post-install/mysql start
[root@testserver1 osg-1.0.0]# $VDT_LOCATION/post-install/apache start
[root@testserver1 osg-1.0.0]# $VDT_LOCATION/post-install/tomcat-55 start
[root@testserver1 osg-1.0.0]# $VDT_LOCATION/tomcat/v55/webapps/gums/WEB-INF/scripts/addMySQLAdmin \
    "/DC=org/DC=doegrids/OU=People/CN= 280904"
[root@testserver1 osg-1.0.0]# cd $VDT_LOCATION/tomcat/v55/webapps/gums/WEB-INF/scripts
[root@testserver1 scripts]# ./gums-create-config --osg-template

The last step downloads the template used in OSG. Now point your browser to https://localhost:8443/gums

Remark: you can do this task from your host operating system if you forward the port as follows:

VBoxManage setextradata "testserver1" \
    "VBoxInternal/Devices/pcnet/0/LUN#0/Config/osg/GuestPort" 8443
VBoxManage setextradata "testserver1" \
    "VBoxInternal/Devices/pcnet/0/LUN#0/Config/osg/HostPort" 8443
VBoxManage setextradata "testserver1" \
    "VBoxInternal/Devices/pcnet/0/LUN#0/Config/osg/Protocol" TCP

GUMS Server Configuration

Make sure that your certificate is loaded into your browser. Two remarks related to my particular account:

1. In order for my DN to be mapped to a local account, it was necessary to edit this file:

[root@testserver1 osg-1.0.0]# vim \
    $VDT_LOCATION/tomcat/v55/webapps/gums/WEB-INF/config/gums.config

and change the VOMS server for CMS to this one:

baseurl=https://voms.cern.ch:8443/voms/cms/services/vomsadmin

2. CMS uses pool accounts: a DN is mapped to a fixed user. So you will need to create them first, in "Manage Pool Accounts".

GUMS Server Configuration

Go to "Update VO Members" and click on it. I only created the accounts necessary for my DN to be mapped, checking it with:

[root@testserver1 ~]# $VDT_LOCATION/gums/scripts/gums-host generategridmapfile \
    "/DC=org/DC=doegrids/OU=Services/CN=testserver1.sprace.org.br" | grep

Looking Deep into OSG Configuration

Post-install configuration: read your $VDT_LOCATION/post-install/README carefully. Let's start to follow it.

Globus-Base-WS-Essentials and Globus-Base-WSGRAM-Server:

[root@testserver1 scripts]# cd $VDT_LOCATION
[root@testserver1 osg-1.0.0]# cd /etc/grid-security/
[root@testserver1 grid-security]# cp hostkey.pem containerkey.pem
[root@testserver1 grid-security]# cp hostcert.pem containercert.pem
[root@testserver1 grid-security]# chown daemon: containerkey.pem containercert.pem
[root@testserver1 grid-security]# su -
[root@testserver1 ~]# visudo
Runas_Alias GLOBUSUSERS = ALL, !root
daemon ALL=(GLOBUSUSERS) \
    NOPASSWD: /shared/osg-1.0.0/globus/libexec/globus-gridmap-and-execute \
    -g /etc/grid-security/grid-mapfile \
    /shared/osg-1.0.0/globus/libexec/globus-job-manager-script.pl *
daemon ALL=(GLOBUSUSERS) \
    NOPASSWD: /shared/osg-1.0.0/globus/libexec/globus-gridmap-and-execute \
    -g /etc/grid-security/grid-mapfile \
    /shared/osg-1.0.0/globus/libexec/globus-gram-local-proxy-tool *
[root@testserver1 ~]# cd $VDT_LOCATION

PRIMA:

[root@testserver1 osg-1.0.0]# cp /shared/osg-1.0.0/post-install/gsi-authz.conf \
    /etc/grid-security/.
[root@testserver1 osg-1.0.0]# cp /shared/osg-1.0.0/post-install/prima-authz.conf \
    /etc/grid-security/.

GUMS client:

[root@testserver1 osg-1.0.0]# vdt-control --enable gums-host-cron
[root@testserver1 osg-1.0.0]# vdt-control --on gums-host-cron

PRIMA-GT4:

[root@testserver1 osg-1.0.0]# $VDT_LOCATION/vdt/setup/configure_prima_gt4 \
    --enable --gums-server testserver1.sprace.org.br

Condor-cron is used to run the OSG-RSV probes:

[root@testserver1 osg-1.0.0]# vim $VDT_LOCATION/condor-cron/etc/condor_config
RELEASE_DIR = /shared/osg-1.0.0/condor-cron
CONDOR_HOST = 192.168.1.152
LOCAL_DIR = $(RELEASE_DIR)/local.testserver1
UID_DOMAIN = grid
FILESYSTEM_DOMAIN = grid
COLLECTOR_NAME = GRIDUNESP
HOSTALLOW_WRITE = testserver1.sprace.org.br,*.grid

[root@testserver1 osg-1.0.0]# more /etc/passwd | grep condor | cut -d : -f3-4
501:501

[root@testserver1 osg-1.0.0]# vim $VDT_LOCATION/condor-cron/local.testserver1/condor_config.local
CONDOR_HOST = 192.168.1.152
#UID_DOMAIN = sprace.org.br
#FILESYSTEM_DOMAIN = sprace.org.br
#COLLECTOR_NAME = Personal Condor at testserver1.sprace.org.br
CONDOR_IDS = 501.501
#LOCK =
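The UID/GID lookup above (501:501) can also be done with getent, which works regardless of whether the account lives in /etc/passwd or another NSS source. A small sketch; the helper name is ours:

```shell
# condor_ids: print "UID.GID" for a user, in the dot-separated format the
# CONDOR_IDS setting expects. Illustrative helper only.
condor_ids() {
    getent passwd "$1" | cut -d: -f3,4 | tr : .
}

# Example: condor_ids condor   -> e.g. 501.501
```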

More Services to Configure...

For this example we also commented out the whole security-setup section in the Condor-cron config file.

MonALISA: you have to complete your MonALISA monitoring configuration:

[root@testserver1 osg-1.0.0]# vim $VDT_LOCATION/MonaLisa/Service/VDTFarm/ml.properties
MonaLisa.Location=Sao Paulo
MonaLisa.Country=Brazil
MonaLisa.LAT=-23.5592
MonaLisa.LONG=-46.7358
lia.monitor.group=osg
[root@testserver1 osg-1.0.0]# vim $VDT_LOCATION/MonaLisa/Service/CMD/ml_env
JAVA_HOME=/shared/osg-1.0.0/jdk1.5
FARM_NAME=GRIDUNESP
[root@testserver1 osg-1.0.0]# vdt-register-service --name MLD --enable

Gratia metric probe:

[root@testserver1 osg-1.0.0]# $VDT_LOCATION/vdt/setup/configure_gratia --probe metric \
    --site-name GRIDUNESP

CEMon:

[root@testserver1 osg-1.0.0]# $VDT_LOCATION/vdt/setup/configure_cemon \
    --consumer https://osg-ress-1.fnal.gov:8443/ig/services/ceinfocollector --topic OSG_CE

OSG Directories

We will need directories to hold grid data, applications, etc. [4]:

1. OSG_GRID: directory where the worker node client (wn-client) or packages to use the grid are installed.
2. OSG_APP: directory available to install job-specific applications and binaries.
3. OSG_DATA: directory available for jobs to store data and to stage data in and out; this directory is shared across the cluster.
4. OSG_WN_TMP: directory available as scratch space for worker nodes; this directory is local to each node.
5. OSG_SITE_READ: directory available for staging in files; must be readable by all worker nodes.
6. OSG_SITE_WRITE: directory available for staging out files; must be writable by all worker nodes.

OSG Directories

So, let's do it!

[root@testserver1 osg-1.0.0]# mkdir /shared/osg_app
[root@testserver1 osg-1.0.0]# mkdir /shared/osg_app/{app,data,read,write}
[root@testserver1 osg-1.0.0]# chmod 1777 /shared/osg_app/*

Now we retrieve some information from GUMS (https://localhost:8443/gums again) in order to create our osg-user-vo-map.txt, and copy and paste it into this file:

[root@testserver1 osg-1.0.0]# vim $VDT_LOCATION/monitoring/osg-user-vo-map.txt
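For reference, each non-comment line of osg-user-vo-map.txt maps a local Unix account to a VO name. A hypothetical fragment (the account names are invented for illustration; generate the real entries from your GUMS server):

```
# osg-user-vo-map.txt: <local account> <VO>
dosar01 dosar
cms01 cms
```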

Configure OSG Attributes

The major configuration is done in config.ini:

[root@testserver1 osg-1.0.0]# cp $VDT_LOCATION/monitoring/simple-config.ini \
    $VDT_LOCATION/monitoring/config.ini
[root@testserver1 osg-1.0.0]# vim $VDT_LOCATION/monitoring/config.ini
localhost = testserver1.sprace.org.br
admin_email = mafd@mail.cern.ch
osg_location = /shared/osg-1.0.0
site_name = GRIDUNESP
sponsor = osg
contact =
city = Sao Paulo
country = Brazil
longitude = -46.7358
latitude = -23.5592

[PBS]
enabled = False

[Condor]
enabled = True
home = /opt/condor
wsgram = True
condor_config = /opt/condor/etc/condor_config

[SGE]
enabled = False

[LSF]
enabled = False

[FBS]
enabled = False

Configure OSG Attributes

[Managed Fork]
enabled = True
condor_location = /opt/condor
condor_config = /opt/condor/etc/condor_config

[Storage]
grid_dir = /shared/osg-wn-client
app_dir = /shared/osg_app/app
data_dir = /shared/osg_app/data
worker_node_temp = /scratch/osg
site_read = /shared/osg_app/read
site_write = /shared/osg_app/write
se_available = True
default_se = testserver1.sprace.org.br

[GIP]
batch = condor
gsiftp_path = /shared/osg_app/data

[RSV]
enabled = True
rsv_user = mdias
enable_ce_probes = True
ce_hosts = testserver1.sprace.org.br
enable_gridftp_probes = True
gridftp_hosts = testserver1.sprace.org.br
gridftp_dir = /shared/osg_app/data
enable_srm_probes = True
srm_hosts = testserver1.sprace.org.br
srm_dir = /pnfs/sprace.org.br/data/mdias
proxy_file = /tmp/x509up_u500

[MonaLisa]
enabled = True

[root@testserver1 osg-1.0.0]# cd monitoring/

Configure OSG Attributes

Remark: the options

srm_hosts = testserver1.sprace.org.br
srm_dir = /pnfs/sprace.org.br/data/mdias

are not in the original file. Verify and correct errors in your config.ini using:

[root@testserver1 monitoring]# ./configure-osg.py -v

Correct any reported errors, then run it for real:

[root@testserver1 monitoring]# ${VDT_LOCATION}/monitoring/configure-osg.py -c -f config.ini
[root@testserver1 monitoring]# cd ..

Don't forget to turn off MySQL, Apache and Tomcat...

[root@testserver1 osg-1.0.0]# $VDT_LOCATION/post-install/apache stop
[root@testserver1 osg-1.0.0]# $VDT_LOCATION/post-install/tomcat-55 stop
[root@testserver1 osg-1.0.0]# $VDT_LOCATION/post-install/mysql stop

and finish it (or almost...):

[root@testserver1 osg-1.0.0]# vdt-control --on --force

and perform the crucial test:

[root@testserver1 osg-1.0.0]# $VDT_LOCATION/verify/site_verify.pl

References

[1] https://twiki.grid.iu.edu/bin/view/Security/OsgRaOperations#Letter_requesting_GridAdmin_priv
[2] https://twiki.grid.iu.edu/bin/view/Security/CertScriptsPackage
[3] https://twiki.grid.iu.edu/bin/view/ArchivedDocumentation/OSG/OSG080/InstallConfigureAndManageGUMS
[4] https://twiki.grid.iu.edu/bin/view/ReleaseDocumentation/EnvironmentVariables#Storage_Related_Parameters