CE+WN+siteBDII Installation and configuration


1 The EPIKH Project (Exchange Programme to advance e-infrastructure Know-How) CE+WN+siteBDII Installation and configuration Andrea Cortellese National Institute of Nuclear Physics Latin America Joint GISELA/EPIKH School for Grid Site Administrators Valparaiso,

2 Outline
Computing Element overview
Worker Node overview
CE CREAM overview
glite stack overview
glite CE CREAM and site BDII
Installation on CE and WN
Configuration on CE and WN

3 glite stack overview 3

4 glite overview worker node 4

5 Computing Element Overview
The Computing Element provides some of the main services of a site. Main functionalities:
- job management (job submission, job control)
- job status updates for the WMS
- publication of all the site information (site location, queues, CPU availability and so on) through the site BDII service
It can run several kinds of batch system: Torque + MAUI, LSF, SGE, Condor.

6 Torque + MAUI
Torque server service: pbs_server provides basic batch services such as receiving/creating a batch job.
Torque client service: pbs_mom places jobs into execution. It is also responsible for returning the job's output to the user.
MAUI system service: the job scheduler contains the site's policy deciding which job is going to be executed and when.
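A quick sanity check that the batch services are actually running (a sketch, assuming the standard init scripts shipped with the Torque and MAUI packages):
# /etc/init.d/pbs_server status   # Torque server daemon on the CE
# /etc/init.d/maui status         # MAUI scheduler on the CE
# qmgr -c 'print server'          # dump the Torque server configuration
On the WN the corresponding check is /etc/init.d/pbs_mom status.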

7 Site BDII*
By default it is installed on the CE.
It collects information from all the site GRISes** (for example SE, RB, LFC, etc.).
The service is named bdii. Log file: /opt/bdii/var/bdii.log
*BDII = Berkeley Database Information Index
**GRIS = Grid Resource Information Service
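Anything published by the site BDII can be inspected from any machine with an LDAP client; a minimal query sketch, where <site-bdii-host> and <SITE_NAME> are placeholders for your own values:
# ldapsearch -x -h <site-bdii-host> -p 2170 -b mds-vo-name=<SITE_NAME>,o=grid | less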

8 Worker Node Overview
Worker Nodes are the machines which actually execute your jobs.
Users can only access their services through a Computing Element.
Their characteristics are collected by the Computing Element, which publishes all the information through the BDII services.

9 CE CREAM overview
CREAM = Computing Resource Execution And Management.
It accepts job submission requests coming from a WMS, and other job management requests.
It exposes a web service interface.
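From a User Interface holding a valid VOMS proxy, the CREAM web service can be probed directly with the CREAM CLI; <ce-host> below is a placeholder for your CE hostname:
$ glite-ce-service-info <ce-host>:8443
$ glite-ce-allowed-submission <ce-host>:8443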

10 Requirements
Two or more machines: one will be used for the CE installation, the others for the WN installation.
From glite 3.2 onwards, 3 machines are needed for the installation: one for the CE, a second for the site BDII and the last for the WN.
Architecture: 64 bit. Operating System: Scientific Linux 5.
At least one machine with a public IP address, direct and reverse address resolution in the DNS, and equipped with an X.509 host certificate.
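A quick way to verify the network and certificate prerequisites (a sketch; the IP address is a placeholder, and the certificate check only applies once the host certificate has been installed, see slide 17):
# hostname -f                      # must return the fully qualified host name
# host $(hostname -f)              # direct DNS resolution
# host <public-ip-address>         # reverse DNS resolution
# openssl x509 -in /etc/grid-security/hostcert.pem -noout -subject -dates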

11 First machine: CE Cream Installation (on Torque/PBS) 11

12 Network Time Protocol
Let's check whether the machine's date is correct:
# date
If the date isn't correct, let's synchronize against an INFN server:
# /etc/init.d/ntpd stop
# ntpdate ntp-1.infn.it
# /etc/init.d/ntpd start
Finally, make the service start on boot:
# chkconfig ntpd on
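To verify that ntpd is really keeping the clock in sync (the server name is just an example; use whichever NTP server your site prefers):
# grep '^server' /etc/ntp.conf     # e.g. server ntp-1.infn.it
# ntpq -p                          # the peer currently in use is marked with '*'
# chkconfig --list ntpd            # confirm it is enabled at boot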

13 Repository set up (by UTFSM repo)
Add to the system repositories the ones specific to the middleware to install:
# cd /etc/yum.repos.d/
# mv dag.repo dag.repo.stop
# wget <.repo URL>
# wget <.repo URL>
# wget <.repo URL>
# wget <.repo URL>
# wget <.repo URL>

14 Repository set up (by CNAF repo)
Add to the system repositories the ones specific to the middleware to install:
# cd /etc/yum.repos.d/
# mv dag.repo dag.repo.stop
# wget <.repo URL>
# wget <.repo URL>
# wget <.repo URL>
# wget <same base URL>/glite-cream_torque.repo
# wget <.repo URL>

15 Which metapackages are we going to install?
There are several metapackages to install:
- lcg-ca: LHC Computing Grid rpm collection supporting the external Certification Authorities.
- ig_cream_torque: INFNGRID Computing Element CREAM and Torque services rpms.
- ig_bdii: INFNGRID BDII services rpms.

16 Middleware component installation
Use yum to install the needed packages:
# yum clean all
# yum install -y lcg-ca
# yum install -y ig_cream_torque
# yum install -y gilda_utils
Sometimes it is necessary to add manually some packages required by the middleware components:
# yum install -y xml-commons-apis

17 Before configuration
Some preliminary steps before configuration:
- copy the host certificate into the default path:
# cd
# mkdir cert
# cd cert
# wget <URL of clusterxx.tar.gz>
# tar xvzf clusterxx.tar.gz
# mv hostkey.pem /etc/grid-security/hostkey.pem
# mv hostcert.pem /etc/grid-security/hostcert.pem
# chmod 400 /etc/grid-security/hostkey.pem
# chmod 644 /etc/grid-security/hostcert.pem
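It is worth checking that certificate and key actually match and that the permissions are correct; a quick sketch:
# openssl x509 -noout -modulus -in /etc/grid-security/hostcert.pem | openssl md5
# openssl rsa -noout -modulus -in /etc/grid-security/hostkey.pem | openssl md5
# ls -l /etc/grid-security/host*.pem
The two md5 sums above must be identical.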

18 Before configuration/2
- Generate backup copies of the YAIM templates:
# cd /opt/glite/yaim/examples/
# mkdir backup
# cp -r wn-list.conf ig-users.conf ig-groups.conf siteinfo/vo.d/ siteinfo/services/ siteinfo/ig-site-info.def backup/
- Configuration files to be modified:
1) /opt/glite/yaim/examples/siteinfo/services/glite-creamce
2) /opt/glite/yaim/examples/wn-list.conf
3) /opt/glite/yaim/examples/ig-groups.conf
4) /opt/glite/yaim/examples/ig-users.conf
5) /opt/glite/yaim/examples/siteinfo/ig-site-info.def

19 YAIM configuration
The main file to edit is ig-site-info.def, where you specify the general settings and the parameters of the other components (CE CREAM).
Set the variables with correct values, replacing the example ones. Let's start by editing the first file, glite-creamce:
# vim services/glite-creamce
CEMON_HOST=${CE_HOST}
CREAM_DB_USER="cream_db_user"
CREAM_DB_PASSWORD="cream_pass"
BLPARSER_HOST=${CE_HOST}

20 YAIM configuration/3
Let's now edit the wn-list.conf file with the editor you are most familiar with:
# vim wn-list.conf
### Delete all the example values present
clusterxx.fis.utfsm.cl   # insert the worker nodes' hostnames (or IP addresses) if missing
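For a site with several worker nodes the file simply lists one fully qualified hostname per line; the names below are purely hypothetical:
wn01.example-site.org
wn02.example-site.org
wn03.example-site.org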

21 YAIM configuration/4
Here are some settings to support the GISELA VOs:
# wget <URL of EELA-2VOs.tgz>
# tar xvf EELA-2VOs.tgz
# cp siteinfo/vo.d/* /opt/glite/yaim/examples/siteinfo/vo.d
// Append the EELA VOs' users to the users' definition file
# cat siteinfo/users.conf >> /opt/glite/yaim/examples/ig-users.conf
// Append the EELA VOs' groups to the groups' definition file
# cat siteinfo/groups.conf >> /opt/glite/yaim/examples/ig-groups.conf
# cd /etc/yum.repos.d
# wget <.repo URL>
# yum install eela-vomscerts

22 YAIM configuration/5
In this way we have modified these two files:
- ig-groups.conf
- ig-users.conf
# tail ig-groups.conf   # ONLY to verify that the lines have been appended correctly
"/VO=prod.vo.eu-eela.eu/GROUP=/prod.vo.eu-eela.eu/ROLE=lcgadmin":seelaprod:127702:sgm:
"/VO=prod.vo.eu-eela.eu/GROUP=/prod.vo.eu-eela.eu/ROLE=production":peelaprod:127701:p$
"/VO=prod.vo.eu-eela.eu/GROUP=/prod.vo.eu-eela.eu":eelaprod:127700::
"/VO=oper.vo.eu-eela.eu/GROUP=/oper.vo.eu-eela.eu/ROLE=lcgadmin":seelaoper:127705:sgm:
"/VO=oper.vo.eu-eela.eu/GROUP=/oper.vo.eu-eela.eu/ROLE=production":peelaoper:127704:p$
"/VO=oper.vo.eu-eela.eu/GROUP=/oper.vo.eu-eela.eu":eelaoper:127703::

23 YAIM configuration/6
# tail ig-users.conf   # ONLY to verify that these lines have been appended correctly at the end of the file
:eelaprod000:127700:eelaprod:prod.vo.eu-eela.eu::
:eelaprod001:127700:eelaprod:prod.vo.eu-eela.eu::
:eelaprod002:127700:eelaprod:prod.vo.eu-eela.eu::
:eelaprod003:127700:eelaprod:prod.vo.eu-eela.eu::
:eelaprod048:127700:eelaprod:prod.vo.eu-eela.eu::
:eelaprod049:127700:eelaprod:prod.vo.eu-eela.eu::
:peelaprod050:127701,127700:peelaprod,eelaprod:prod.vo.eu-eela.eu:prd:
:peelaprod051:127701,127700:peelaprod,eelaprod:prod.vo.eu-eela.eu:prd:
:peelaprod052:127701,127700:peelaprod,eelaprod:prod.vo.eu-eela.eu:prd:
:peelaprod053:127701,127700:peelaprod,eelaprod:prod.vo.eu-eela.eu:prd:
:seelaprod148:127702,127700:seelaprod,eelaprod:prod.vo.eu-eela.eu:sgm:
:seelaprod149:127702,127700:seelaprod,eelaprod:prod.vo.eu-eela.eu:sgm:
:eelaoper150:127703:eelaoper:oper.vo.eu-eela.eu::
:eelaoper151:127703:eelaoper:oper.vo.eu-eela.eu::
:eelaoper198:127703:eelaoper:oper.vo.eu-eela.eu::
:eelaoper199:127703:eelaoper:oper.vo.eu-eela.eu::
:peelaoper200:127704,127703:peelaoper,eelaoper:oper.vo.eu-eela.eu:prd:
:peelaoper201:127704,127703:peelaoper,eelaoper:oper.vo.eu-eela.eu:prd:
:seelaoper298:127705,127703:seelaoper,eelaoper:oper.vo.eu-eela.eu:sgm:
:seelaoper299:127705,127703:seelaoper,eelaoper:oper.vo.eu-eela.eu:sgm:

24 YAIM configuration/7
Let's finish with ig-site-info.def; there are many variables to set:
# vim ig-site-info.def
WN_LIST=/opt/glite/yaim/examples/wn-list.conf
USERS_CONF=/opt/glite/yaim/examples/ig-users.conf
GROUPS_CONF=/opt/glite/yaim/examples/ig-groups.conf
MYSQL_PASSWORD=good_mysql_pass   # any password you want
SITE_EMAIL="mail1@test"
SITE_NAME=UTFSMxx-GRID-SITE
SITE_LAT=36.76
SITE_LONG=

25 YAIM configuration/8
# vim ig-site-info.def
CE_HOST=ceristXX.grid.arn.dz   # substitute with your machine's hostname
CE_CPU_MODEL=XEON              # cat /proc/cpuinfo
CE_CPU_VENDOR=Intel
CE_CPU_SPEED=2230
CE_OS=ScientificSL
CE_OS_RELEASE=5.5              # cat /etc/redhat-release
CE_OS_VERSION="Boron"
CE_OS_ARCH=x86_64
CE_MINPHYSMEM=512              # cat /proc/meminfo on the WN
CE_MINVIRTMEM=512
CE_PHYSCPU=1                   # total physical CPUs in the site (dual dual-core)
CE_LOGCPU=4
CE_SMPSIZE=4
CE_OUTBOUNDIP=TRUE
CE_INBOUNDIP=FALSE
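The hardware-related values can be read directly from the WN; a quick sketch (the numbers will of course differ on your hardware):
# grep -c '^processor' /proc/cpuinfo                   # logical CPUs -> CE_LOGCPU / CE_SMPSIZE
# grep 'physical id' /proc/cpuinfo | sort -u | wc -l   # physical CPUs -> CE_PHYSCPU
# grep 'model name' /proc/cpuinfo | sort -u            # -> CE_CPU_MODEL / CE_CPU_SPEED
# grep MemTotal /proc/meminfo                          # in kB -> CE_MINPHYSMEM (in MB)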

26 YAIM configuration/9
# vim ig-site-info.def
CE_RUNTIMEENV="
 LCG-2
 LCG-2_1_0
 LCG-2_1_1
 LCG-2_2_0
 LCG-2_3_0
 LCG-2_3_1
 LCG-2_4_0
 LCG-2_5_0
 LCG-2_6_0
 LCG-2_7_0
 GLITE-3_0_0
 GLITE-3_1_0
 GLITE-3_2_0
 R-GMA
 SI00MeanPerCPU_3800
 SF00MeanPerCPU_3800
"
CE_SI00=3800
CE_SF00=3800

27 YAIM configuration/10
# vim ig-site-info.def
CE_CAPABILITY="CPUScalingReferenceSI00=23.75"
CE_OTHERDESCR="Cores=4,Benchmark=6.5-HEP-SPEC06"
SE_MOUNT_INFO_LIST="${INT_HOST_SW_DIR}:/opt/exp_soft,/opt/exp_soft"

28 YAIM configuration/11
How to set CE_SI00, CE_SF00, CE_CAPABILITY, CE_OTHERDESCR? Try to search for your value at this link:
For example, if you have an Intel XEON with no Hyper-Threading, you will find in the table of the previous link a value of 95 and a conversion factor of 1 HS06 = 40, so:
CE_SI00=3800
CE_SF00=3800
CE_CAPABILITY="CPUScalingReferenceSI00=3800"
CE_OTHERDESCR="Cores=4,Benchmark=23.75-HEP-SPEC06"
where (3800/40)/4 = 23.75

29 YAIM configuration/12
# vim ig-site-info.def
BATCH_SERVER=clusterXX.fis.utfsm.cl
JOB_MANAGER=pbs
CE_BATCH_SYS=pbs
BATCH_LOG_DIR=/var/spool/pbs
APEL_DB_PASSWORD="anything"
DGAS_ACCT_DIR=/var/spool/pbs/server_priv/accounting
VOS="dteam lhcb ops prod.vo.eu-eela.eu oper.vo.eu-eela.eu"
QUEUES="cert prod oper"
CERT_GROUP_ENABLE="ops dteam"
PROD_GROUP_ENABLE="prod.vo.eu-eela.eu /VO=prod.vo.eu-eela.eu/GROUP=/prod.vo.eu-eela.eu/role=lcgadmin /VO=prod.vo.eu-eela.eu/GROUP=/prod.vo.eu-eela.eu/role=production"
OPER_GROUP_ENABLE="oper.vo.eu-eela.eu /VO=oper.vo.eu-eela.eu/GROUP=/oper.vo.eu-eela.eu/role=lcgadmin /VO=oper.vo.eu-eela.eu/GROUP=/oper.vo.eu-eela.eu/role=production"

30 YAIM configuration/13
If you get syntax errors, check them on the console with the "source ig-site-info.def" command.
Going crazy with all these settings? In this tgz you will find all the variables already set for the cerist44.grid.arn.dz CE, which was installed for testing. However, if you are configuring a real site you will have to change most of them, so pay attention!
# wget <URL of the example tgz>
After editing you can launch the command:
# /opt/glite/yaim/bin/ig_yaim -c -s ig-site-info.def -n ig_cream_torque
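Once ig_yaim finishes without errors, a quick sketch to verify that the batch system picked up the queues defined in QUEUES:
# /etc/init.d/pbs_server status
# qstat -Q                        # should list the cert, prod and oper queues
# qmgr -c 'print queue cert'      # show what YAIM generated for one queue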

31 Fixing errors
Check that Tomcat is running after the configuration. If you get this message:
# /etc/init.d/tomcat5 status
/etc/init.d/tomcat5 is stopped
ONLY IF TOMCAT IS STOPPED, try this solution:
# rm -fr /var/lib/tomcat5/common/lib/jakarta*
# /etc/init.d/tomcat5 start
Starting tomcat5: /usr/bin/rebuild-jar-repository: error: Could not find log4j Java extension for this JVM
/usr/bin/rebuild-jar-repository: error: Some detected jars were not found for this jvm
[ OK ]

32 Second machine: BDII Installation (on Torque/PBS) 32

33 BDII installation
On the NEW machine you have to repeat a few steps of the previous installation:
- time synchronization against the INFN NTP server
- the repository setup (excluding the CREAM repository)
  # wget <.repo URL>
- the package installation
  # yum install -y lcg-ca
  # yum install -y ig_bdii_site
- the host certificate setup
- the GISELA settings download and setup
- the update of the configuration files (same as the previous installation: just copy them and update the hostnames)
  - /opt/glite/yaim/examples/wn-list.conf
  - /opt/glite/yaim/examples/siteinfo/ig-site-info.def
  - /opt/glite/yaim/examples/siteinfo/services/ig-bdii_site (new file to configure)

34 BDII configuration
We now have to configure /opt/glite/yaim/examples/siteinfo/services/ig-bdii_site:
# vim services/ig-bdii_site
SITE_DESC="utfsm Grid Site CE"
SITE_SUPPORT_EMAIL="prod_mail@yourdomain.it"
SITE_SECURITY_EMAIL="sec_mail@yourdomain.it"
SITE_LOC="Mexico city, DF"
SITE_OTHER_GRID="WLCG EGEE EUMED"
BDII_REGIONS="CE"
BDII_CE_URL="ldap://$CE_HOST:2170/mds-vo-name=resource,o=grid"
After editing, launch the command (but first disable SELinux):
# /opt/glite/yaim/bin/ig_yaim -c -s ig-site-info.def -n ig_bdii_site
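After YAIM completes, a minimal check that the site BDII is up and publishing (run on the BDII machine itself):
# /etc/init.d/bdii status
# tail /opt/bdii/var/bdii.log
# ldapsearch -x -h localhost -p 2170 -b o=grid | grep -c Glue    # a non-zero count means entries are being published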

35 Third machine: WN Cream Installation (on Torque/PBS) 35

36 WN - Network Time Protocol
Let's check whether the machine's date is correct:
# date
If the date isn't correct, let's synchronize against the INFN server:
# /etc/init.d/ntpd stop
# ntpdate ntp-1.infn.it
# /etc/init.d/ntpd start
Finally, make the service start on boot:
# chkconfig ntpd on

37 WN - Repository set up (by UTFSM repo)
Add to the system repositories the ones specific to the middleware to install:
# cd /etc/yum.repos.d/
# mv dag.repo dag.repo.stop
# REPO="dag ig lcg-ca glite-wn glite-wn_torque"
# for rep_name in $REPO; do wget <repo base URL>/$rep_name.repo; done

38 WN - Repository set up (by CNAF repo)
Add to the system repositories the ones specific to the middleware to install:
# cd /etc/yum.repos.d/
# mv dag.repo dag.repo.stop
# REPO="dag ig lcg-ca glite-wn glite-wn_torque"
# for rep_name in $REPO; do wget <repo base URL>/$rep_name.repo; done

39 Which metapackages are we going to install?
There are several metapackages to install:
- lcg-ca: LHC Computing Grid rpm collection supporting the external Certification Authorities.
- ig_wn_torque_noafs: INFNGRID Worker Node with the Torque client, needed to talk to the Torque server. We decided not to install the AFS file system. This metapackage is installed with the groupinstall option.

40 WN - Middleware component installation
Use yum to install the needed packages:
# yum clean all
# yum install -y lcg-ca
# yum groupinstall -y ig_wn_torque_noafs

41 WN - YAIM Configuration
You can reuse the same configuration files edited on the CE:
- this can be done on all the worker nodes of a site;
- so you don't need to re-edit anything!
Copy the files from the CE machine:
# cd /opt/glite/yaim/examples/
# scp -r root@hostXX.utfsm.cl:/opt/glite/yaim/examples/* .   # hostXX = your CE hostname
Ready to configure now:
# /opt/glite/yaim/bin/ig_yaim -c -s ig-site-info.def -n ig_wn_torque_noafs
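A quick sketch to verify, after YAIM, that the Torque client on the WN is running and pointing at the right batch server (paths assume the default PBS home /var/spool/pbs used in this setup):
# /etc/init.d/pbs_mom status
# cat /var/spool/pbs/server_name        # should contain the batch server hostname
# cat /var/spool/pbs/mom_priv/config    # should reference the batch server as well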

42 Testing installation 42

43 Tests on CE
SSH into the CE to check that the CE can see the WN and that all the main services are up & running:
# pbsnodes
cerist45.grid.arn.dz
  state = free
  np = 2
  properties = lcgpro
  ntype = cluster
  status = opsys=linux,uname=linux grid-test-63.trigrid.it el5 #1 [cut]
# /etc/init.d/glite status
*** tomcat5:
/etc/init.d/tomcat5 is already running (1514)
*** glite-lb-locallogger:
glite-lb-logd running
glite-lb-interlogd running
# /etc/init.d/globus-gridftp status
globus-gridftp-server (pid 25452) is running...

44 Tests on CE
SSH into the CE and then become one of the pool users:
# su eumed001
Create a file with the following content:
$ vi test.sh
#!/bin/sh
sleep 20    # useful to have time to see the job status
hostname
Give it execute permission:
$ chmod 700 test.sh

45 Tests on CE
Launch the job locally on the CE:
$ qsub -q eumed test.sh
Then check the list of jobs in execution on the CE:
$ qstat -a
ce.localdomain:
                                                      Req'd  Req'd   Elap
Job ID      Username  Queue  Jobname  SessID NDS TSK  Memory Time  S Time
wn.localdo  gilda001  short  test.sh                         :15   R  --
If you want more info:
$ qstat -f 3
If you want to abort a job:
$ qdel 3    # 3 is the job id

46 Tests on CE
If the qstat -a command gives no output, no jobs are being executed on the CE: your previous job has terminated, so you can now list its output files.
$ ls
test.sh.e3  test.sh.o3
$ cat test.sh.e3    # error file
$ cat test.sh.o3    # output file
wn.localdomain

47 JDL example
$ vim hostname-cream.jdl
Type = "Job";
JobType = "Normal";
Executable = "/bin/hostname";
StdOutput = "hostname.out";
StdError = "hostname.err";
OutputSandbox = {"hostname.err","hostname.out"};
Arguments = "-f";
OutputSandboxBaseDestUri = "gsiftp://localhost";
ShallowRetryCount = 3;

48 Working test
SSH into the UI to test whether the CE can receive and execute a simple job:
$ ssh userxx@userinterface.utfsm.cl    # password: userxx
$ voms-proxy-init --voms prod.vo.eu-eela.eu
[cut]
[rotondo@genius ~]$ glite-ce-delegate-proxy -e hostxx.utfsm.cl riccardo
:36:21,683 WARN - No configuration file suitable for loading. Using built-in configuration
:36:26,389 NOTICE - Proxy with delegation id [riccardo] succesfully delegated to endpoint [...]
[rotondo@genius ~]$ glite-ce-job-submit -r hostxx.utfsm.cl:8443/cream-pbs-cert -D riccardo hostname-cream.jdl
:39:06,444 WARN - No configuration file suitable for loading. Using built-in configuration
$ glite-ce-job-status <JobID URL returned by the submit command>
JobID=[...]
Status = [DONE-OK]
ExitCode = [0]
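Once the job reaches DONE-OK, its output sandbox can be retrieved from the UI with the CREAM CLI (substitute the real JobID URL printed by the submit command):
$ glite-ce-job-output <JobID>
$ cat <directory created by the previous command>/hostname.out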

49 Troubleshooting
Which logs should be checked if something goes wrong?
- /var/log/messages, for general errors
- /opt/glite/var/log (especially glite-ce-cream.log)
- /var/spool/pbs/server_priv/accounting/<date>, if even local submission to the batch system doesn't work
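A quick way to keep an eye on the most relevant CE logs while submitting a test job (file names as listed above):
# tail -f /opt/glite/var/log/glite-ce-cream.log
# grep -i error /opt/glite/var/log/glite-ce-cream.log | tail
# ls -lrt /var/spool/pbs/server_priv/accounting/ | tail    # most recent accounting files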

50 References
- INFNGRID generic installation guide
- YAIM configuration variables
- CE CREAM installation guide: gLite CREAM CE 3.2 SL5 Installation Guide [INFNGRID Release Wiki]
- YAIM system administrator guide
- EUMEDGRID wiki: EuMedGRID sites installation and setup tips, How To Check And Test Your CREAM CE

51 Thank you for your kind attention! Any questions? 51
