glideinwms Frontend Installation


1 glideinwms Training @ UCSD: glideinwms Frontend Installation, Part 1 - Condor Installation, by Igor Sfiligoi (UCSD). UCSD, Jan 17th 2012

2 Overview: Introduction; Planning and Common setup; Central Manager Installation; Submit node Installation

3 Refresher - Glideins: A glidein is just a properly configured Condor execution node submitted as a Grid job; glideinwms provides the automation. [Diagram: submit nodes (Schedd), central manager (Collector, Negotiator), glideinwms, Grid interfaces (CREAM, Globus), and execution nodes running glideins (Startd) with user jobs]

4 Refresher - Glideins: The glideinwms triggers glidein submission; the regular negotiator matches jobs to glideins. [Same diagram as the previous slide]

5 Bottom line: Condor is king! (glideinwms is just a small layer on top)

6 Condor installation: Proper Condor installation and configuration is the most important task; Condor will do most of the work and is thus the most resource hungry. The glideinwms installation is almost an afterthought, although it does require a proper security config of Condor. The glideinwms installation proper will be described in a separate talk.

7 Planning and Common setup

8 Refresher - Condor: Two main node types: submit node(s) and the central manager (execute nodes are dynamic glideins). Public TCP/IP networking is needed; GSI is used for network security. [Diagram: Submit node (Schedd), Central manager (Collector, Negotiator), glidein]

9 Planning the setup: In theory, all Condor daemons can be installed on a single node. However, if at all possible, put the Central Manager on a dedicated node, i.e. do not use it as a submit node too, both for security and stability reasons. You may want/need more than one submit node, depending on expected use and available HW; you do need at least one, though.

10 Common system considerations: Condor is supported on a wide variety of platforms, including Linux (e.g. RHEL5), MacOS and Windows; Linux is recommended in OSG (and assumed in the rest of this talk). GSI security requires a host or service certificate plus CAs & CRLs, typically delivered via OSG RPMs (but other means are acceptable). The full Grid Client software is recommended (for ease of ops).

11 OSG Grid Client: Requires RHEL5-compatible Linux (RHEL6 support promised for early 2012). Procedure in a nutshell: add the EPEL and OSG RPM repositories to the system configuration, yum install osg-ca-certs, yum install osg-client, and enable the CRL fetching crontab. Other Grid clients (e.g. EGI/gLite) will work just as well.

12 Requesting a host certificate: OSG provides a script to talk to DOEGrids. Procedure in a nutshell: install the OSG client, yum install osg-cert-scripts, run cert-request, wait for the certificate to be issued, run cert-retrieve, and cp the result into /etc/grid-security/. If you have other ways to obtain a host cert, feel free to use them.

13 Condor Central Manager

14 Refresher - Central Manager: Two (groups of) processes: the Collector and the Negotiator. The Collector defines the Condor pool: it knows about all the glideins it owns and about all the schedds. The Negotiator does the matchmaking: it decides who gets what resources.

15 Condor Collector considerations: The Collector is the repository of all knowledge; all other daemons report to it, including the glideins, who get its address at run-time. It must process lots of info: one update every 5 minutes from each and every daemon, which gets expensive with strong security. It is therefore typically deployed as a tree of collectors: all security is handled in the leafs, while the top one still has the complete picture. [Diagram: central manager with the Negotiator and a tree of Collectors; glideins report to the leaf collectors]

16 CCB: An additional cost: the Condor collectors (the leafs in the tree of collectors) are also acting as CCBs ("call me back: I want to connect to the execute node to transfer files"). Each glidein will open 5+ long-lived TCP sockets, so make sure you have enough file descriptors; the default OS limit is 1024 per process. Plan on having one CCB per 100 glideins.
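As a back-of-the-envelope check of the numbers above, a minimal shell sketch (the per-glidein socket count and glideins-per-CCB ratio are taken straight from the slide, not measured):

```shell
# Check the current per-process file descriptor limit (often 1024 by default)
ulimit -n

# Rough CCB sizing using the slide's figures: 5 long-lived TCP sockets
# per glidein, 100 glideins per CCB
GLIDEINS_PER_CCB=100
SOCKETS_PER_GLIDEIN=5
echo $(( GLIDEINS_PER_CCB * SOCKETS_PER_GLIDEIN ))   # 500 fds, safely under 1024
```

If the product approaches the fd limit, either raise the limit or add more leaf collectors.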

17 High availability (theory): The central manager can be a single point of failure: if it dies, the Condor pool dies with it! To avoid this, one can deploy multiple CMs: all daemons will advertise to 2 (or more) Collectors, so all CMs will have the same view of the world. (This is currently not supported by glideinwms.) There can only be one Negotiator, though: one negotiator will be active, all others in standby. More details in the Condor manual.

18 Hardware needs: The tree of collectors spreads the load over multiple processes, so several CPUs come in handy. The Negotiator is single threaded and will benefit from a fast CPU. Memory usage is not terrible: O(100k) per glidein to store ClassAds; the exact footprint depends on how many additional attributes the VO defines. Concrete CMS example: 25k glideins ~ 6G memory. Negligible disk IO.
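The CMS example above implies roughly 250 KB of ClassAd data per glidein; a quick sanity check of that arithmetic (illustrative numbers only):

```shell
# 25k glideins at ~250 KB of ClassAds each should land near the quoted ~6G
GLIDEINS=25000
KB_PER_GLIDEIN=250
echo "$(( GLIDEINS * KB_PER_GLIDEIN / 1024 )) MB"   # ~6103 MB, i.e. ~6 GB
```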

19 System considerations: Does not need to run as root (although it can); make sure the host cert is readable by that user. Must be on the public IP network: each collector listens on its own well defined port, which must be reachable by all glideins (WAN); the Negotiator has a dynamically assigned port, which must be reachable by the submit nodes (schedds). The firewall must be open at least for these. Condor will use a large number of network sockets and will overwhelm most firewalls; to minimize risk due to Condor bugs, consider disabling stateful firewalls (e.g. iptables).

20 Security considerations: Cannot be firewalled, so endpoint security is a must. GSI security (i.e. x509 certs) is used for networking; administrative rights are limited to local users (FS auth). The Collector is the central trust point of the pool: the DNs of all other daemons are whitelisted here, including the schedds, the glideins (i.e. pilot proxies) and the clients (e.g. the glideinwms Frontend).

21 Installing the CM: Two major burdens (for a basic install): the collector tree and the security setup. The glideinwms installer helps with both. Starting from a Condor tarball, as any user (e.g. non-root), is highly recommended; an easy-to-use update cmdline tool is available, too. An RPM install is also an option: easy to keep up-to-date (i.e. yum update), but you will need to configure by hand, and it will run as root (unless you hack the startup script).

22 Collector tree setup: In a nutshell, for each secondary collector, tell the Master to start a collector on a different port, and forward its ClassAds to the main Collector. Repeat N times:
    COLLECTORXXX = $(COLLECTOR)
    COLLECTORXXX_ENVIRONMENT = "_CONDOR_COLLECTOR_LOG=$(LOG)/CollectorXXXLog"
    COLLECTORXXX_ARGS = -f -p YYYY
    DAEMON_LIST = $(DAEMON_LIST) COLLECTORXXX
    # forward ads to the main collector
    # (this is ignored by the main collector, since the address matches itself)
    CONDOR_VIEW_HOST = $(COLLECTOR_HOST)

23 Security setup (1): In a nutshell: configure basic GSI (i.e. point to the CAs and the host cert); set up authorization (i.e. switch to a whitelist); whitelist all DNs; enable GSI. DN whitelisting is a bit annoying: it must be done in two places, in condor_config and in condor_mapfile (and it is a regexp there!). glideinwms provides a cmdline tool for this.
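Since a condor_mapfile entry is a regular expression, anchoring and escaping matter. A minimal illustration with grep (the DN below is hypothetical, and grep merely stands in for Condor's regexp matching):

```shell
# Hypothetical daemon DN
DN='/DC=org/DC=doegrids/OU=Services/CN=collector.example.org'

# Anchored, and with the dots escaped, the pattern matches this DN only
echo "$DN" | grep -qE '^/DC=org/DC=doegrids/OU=Services/CN=collector\.example\.org$' \
  && echo "mapped"

# An unescaped '.' would also match unintended DNs (e.g. collectorXexample.org),
# one reason to let the glideinwms cmdline tool edit the mapfile for you
```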

24 Security setup (2):
    # condor_config.local
    # Configure GSI
    CERTIFICATE_MAPFILE=/home/condor/glidecondor/certs/condor_mapfile
    GSI_DAEMON_TRUSTED_CA_DIR=/etc/grid-security/certificates
    GSI_DAEMON_CERT = /home/condor/.globus/hostcert.pem
    GSI_DAEMON_KEY = /home/condor/.globus/hostkey.pem
    # Force whitelisting
    DENY_WRITE = anonymous@*
    DENY_ADMINISTRATOR = anonymous@*
    DENY_DAEMON = anonymous@*
    DENY_NEGOTIATOR = anonymous@*
    DENY_CLIENT = anonymous@*
    ALLOW_ADMINISTRATOR = $(CONDOR_HOST)
    ALLOW_WRITE = *
    USE_VOMS_ATTRIBUTES = False # use only pilot DN, not FQAN
    # list all DNs (repeated x N)
    GSI_DAEMON_NAME=$(GSI_DAEMON_NAME),DNXXX
    # enable GSI (also enable local auth)
    SEC_DEFAULT_AUTHENTICATION_METHODS = FS,GSI
    SEC_DEFAULT_AUTHENTICATION = REQUIRED
    SEC_DEFAULT_ENCRYPTION = OPTIONAL
    SEC_DEFAULT_INTEGRITY = REQUIRED
    # optionally, relax client and read settings

    # condor_mapfile
    GSI "^DNXXX$" UIDXXX   (repeated x N)
    GSI (.*) anonymous
    FS (.*) \1

25 Installing with Q&A installer:
    ~/glideinwms/install$ ./glideinwms_install
    Please select: 4    [4] User Pool Collector
    Where do you have the Condor tarball? /home/condor/downloads/condor x86_rhap_5-stripped.tar.gz
    Where do you want to install it?: [/home/condor/glidecondor] /home/condor/glidecondor
    If something goes wrong with Condor, who should get email about it?: me@my
    Do you want to split the config files between condor_config and condor_config.local?: (y/n) [y] y
    Do you want to get it from VDT?: (y/n) y
    Do you have already a VDT installation?: (y/n) y
    Where is the VDT installed?: /etc/osg/wn-client
    Will you be using a proxy or a cert? (proxy/cert) cert
    Where is your certificate located?: /home/condor/.globus/hostcert.pem
    Where is your certificate key located?: /home/condor/.globus/hostkey.pem
    My DN = 'DN1'
    DN: DNXXX                     (repeated x N)
    nickname: [condor001] uidxxx
    Is this a trusted Condor daemon?: (y/n) y
    DN:
    How many slave collectors do you want?: [5] 200
    What name would you like to use for this pool?: [My pool] MyVO
    What port should the collector be running?: [9618] 9618
You can also add the DNs as an independent step.

26 Maintenance: If you need to add more DNs, use the cmdline tool glidecondor_adddn:
    ~/glideinwms/install$ ./glidecondor_adddn -daemon "DN of Schedd A" "DNA" UIDA
    Configuration files changed.
    Remember to reconfig the affected Condor daemons.
To upgrade the Condor binaries, use the cmdline tool glidecondor_upgrade:
    ~/glideinwms/install$ ./glidecondor_upgrade ~/Downloads/condor x86_rhap_5-stripped.tar.gz
    Will update Condor in /home/condor/glidecondor
    Creating backup dir
    Putting new binaries in place
    Finished successfully
    Old binaries can be found in /home/condor/glidecondor/old _13

27 Starting Condor: The installer will start Condor for you, but you should still know how to stop and start it by hand. To start Condor, run ~/glidecondor/start_condor.sh. To stop Condor, use condor_off -daemon master. Finally, to force Condor to re-read the config, run ~/glidecondor/sbin/condor_reconfig.

28 Condor Submit node(s)

29 Refresher - Submit node(s): A submit node is defined by the schedd, which holds the user jobs. Shadows will be started as the jobs are matched to glideins, one per running job. At least one submit node is needed, but there may be many. [Diagram: Submit node with Schedd and Shadows]

30 Network use: Glideins must contact the submit node in order to run jobs, both with the standard protocol and with CCB. Each shadow normally uses 2 random ports: not firewall friendly, and it can be a problem over O(10k) jobs. Newer versions of Condor support the shared port daemon: it listens on a single port and forwards the sockets to the appropriate local process. It does not reduce the number of sockets, though, so firewalls can get overwhelmed anyhow (see the CM slides).
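To see why the per-shadow ports matter at scale, a tiny arithmetic sketch (the job count is an illustrative assumption):

```shell
# Without shared_port: ~2 random listening ports per shadow, one shadow per
# running job (figures from the slide)
RUNNING_JOBS=10000
echo $(( RUNNING_JOBS * 2 ))   # 20000 ports to allow through a firewall
# With shared_port all of this collapses onto a single listening port;
# the number of sockets stays the same, only the listening endpoints change.
```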

31 Security considerations: Like with the CM, must use endpoint security. The Schedd and the CM must whitelist each other (certificate DN based). AuthZ with the glideins is indirect: no need to whitelist the glidein DN(s), since the Collector trusts the glidein and the Schedd trusts the Collector (only startds can use indirect AuthZ). The Schedd also must whitelist any clients (e.g. the VO Frontend). Local users use FS auth (i.e. UID based). [Diagram: Submit node (Schedd), Central manager (Collector, Negotiator), glidein]

32 Hardware needs: The submit node is memory hungry: ~1M per running job due to the shadows, and O(10k) per job in the queue for ClassAds (the actual need depends on how many additional VO attributes are used). The Schedd can use a fast CPU (it is single threaded); the shadows are very light CPU users. Jobs may put a substantial IO load on the HDD, depending on how much data is being produced and how short the jobs are. And the above is just for Condor: the VO may have portal software or actual interactive users, so make sure the remaining HW is adequate for these.
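Putting the slide's memory figures together for a hypothetical load (both job counts are assumptions, not recommendations):

```shell
# ~1 MB per running job (shadow) + ~10 KB per queued job (ClassAds)
RUNNING=10000
QUEUED=50000
echo "$(( RUNNING * 1 + QUEUED * 10 / 1024 )) MB"   # 10488 MB, i.e. ~10.5 GB
```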

33 User account considerations: Users must be able to launch condor_submit locally on the submit node; remote submission is not recommended (and disabled by default). The VO must decide how to do it: SSHd (i.e. interactive use) or a portal (e.g. CMS CRABServer); either way it is still local from the Condor point of view. You will need one UID per user; non-UID based auth is possible, but not recommended (and not supported out of the box). There is no need to create the user accounts before installing Condor, but do plan for it.

34 Schedd is a superuser: The Schedd must run as root (euid==0, even as it drops ruid to condor), so it can switch UID as needed to access user files. The same holds for the shadows (but with ruid set to the job user). The host cert thus must be owned by root.

35 Installing the submit node: Two major burdens (for a basic install): the shared port daemon and the security setup. The glideinwms installer helps with both. Starting from a Condor tarball (should be run as root) is highly recommended; an easy-to-use update cmdline tool is available, too. An RPM install is also an option: easy to keep up-to-date (i.e. yum update), but you will need to configure by hand.

36 Shared port daemon: Not enabled by default in Condor. In a nutshell: pick a port for it, enable it, and add it to the list of daemons to start:
    # condor_config.local
    # Enable shared_port_daemon
    SHARED_PORT_ARGS = -p
    USE_SHARED_PORT = True
    DAEMON_LIST = $(DAEMON_LIST) SHARED_PORT

37 Security setup (1): In a nutshell: configure basic GSI (i.e. point to the CAs and the host cert); enable match authentication; set up authorization (i.e. switch to a whitelist); whitelist all DNs; enable GSI. DN whitelisting is a bit annoying: it must be done in two places, in condor_config and in condor_mapfile (and it is a regexp there!). glideinwms provides a cmdline tool for this.

38 Security setup (2):
    # condor_config.local
    # Configure GSI
    CERTIFICATE_MAPFILE=/opt/glidecondor/certs/condor_mapfile
    GSI_DAEMON_TRUSTED_CA_DIR=/etc/grid-security/certificates
    GSI_DAEMON_CERT = /etc/grid-security/hostcert.pem
    GSI_DAEMON_KEY = /etc/grid-security/hostkey.pem
    # Enable match authentication
    SEC_ENABLE_MATCH_PASSWORD_AUTHENTICATION = TRUE
    # Force whitelisting
    DENY_WRITE = anonymous@*
    # see CM slides for details
    # list all DNs (repeated x N)
    GSI_DAEMON_NAME=$(GSI_DAEMON_NAME),DNXXX
    # enable GSI (also enable local auth)
    SEC_DEFAULT_AUTHENTICATION_METHODS = FS,GSI
    SEC_DEFAULT_AUTHENTICATION = REQUIRED
    SEC_DEFAULT_ENCRYPTION = OPTIONAL
    SEC_DEFAULT_INTEGRITY = REQUIRED
    # optionally, relax client and read settings

    # condor_mapfile
    GSI "^DNXXX$" UIDXXX   (repeated x N)
    GSI (.*) anonymous
    FS (.*) \1

39 Network optimization settings: Since glideins are often behind firewalls, the glidein Startd setup is optimized to avoid incoming connections and UDP. The Schedd must also play along:
    # condor_config.local
    # Reverse protocol direction
    STARTD_SENDS_ALIVES = True
    # Avoid UDP
    SCHEDD_SEND_VACATE_VIA_TCP = True

40 Installing with Q&A installer:
    ~/glideinwms/install$ ./glideinwms_install
    Please select: 5    [5] User Schedd
    Which user should Condor run under?: [condor] condor
    Where do you have the Condor tarball? /root/condor x86_rhap_5-stripped.tar.gz
    Where do you want to install it?: [/home/condor/glidecondor] /opt/glidecondor
    If something goes wrong with Condor, who should get email about it?: me@my
    Do you want to split the config files between condor_config and condor_config.local?: (y/n) [y] y
    Do you want to get it from VDT?: (y/n) y
    Do you have already a VDT installation?: (y/n) y
    Where is the VDT installed?: /etc/osg/wn-client
    Will you be using a proxy or a cert? (proxy/cert) cert
    Where is your certificate located?: /etc/grid-security/hostcert.pem
    Where is your certificate key located?: /etc/grid-security/hostkey.pem
    My DN = 'DN1'
    DN: DNXXX                     (repeated x N)
    nickname: [condor001] uidxxx
    Is this a trusted Condor daemon?: (y/n) y
    DN:
    What node is the collector running on (i.e. CONDOR_HOST)?: collectornode.mydomain
    Do you want to enable the shared_port_daemon?: (y/n) y
    What port should it use?: [9615] 9615
    How many secondary schedds do you want?: [9] 0
You can also add the DNs as an independent step.

41 Maintenance: If you need to add more DNs, use the cmdline tool glidecondor_adddn (do not use -daemon for a client's DN):
    ~/glideinwms/install$ ./glidecondor_adddn -daemon "DN of Schedd A" "DNA" UIDA
    Configuration files changed.
    Remember to reconfig the affected Condor daemons.
To upgrade the Condor binaries, use the cmdline tool glidecondor_upgrade:
    ~/glideinwms/install$ ./glidecondor_upgrade ~/Downloads/condor x86_rhap_5-stripped.tar.gz
    Will update Condor in /home/condor/glidecondor
    Creating backup dir
    Putting new binaries in place
    Finished successfully
    Old binaries can be found in /home/condor/glidecondor/old _13

42 Starting Condor: The installer will start Condor for you, but you should still know how to stop and start it by hand. The installer has created an init.d script for you: /etc/init.d/condor start|stop. To force Condor to reload its config, still use /opt/glidecondor/sbin/condor_reconfig. All as root.

43 Fine tuning

44 Fine tuning: The previous slides provide only a basic setup (although the glideinwms installer does some basic tuning). You will likely want to tune the system further: proper limits on the submit node, default job attributes, sanity checks, priority tuning. Not part of this talk; will go into details tomorrow.

45 Integration with OSG Accounting

46 OSG Accounting: OSG tries to keep accurate accounting information of who used what resources, using GRATIA.

47 Per-user accounting: OSG has per-user accounting, too. With glideins, this level of detail is lost: only the pilot proxy is seen by OSG (sites).

48 The glidein GRATIA probe: OSG thus asks glidein operators to install a dedicated probe alongside the glidein schedd(s), which will provide per-user accounting info to the OSG GRATIA server. It is optimized for use with the OSG glidein factory. [Diagram: Submit node (Schedd, GRATIA Probe) reporting to the OSG GRATIA Server]

49 Installing the GRATIA probe: In a nutshell: register the submit node with the GOC, tweak the Condor config, yum install gratia-probe-condor, and configure GRATIA.

50 Condor changes for GRATIA: GRATIA gets its information from the history logs and requires one file per terminated job for efficiency. GRATIA also needs to know where the job ran, so an additional attribute is added to the job ClassAd (more general details on this tomorrow):
    # condor_config.local
    PER_JOB_HISTORY_DIR = /var/lib/gratia/data
    JOBGLIDEIN_ResourceName = \
      "$$([IfThenElse(IsUndefined(TARGET.GLIDEIN_ResourceName), \
        IfThenElse(IsUndefined(TARGET.GLIDEIN_Site), \
          FileSystemDomain, TARGET.GLIDEIN_Site), \
        TARGET.GLIDEIN_ResourceName)])"
    SUBMIT_EXPRS = $(SUBMIT_EXPRS) JOBGLIDEIN_ResourceName
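The nested IfThenElse above just implements a fallback chain: use GLIDEIN_ResourceName if defined, else GLIDEIN_Site, else FileSystemDomain. A shell sketch of the same logic (variable values are hypothetical, and the function is only an illustration of the fallback order, not part of the GRATIA setup):

```shell
# Same fallback order as the JOBGLIDEIN_ResourceName ClassAd expression
resource_name() {
  echo "${GLIDEIN_ResourceName:-${GLIDEIN_Site:-$FileSystemDomain}}"
}

FileSystemDomain='submit.example.org'
unset GLIDEIN_ResourceName GLIDEIN_Site
resource_name                  # submit.example.org (last-resort fallback)

GLIDEIN_Site='SITE_A'
resource_name                  # SITE_A (site known, resource name not)
```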

51 GRATIA configuration: Essentially just tell GRATIA what name you have registered with the GOC, then enable it. You also need to tell it where to find Condor:
    # /etc/gratia/condor/probeconfig
    SiteName="VOX_glidein_node1"
    EnableProbe="1"
    # add this line to allow user jobs without a proxy
    MapUnknownToGroup="1"

    # /root/setup.sh
    source /etc/profile.d/condor.sh

52 The End

53 Pointers
The official glideinwms project Web page is
The glideinwms development team is reachable at
Condor Home Page
Condor support

54 Acknowledgments: The glideinwms is a CMS-led project developed mostly at FNAL, with contributions from UCSD and ISI. The glideinwms factory operations at UCSD are sponsored by OSG. The funding comes from NSF, DOE and the UC system.


More information

Cloud Computing. Summary

Cloud Computing. Summary Cloud Computing Lectures 2 and 3 Definition of Cloud Computing, Grid Architectures 2012-2013 Summary Definition of Cloud Computing (more complete). Grid Computing: Conceptual Architecture. Condor. 1 Cloud

More information

glideinwms: Quick Facts

glideinwms: Quick Facts glideinwms: Quick Facts glideinwms is an open-source Fermilab Computing Sector product driven by CMS Heavy reliance on HTCondor from UW Madison and we work closely with them http://tinyurl.com/glideinwms

More information

Factory Ops Site Debugging

Factory Ops Site Debugging Factory Ops Site Debugging This talk shows detailed examples of how we debug site problems By Jeff Dost (UCSD) Factory Ops Site Debugging 1 Overview Validation Rundiff Held Waiting Pending Unmatched Factory

More information

Singularity in CMS. Over a million containers served

Singularity in CMS. Over a million containers served Singularity in CMS Over a million containers served Introduction The topic of containers is broad - and this is a 15 minute talk! I m filtering out a lot of relevant details, particularly why we are using

More information

Condor-G: HTCondor for grid submission. Jaime Frey (UW-Madison), Jeff Dost (UCSD)

Condor-G: HTCondor for grid submission. Jaime Frey (UW-Madison), Jeff Dost (UCSD) Condor-G: HTCondor for grid submission Jaime Frey (UW-Madison), Jeff Dost (UCSD) Acknowledgement These slides are heavily based on the presentation Jaime Frey gave at UCSD in Feb 2011 http://www.t2.ucsd.edu/twiki2/bin/view/main/glideinfactory1111

More information

First Principles Vulnerability Assessment. Motivation

First Principles Vulnerability Assessment. Motivation First Principles Vulnerability Assessment James A. Kupsch Barton P. Miller Computer Sciences Department University of Wisconsin Elisa Heymann Eduardo César Computer Architecture and Operating Systems Department

More information

Eurogrid: a glideinwms based portal for CDF data analysis - 19th January 2012 S. Amerio. (INFN Padova) on behalf of Eurogrid support group

Eurogrid: a glideinwms based portal for CDF data analysis - 19th January 2012 S. Amerio. (INFN Padova) on behalf of Eurogrid support group Eurogrid: a glideinwms based portal for CDF data analysis - 19th January 2012 S. Amerio (INFN Padova) on behalf of Eurogrid support group CDF computing model CDF computing model is based on Central farm

More information

Corral: A Glide-in Based Service for Resource Provisioning

Corral: A Glide-in Based Service for Resource Provisioning : A Glide-in Based Service for Resource Provisioning Gideon Juve USC Information Sciences Institute juve@usc.edu Outline Throughput Applications Grid Computing Multi-level scheduling and Glideins Example:

More information

Scalability and interoperability within glideinwms

Scalability and interoperability within glideinwms Journal of Physics: Conference Series Scalability and interoperability within glideinwms To cite this article: D Bradley et al 2010 J. Phys.: Conf. Ser. 219 062036 View the article online for updates and

More information

Containerized Cloud Scheduling Environment

Containerized Cloud Scheduling Environment University of Victoria Engineering & Computer Science Co-op Work Term Report Fall 2017 Containerized Cloud Scheduling Environment Department of Physics University of Victoria Victoria, BC Tahya Weiss-Gibbons

More information

Cloud Computing with HTCondor

Cloud Computing with HTCondor Cloud Computing with HTCondor Background At the University of Liverpool, the Advanced Research Computing team support a variety of research computing facilities including two HPC clusters and a high throughput

More information

Edinburgh (ECDF) Update

Edinburgh (ECDF) Update Edinburgh (ECDF) Update Wahid Bhimji On behalf of the ECDF Team HepSysMan,10 th June 2010 Edinburgh Setup Hardware upgrades Progress in last year Current Issues June-10 Hepsysman Wahid Bhimji - ECDF 1

More information

Veeam Cloud Connect. Version 8.0. Administrator Guide

Veeam Cloud Connect. Version 8.0. Administrator Guide Veeam Cloud Connect Version 8.0 Administrator Guide June, 2015 2015 Veeam Software. All rights reserved. All trademarks are the property of their respective owners. No part of this publication may be reproduced,

More information

Interfacing HTCondor-CE with OpenStack: technical questions

Interfacing HTCondor-CE with OpenStack: technical questions Interfacing HTCondor-CE with OpenStack: technical questions Jose Caballero HTCondor Week 2017 Disclaimer facts: This work was done under the umbrella of OSG Technologies Investigations. So there were other

More information

Primer for Site Debugging

Primer for Site Debugging Primer for Site Debugging This talk introduces key concepts and tools used in the following talk on site debugging By Jeff Dost (UCSD) glideinwms training Primer for Site Debugging 1 Overview Monitoring

More information

Grids and Security. Ian Neilson Grid Deployment Group CERN. TF-CSIRT London 27 Jan

Grids and Security. Ian Neilson Grid Deployment Group CERN. TF-CSIRT London 27 Jan Grids and Security Ian Neilson Grid Deployment Group CERN TF-CSIRT London 27 Jan 2004-1 TOC Background Grids Grid Projects Some Technical Aspects The three or four A s Some Operational Aspects Security

More information

Grid Compute Resources and Grid Job Management

Grid Compute Resources and Grid Job Management Grid Compute Resources and Job Management March 24-25, 2007 Grid Job Management 1 Job and compute resource management! This module is about running jobs on remote compute resources March 24-25, 2007 Grid

More information

A Virtual Comet. HTCondor Week 2017 May Edgar Fajardo On behalf of OSG Software and Technology

A Virtual Comet. HTCondor Week 2017 May Edgar Fajardo On behalf of OSG Software and Technology A Virtual Comet HTCondor Week 2017 May 3 2017 Edgar Fajardo On behalf of OSG Software and Technology 1 Working in Comet What my friends think I do What Instagram thinks I do What my boss thinks I do 2

More information

Securing A Basic HTCondor Pool

Securing A Basic HTCondor Pool Securing A Basic HTCondor Pool Basic Concepts You have an HTCondor pool Personal HTCondor (1 node) 1000 node cluster Who can use your pool? Basic Concepts Who can use it is really two concepts: The Who

More information

First evaluation of the Globus GRAM Service. Massimo Sgaravatto INFN Padova

First evaluation of the Globus GRAM Service. Massimo Sgaravatto INFN Padova First evaluation of the Globus GRAM Service Massimo Sgaravatto INFN Padova massimo.sgaravatto@pd.infn.it Draft version release 1.0.5 20 June 2000 1 Introduction...... 3 2 Running jobs... 3 2.1 Usage examples.

More information

What s new in Condor? What s c Condor Week 2010

What s new in Condor? What s c Condor Week 2010 What s new in Condor? What s c Condor Week 2010 Condor Project Computer Sciences Department University of Wisconsin-Madison What s new in Condor? What s coming up? Condor Week 2010 Condor Project Computer

More information

glideinwms experience with glexec

glideinwms experience with glexec Home Search Collections Journals About Contact us My IOPscience glideinwms experience with glexec This article has been downloaded from IOPscience. Please scroll down to see the full text article. 2012

More information

Andrej Filipčič

Andrej Filipčič Singularity@SiGNET Andrej Filipčič SiGNET 4.5k cores, 3PB storage, 4.8.17 kernel on WNs and Gentoo host OS 2 ARC-CEs with 700TB cephfs ARC cache and 3 data delivery nodes for input/output file staging

More information

NIC Chile Secondary DNS Service History and Evolution

NIC Chile Secondary DNS Service History and Evolution NIC Chile Secondary DNS Service History and Evolution Marco Díaz OARC Buenos Aires 2016 A little bit of history NIC Chile started to offer secondary service as a way to improve the local Internet. Launched

More information

Improvements to Configuration. John (TJ) Knoeller Condor Week 2014

Improvements to Configuration. John (TJ) Knoeller Condor Week 2014 Improvements to Configuration John (TJ) Knoeller Condor Week 2014 Usability Goals for 8.1/8.2 Backward compatible Small Less surprising More powerful 2 Backward compatible Old configuration files work

More information

Upgrade Guide. This document details the upgrade process for customers moving from the full version of OnApp Cloud v2.3.1 to v2.3.2.

Upgrade Guide. This document details the upgrade process for customers moving from the full version of OnApp Cloud v2.3.1 to v2.3.2. Upgrade Guide v2.3.2 This document details the upgrade process for customers moving from the full version of OnApp Cloud v2.3.1 to v2.3.2. It explains important changes to the backup system, network config

More information

30 Nov Dec Advanced School in High Performance and GRID Computing Concepts and Applications, ICTP, Trieste, Italy

30 Nov Dec Advanced School in High Performance and GRID Computing Concepts and Applications, ICTP, Trieste, Italy Advanced School in High Performance and GRID Computing Concepts and Applications, ICTP, Trieste, Italy Why the Grid? Science is becoming increasingly digital and needs to deal with increasing amounts of

More information

Introduction to Programming and Computing for Scientists

Introduction to Programming and Computing for Scientists Oxana Smirnova (Lund University) Programming for Scientists Tutorial 4b 1 / 44 Introduction to Programming and Computing for Scientists Oxana Smirnova Lund University Tutorial 4b: Grid certificates and

More information

Singularity tests at CC-IN2P3 for Atlas

Singularity tests at CC-IN2P3 for Atlas Centre de Calcul de l Institut National de Physique Nucléaire et de Physique des Particules Singularity tests at CC-IN2P3 for Atlas Vamvakopoulos Emmanouil Journées LCG-France, 22-24 Novembre 2017, LPC

More information

High Throughput WAN Data Transfer with Hadoop-based Storage

High Throughput WAN Data Transfer with Hadoop-based Storage High Throughput WAN Data Transfer with Hadoop-based Storage A Amin 2, B Bockelman 4, J Letts 1, T Levshina 3, T Martin 1, H Pi 1, I Sfiligoi 1, M Thomas 2, F Wuerthwein 1 1 University of California, San

More information

MongoDB Security Checklist

MongoDB Security Checklist MongoDB Security Checklist Tim Vaillancourt Sr Technical Operations Architect, Percona Speaker Name `whoami` { name: tim, lastname: vaillancourt, employer: percona, techs: [ mongodb, mysql, cassandra,

More information

The EU DataGrid Fabric Management

The EU DataGrid Fabric Management The EU DataGrid Fabric Management The European DataGrid Project Team http://www.eudatagrid.org DataGrid is a project funded by the European Union Grid Tutorial 4/3/2004 n 1 EDG Tutorial Overview Workload

More information

VMware AirWatch Content Gateway for Linux. VMware Workspace ONE UEM 1811 Unified Access Gateway

VMware AirWatch Content Gateway for Linux. VMware Workspace ONE UEM 1811 Unified Access Gateway VMware AirWatch Content Gateway for Linux VMware Workspace ONE UEM 1811 Unified Access Gateway You can find the most up-to-date technical documentation on the VMware website at: https://docs.vmware.com/

More information

Certificate Authorities: Information and Usage

Certificate Authorities: Information and Usage Certificate Authorities: Information and Usage Marcus Christie December 16, 2003 1 Introduction It is assumed that the reader knows, basically, what a certificate authority (CA) is. This document describes

More information

GLOBUS TOOLKIT SECURITY

GLOBUS TOOLKIT SECURITY GLOBUS TOOLKIT SECURITY Plamen Alexandrov, ISI Masters Student Softwarepark Hagenberg, January 24, 2009 TABLE OF CONTENTS Introduction (3-5) Grid Security Infrastructure (6-15) Transport & Message-level

More information

Frequently Asked Questions About Performance Monitor

Frequently Asked Questions About Performance Monitor APPENDIXA Frequently Asked Questions About Performance Monitor The following topics answer common questions about Performance monitor and contain troubleshooting tips: Installation, page A-1 Importing,

More information

Building Campus HTC Sharing Infrastructures. Derek Weitzel University of Nebraska Lincoln (Open Science Grid Hat)

Building Campus HTC Sharing Infrastructures. Derek Weitzel University of Nebraska Lincoln (Open Science Grid Hat) Building Campus HTC Sharing Infrastructures Derek Weitzel University of Nebraska Lincoln (Open Science Grid Hat) HCC: Campus Grids Motivation We have 3 clusters in 2 cities. Our largest (4400 cores) is

More information

Intellicus Cluster and Load Balancing- Linux. Version: 18.1

Intellicus Cluster and Load Balancing- Linux. Version: 18.1 Intellicus Cluster and Load Balancing- Linux Version: 18.1 1 Copyright 2018 Intellicus Technologies This document and its content is copyrighted material of Intellicus Technologies. The content may not

More information

NGFW Security Management Center

NGFW Security Management Center NGFW Security Management Center Release Notes 6.3.4 Revision A Contents About this release on page 2 System requirements on page 2 Build version on page 3 Compatibility on page 5 New features on page 5

More information

Monitoring Primer HTCondor Week 2017 Todd Tannenbaum Center for High Throughput Computing University of Wisconsin-Madison

Monitoring Primer HTCondor Week 2017 Todd Tannenbaum Center for High Throughput Computing University of Wisconsin-Madison Monitoring Primer HTCondor Week 2017 Todd Tannenbaum Center for High Throughput Computing University of Wisconsin-Madison Ad types in the condor_collector startd ads An ad for each slot on each machine

More information

Kerberos & HPC Batch systems. Matthieu Hautreux (CEA/DAM/DIF)

Kerberos & HPC Batch systems. Matthieu Hautreux (CEA/DAM/DIF) Kerberos & HPC Batch systems Matthieu Hautreux (CEA/DAM/DIF) matthieu.hautreux@cea.fr Outline Kerberos authentication HPC site environment Kerberos & HPC systems AUKS From HPC site to HPC Grid environment

More information

Things you may not know about HTCondor. John (TJ) Knoeller Condor Week 2017

Things you may not know about HTCondor. John (TJ) Knoeller Condor Week 2017 Things you may not know about HTCondor John (TJ) Knoeller Condor Week 2017 -limit not just for condor_history condor_q -limit Show no more than jobs. Ignored if Schedd is before 8.6 condor_status

More information

VMware AirWatch Content Gateway Guide For Linux

VMware AirWatch Content Gateway Guide For Linux VMware AirWatch Content Gateway Guide For Linux AirWatch v9.2 Have documentation feedback? Submit a Documentation Feedback support ticket using the Support Wizard on support.air-watch.com. This product

More information

Connecting Restricted, High-Availability, or Low-Latency Resources to a Seamless Global Pool for CMS

Connecting Restricted, High-Availability, or Low-Latency Resources to a Seamless Global Pool for CMS Journal of Physics: Conference Series PAPER OPEN ACCESS Connecting Restricted, High-Availability, or Low-Latency Resources to a Seamless Global Pool for CMS To cite this article: J Balcas et al 2017 J.

More information

PROOF-Condor integration for ATLAS

PROOF-Condor integration for ATLAS PROOF-Condor integration for ATLAS G. Ganis,, J. Iwaszkiewicz, F. Rademakers CERN / PH-SFT M. Livny, B. Mellado, Neng Xu,, Sau Lan Wu University Of Wisconsin Condor Week, Madison, 29 Apr 2 May 2008 Outline

More information

Condor and BOINC. Distributed and Volunteer Computing. Presented by Adam Bazinet

Condor and BOINC. Distributed and Volunteer Computing. Presented by Adam Bazinet Condor and BOINC Distributed and Volunteer Computing Presented by Adam Bazinet Condor Developed at the University of Wisconsin-Madison Condor is aimed at High Throughput Computing (HTC) on collections

More information

Grid Authentication and Authorisation Issues. Ákos Frohner at CERN

Grid Authentication and Authorisation Issues. Ákos Frohner at CERN Grid Authentication and Authorisation Issues Ákos Frohner at CERN Overview Setting the scene: requirements Old style authorisation: DN based gridmap-files Overview of the EDG components VO user management:

More information

VMware AirWatch Content Gateway Guide for Linux For Linux

VMware AirWatch Content Gateway Guide for Linux For Linux VMware AirWatch Content Gateway Guide for Linux For Linux Workspace ONE UEM v9.7 Have documentation feedback? Submit a Documentation Feedback support ticket using the Support Wizard on support.air-watch.com.

More information

What s new in HTCondor? What s coming? European HTCondor Workshop June 8, 2017

What s new in HTCondor? What s coming? European HTCondor Workshop June 8, 2017 What s new in HTCondor? What s coming? European HTCondor Workshop June 8, 2017 Todd Tannenbaum Center for High Throughput Computing Department of Computer Sciences University of Wisconsin-Madison Release

More information

Venafi Platform. Architecture 1 Architecture Basic. Professional Services Venafi. All Rights Reserved.

Venafi Platform. Architecture 1 Architecture Basic. Professional Services Venafi. All Rights Reserved. Venafi Platform Architecture 1 Architecture Basic Professional Services 2018 Venafi. All Rights Reserved. Goals 1 2 3 4 5 Architecture Basics: An overview of Venafi Platform. Required Infrastructure: Services

More information

g-eclipse A Framework for Accessing Grid Infrastructures Nicholas Loulloudes Trainer, University of Cyprus (loulloudes.n_at_cs.ucy.ac.

g-eclipse A Framework for Accessing Grid Infrastructures Nicholas Loulloudes Trainer, University of Cyprus (loulloudes.n_at_cs.ucy.ac. g-eclipse A Framework for Accessing Grid Infrastructures Trainer, University of Cyprus (loulloudes.n_at_cs.ucy.ac.cy) EGEE Training the Trainers May 6 th, 2009 Outline Grid Reality The Problem g-eclipse

More information

Configuring the Oracle Network Environment. Copyright 2009, Oracle. All rights reserved.

Configuring the Oracle Network Environment. Copyright 2009, Oracle. All rights reserved. Configuring the Oracle Network Environment Objectives After completing this lesson, you should be able to: Use Enterprise Manager to: Create additional listeners Create Oracle Net Service aliases Configure

More information

Xrootd Monitoring for the CMS Experiment

Xrootd Monitoring for the CMS Experiment Journal of Physics: Conference Series Xrootd Monitoring for the CMS Experiment To cite this article: L A T Bauerdick et al 2012 J. Phys.: Conf. Ser. 396 042058 View the article online for updates and enhancements.

More information

OSGMM and ReSS Matchmaking on OSG

OSGMM and ReSS Matchmaking on OSG OSGMM and ReSS Matchmaking on OSG Condor Week 2008 Mats Rynge rynge@renci.org OSG Engagement VO Renaissance Computing Institute Chapel Hill, NC 1 Overview ReSS The information provider OSG Match Maker

More information

Client tools know everything

Client tools know everything Scheduling, clients Client tools know everything Application database Certificate Certificate Authorised users directory Certificate Policies Grid job management service Data Certificate Certificate Researcher

More information

What s new in Condor? What s coming? Condor Week 2011

What s new in Condor? What s coming? Condor Week 2011 What s new in Condor? What s coming? Condor Week 2011 Condor Project Computer Sciences Department University of Wisconsin-Madison The Condor Torpedo of TRUTH Talk Condor Project Computer Sciences Department

More information

Things you may not know about HTCondor. John (TJ) Knoeller Condor Week 2017

Things you may not know about HTCondor. John (TJ) Knoeller Condor Week 2017 Things you may not know about HTCondor John (TJ) Knoeller Condor Week 2017 -limit not just for condor_history condor_q -limit Show no more than jobs. Ignored if Schedd is before 8.6 condor_status

More information

AutoPyFactory: A Scalable Flexible Pilot Factory Implementation

AutoPyFactory: A Scalable Flexible Pilot Factory Implementation ATL-SOFT-PROC-2012-045 22 May 2012 Not reviewed, for internal circulation only AutoPyFactory: A Scalable Flexible Pilot Factory Implementation J. Caballero 1, J. Hover 1, P. Love 2, G. A. Stewart 3 on

More information

Managing External Identity Sources

Managing External Identity Sources CHAPTER 5 The Cisco Identity Services Engine (Cisco ISE) integrates with external identity sources to validate credentials in user authentication functions, and to retrieve group information and other

More information

How to install Condor-G

How to install Condor-G How to install Condor-G Tomasz Wlodek University of the Great State of Texas at Arlington Abstract: I describe the installation procedure for Condor-G Before you start: Go to http://www.cs.wisc.edu/condor/condorg/

More information

VMware AirWatch Content Gateway for Windows. VMware Workspace ONE UEM 1811 Unified Access Gateway

VMware AirWatch Content Gateway for Windows. VMware Workspace ONE UEM 1811 Unified Access Gateway VMware AirWatch Content Gateway for Windows VMware Workspace ONE UEM 1811 Unified Access Gateway You can find the most up-to-date technical documentation on the VMware website at: https://docs.vmware.com/

More information

Enabling Distributed Scientific Computing on the Campus

Enabling Distributed Scientific Computing on the Campus University of Nebraska - Lincoln DigitalCommons@University of Nebraska - Lincoln Computer Science and Engineering: Theses, Dissertations, and Student Research Computer Science and Engineering, Department

More information

Troubleshooting Grid authentication from the client side

Troubleshooting Grid authentication from the client side Troubleshooting Grid authentication from the client side By Adriaan van der Zee RP1 presentation 2009-02-04 Contents The Grid @NIKHEF The project Grid components and interactions X.509 certificates, proxies

More information

AsteriskNow IPTables Firewall Configuration

AsteriskNow IPTables Firewall Configuration AsteriskNow IPTables Firewall Configuration In a previous guide I discussed how to setup an AsteriskNow server with Polycom phone support. In this guide I will illustrate how to tighten up your server

More information

ALICE Grid Activities in US

ALICE Grid Activities in US ALICE Grid Activities in US 1 ALICE-USA Computing Project ALICE-USA Collaboration formed to focus on the ALICE EMCal project Construction, installation, testing and integration participating institutions

More information

ESET Remote Administrator 6. Version 6.0 Product Details

ESET Remote Administrator 6. Version 6.0 Product Details ESET Remote Administrator 6 Version 6.0 Product Details ESET Remote Administrator 6.0 is a successor to ESET Remote Administrator V5.x, however represents a major step forward, completely new generation

More information

Red Hat Enterprise Linux 7 Getting Started with Cockpit

Red Hat Enterprise Linux 7 Getting Started with Cockpit Red Hat Enterprise Linux 7 Getting Started with Cockpit Getting Started with Cockpit Red Hat Enterprise Linux Documentation Team Red Hat Enterprise Linux 7 Getting Started with Cockpit Getting Started

More information

Leveraging Globus Identity for the Grid. Suchandra Thapa GlobusWorld, April 22, 2016 Chicago

Leveraging Globus Identity for the Grid. Suchandra Thapa GlobusWorld, April 22, 2016 Chicago Leveraging Globus Identity for the Grid Suchandra Thapa GlobusWorld, April 22, 2016 Chicago Open Science Grid Helps researchers speed up their research using high throughput computing methods Helps campus

More information

GROWL Scripts and Web Services

GROWL Scripts and Web Services GROWL Scripts and Web Services Grid Technology Group E-Science Centre r.j.allan@dl.ac.uk GROWL Collaborative project (JISC VRE I programme) between CCLRC Daresbury Laboratory and the Universities of Cambridge

More information

Version Installation Guide. 1 Bocada Installation Guide

Version Installation Guide. 1 Bocada Installation Guide Version 19.4 Installation Guide 1 Bocada Installation Guide Copyright 2019 Bocada LLC. All Rights Reserved. Bocada and BackupReport are registered trademarks of Bocada LLC. Vision, Prism, vpconnect, and

More information

Day 9: Introduction to CHTC

Day 9: Introduction to CHTC Day 9: Introduction to CHTC Suggested reading: Condor 7.7 Manual: http://www.cs.wisc.edu/condor/manual/v7.7/ Chapter 1: Overview Chapter 2: Users Manual (at most, 2.1 2.7) 1 Turn In Homework 2 Homework

More information

CIS 505: Software Systems

CIS 505: Software Systems CIS 505: Software Systems Fall 2017 Assignment 3: Chat server Due on November 3rd, 2017, at 10:00pm EDT 1 Overview For this assignment, you will implement a simple replicated chat server that uses multicast

More information

New Directions and BNL

New Directions and BNL New Directions and HTCondor @ BNL USATLAS TIER-3 & NEW COMPUTING DIRECTIVES William Strecker-Kellogg RHIC/ATLAS Computing Facility (RACF) Brookhaven National Lab May 2016 RACF Overview 2 RHIC Collider

More information

Teradici PCoIP Connection Manager 1.8 and Security Gateway 1.14

Teradici PCoIP Connection Manager 1.8 and Security Gateway 1.14 Teradici PCoIP Connection Manager 1.8 and Security Gateway 1.14 TER1502010/A-1.8-1.14 Contents Document History 4 Who Should Read This Guide? 5 PCoIP Connection Manager and PCoIP Security Gateway Overview

More information

Experiences with Building, Deploying and Running a remotecontrolled

Experiences with Building, Deploying and Running a remotecontrolled Corporate Technology Experiences with Building, Deploying and Running a remotecontrolled easily installable Network Sensor Bernd Grobauer, Siemens CERT Copyright Siemens AG 2007. All rights reserved. Imagine

More information

MSE System and Appliance Hardening Guidelines

MSE System and Appliance Hardening Guidelines MSE System and Appliance Hardening Guidelines This appendix describes the hardening of MSE, which requires some services and processes to be exposed to function properly. This is referred to as MSE Appliance

More information

Bookkeeping and submission tools prototype. L. Tomassetti on behalf of distributed computing group

Bookkeeping and submission tools prototype. L. Tomassetti on behalf of distributed computing group Bookkeeping and submission tools prototype L. Tomassetti on behalf of distributed computing group Outline General Overview Bookkeeping database Submission tools (for simulation productions) Framework Design

More information

EUROPEAN MIDDLEWARE INITIATIVE

EUROPEAN MIDDLEWARE INITIATIVE EUROPEAN MIDDLEWARE INITIATIVE VOMS CORE AND WMS SECURITY ASSESSMENT EMI DOCUMENT Document identifier: EMI-DOC-SA2- VOMS_WMS_Security_Assessment_v1.0.doc Activity: Lead Partner: Document status: Document

More information

NGFW Security Management Center

NGFW Security Management Center NGFW Security Management Center Release Notes 6.5.3 Revision A Contents About this release on page 2 System requirements on page 2 Build number and checksums on page 4 Compatibility on page 5 New features

More information

Grid services. Enabling Grids for E-sciencE. Dusan Vudragovic Scientific Computing Laboratory Institute of Physics Belgrade, Serbia

Grid services. Enabling Grids for E-sciencE. Dusan Vudragovic Scientific Computing Laboratory Institute of Physics Belgrade, Serbia Grid services Dusan Vudragovic dusan@phy.bg.ac.yu Scientific Computing Laboratory Institute of Physics Belgrade, Serbia Sep. 19, 2008 www.eu-egee.org Set of basic Grid services Job submission/management

More information