An investigation of a high availability DPM-based Grid Storage Element


An investigation of a high availability DPM-based Grid Storage Element

Kwong Tat Cheung

August 17, 2017

MSc in High Performance Computing with Data Science
The University of Edinburgh
Year of Presentation: 2017

Abstract

As the data volume of scientific experiments continues to increase, there is a growing need for Grid Storage Elements to provide a reliable and robust storage solution. This work investigates the limitation imposed by the single point of failure in DPM's architecture, and identifies the components which prevent the use of redundant head nodes to provide higher availability. This work also contributes a prototype of a novel high availability DPM architecture, designed using the findings of our investigation.

Contents

1 Introduction
  1.1 Big data in science
  1.2 Storage on the grid
  1.3 The problem
    1.3.1 Challenges in availability
    1.3.2 Limitations in DPM legacy components
  1.4 Aim
  1.5 Project scope
  1.6 Report structure

2 Background
  2.1 DPM and the Worldwide LHC Computing Grid
  2.2 DPM architecture
    2.2.1 DPM head node
    2.2.2 DPM disk node
  2.3 DPM evolution
    2.3.1 DMLite
    2.3.2 Disk Operations Manager Engine
  2.4 Trade-offs in distributed systems
    2.4.1 Implication of CAP Theorem on DPM
  2.5 Concluding Remarks

3 Setting up a legacy-free DPM testbed
  3.1 Infrastructure
  3.2 Initial testbed architecture
  3.3 Testbed specification
  3.4 Creating the VMs
  3.5 Setting up a certificate authority
    3.5.1 Create a CA
    3.5.2 Create the host certificates
    3.5.3 Create the user certificate
  3.6 Nameserver
  3.7 HTTP frontend
  3.8 DMLite adaptors
  3.9 Database and Memcached
  3.10 Creating a VO
  3.11 Establishing trust between the nodes
  3.12 Setting up the file systems and disk pool
  3.13 Verifying the testbed
  3.14 Problems encountered and lessons learned

4 Investigation
  4.1 Automating the failover mechanism
    4.1.1 Implementation
  4.2 Database
    4.2.1 Metadata and operation status
    4.2.2 Issues
    4.2.3 Analysis
    4.2.4 Options
    4.2.5 Recommendation
  4.3 DOME in-memory queues
    4.3.1 Issues
    4.3.2 Options
    4.3.3 Recommendation
  4.4 DOME metadata cache
    4.4.1 Issues
    4.4.2 Options
    4.4.3 Recommendation
  4.5 Recommended architecture for High Availability DPM
    4.5.1 Failover
    4.5.2 Important considerations

5 Evaluation
  5.1 Durability
    5.1.1 Methodology
    5.1.2 Findings
  5.2 Performance
    5.2.1 Methodology
    5.2.2 Findings

6 Conclusions

7 Future work

A Software versions and configurations
  A.1 Core testbed components
  A.2 Test tools
  A.3 Example domehead.conf
  A.4 Example domedisk.conf
  A.5 Example dmlite.conf
  A.6 Example domeadapter.conf
  A.7 Example mysql.conf
  A.8 Example Galera cluster configuration

B Plots

List of Tables

3.1 Network identifiers of VMs in testbed

List of Figures

2.1 Current DPM architecture
2.2 DMLite architecture
2.3 Simplified view of DOME in head node
2.4 Simplified view of DOME in disk node
3.1 Simplified view of architecture of initial testbed
4.1 Failover using keepalived
4.2 Synchronising records with Galera cluster
4.3 Remodeled work flow of the task queues using replicated Redis caches
4.4 Remodeled work flow of the metadata cache using replicated Redis caches
4.5 Recommended architecture for High Availability DPM
5.1 Plots of average rate of operations compared to number of threads
B.1 Average rate for a write operation
B.2 Average rate for a stat operation
B.3 Average rate for a read operation
B.4 Average rate for a delete operation

Acknowledgements

First and foremost, I would like to express my gratitude to Dr Nicholas Johnson for supervising and arranging the budget for this project. Without the guidance and motivation he has provided, the quality of this work would certainly have suffered. I would also like to thank Dr Fabrizio Furano from the DPM development team for putting up with the stream of e-mails I have bombarded him with, and for answering my queries on the inner workings of DPM.

Chapter 1

Introduction

1.1 Big data in science

Big data has become a well-known phenomenon in the age of social media. The vast amount of user-generated content has undeniably influenced the research and advancement of modern distributed computing paradigms [1][2]. However, even before the advent of social media websites, researchers in several scientific fields already faced similar challenges in dealing with the massive amounts of data generated by experiments. One such field is high energy physics, including the Large Hadron Collider (LHC) experiments based at the European Organization for Nuclear Research (CERN). In 2016 alone, an estimated 50 petabytes of data were gathered by the LHC detectors post-filtering [3]. Since the financial resources required to host an infrastructure able to process, store, and analyse this data are far too great for any single organisation, the experiments turned to the grid computing approach.

Grid computing, which is mostly developed and used in academia, follows the same principle as its commercial counterpart, cloud computing: computing resources are provided to end-users remotely and on demand. Similarly, the physical location of the sites which provide the resources, as well as the infrastructure, is abstracted away from the users. From the end-users' perspective, they simply submit their jobs to an appropriate job management system without any knowledge of where the jobs will run or where the data are physically stored. In grid computing, these computing resources are often distributed across multiple locations, where a site that provides data storage capacity is called a Storage Element, and one that provides computation capacity is called a Compute Element.

1.2 Storage on the grid

Grid storage elements have to support some unique requirements found in the grid environment. For example, the grid relies on the concept of Virtual Organisations (VO) for resource allocation and accounting. A VO represents a group of users, not necessarily from the same organisation but usually involved in the same experiment, and manages their membership. Resources on the grid (e.g. storage space provided by a site) are allocated to specific VOs instead of individual users. Storage elements also have to support file transfer protocols that are not commonly used outside of the grid environment, such as GridFTP [4] and xrootd [5]. Various storage management systems were developed for grid storage elements to fulfil these requirements, and one such system is the Disk Pool Manager (DPM) [6].

DPM is a storage management system developed by CERN. It is currently the most widely deployed storage system on Tier 2 sites, providing the Worldwide LHC Computing Grid (WLCG) with around 73 petabytes of storage across 160 instances [7]. The main functionalities of DPM are to provide a straightforward, low-maintenance solution for creating a disk-based grid storage element, and to support remote file and metadata operations using multiple protocols commonly used in the grid environment.

1.3 The problem

This section presents the main challenges for DPM, describes the specific limitations that motivate this work, and outlines the project's aim.

1.3.1 Challenges in availability

Due to limitations in the DPM architecture, the current deployment model supports only one metadata server and command node. This deployment model exposes a single point of failure in a DPM-based storage element. There are several scenarios where this deployment model could affect the availability of a site:

Hardware failure in the host
Software/OS update that results in the host being offline
Retirement or replacement of machines

If any of the scenarios listed above happens to the command node, the entire storage element becomes inaccessible, which ultimately means expensive downtime for the site.

1.3.2 Limitations in DPM legacy components

Some components in DPM were first developed over 20 years ago. The tightly-coupled nature of these components has limited the extensibility of the DPM system and makes it impractical to modify DPM into a multi-server system. As the grid evolves, the number of users and the storage demand have also increased. New software practices and designs have also emerged that could better fulfil the requirements of a high-load storage element. In light of this, the DPM development team has put a considerable amount of effort into modernising the system over the past few years, which has resulted in new components that can bypass some limitations of the legacy stack. The extensibility of these new components has opened up an opportunity to modify the current deployment model, which this work aims to explore.

1.4 Aim

The aim of this work is to explore the possibility of increasing the availability of a DPM-based grid storage element by modifying its current architecture and components. Specifically, this work includes:

An investigation into the availability limitations of the current DPM deployment model.
Our experience of setting up and configuring a legacy-free DPM instance, including a step-by-step guide.
An in-depth analysis of the challenges in enabling a highly available DPM instance, along with potential solutions.
A recommended architecture for a high availability DPM storage element based on the findings of our investigation, along with a prototype testbed for evaluation.

1.5 Project scope

A complete analysis, redesign, and modification of the entire DPM system and of the access protocol frontends DPM supports would not realistically fit into the time frame of this project. As such, this project aims to act as a preliminary study towards the overall effort of implementing a high availability DPM system. As part of the effort to promote a wider adoption of the HTTP ecosystem in the grid environment, this project focuses on providing a high availability solution for the HTTP frontend. However, compatibility with the other access frontends is also taken into consideration in the design process, where possible.

1.6 Report structure

The remainder of the report is structured as follows. Chapter 2 presents the background of DPM, including its deployment environment and information on the components and services which form the DPM system. The evolution of DPM and its current development direction are also discussed. Chapter 3 describes our experience in setting up and configuring a legacy-free DPM instance on our testbed, including a step-by-step guide. Chapter 4 provides an in-depth investigation of the current DPM components which prohibit a high availability deployment model, and describes our suggested modifications. Chapter 5 evaluates the performance and failover behaviour of our prototype high availability testbed. Chapter 6 presents the conclusions of this work, summarising the findings of our investigations and recommendations. Chapter 7 describes some of the future work that is anticipated after the completion of this project.

Chapter 2

Background

DPM is a complex system which includes a number of components and services. As such, before examining potential ways to improve the availability and scalability of a DPM storage element, the architecture and components of DPM must first be understood. This chapter presents an in-depth analysis of DPM, including its architecture, history and evolution, as well as the functionality of each component that makes up a DPM system. Common scenarios which could affect the availability of a distributed system, and the trade-offs in a highly available distributed system, are also discussed in this chapter.

2.1 DPM and the Worldwide LHC Computing Grid

As mentioned in Chapter 1, DPM is designed specifically to allow the setup and management of storage elements on the WLCG. As such, to gain a better understanding of DPM, one must also be familiar with the environment DPM is deployed in. The WLCG is a global e-infrastructure that provides compute and data storage facilities to support the LHC experiments (ALICE, ATLAS, CMS, LHCb). Currently, the WLCG is formed by more than 160 sites, and is organised into three main tiers:

Tier 0 - The main data centre at the European Organisation for Nuclear Research (CERN), where raw data gathered by the detectors are processed and kept on tape.
Tier 1 - Thirteen large-scale data centres holding a subset of the raw data. Tier 1 sites also handle the distribution of data to Tier 2 sites.
Tier 2 - Around 150 universities and smaller scientific institutes providing storage for reconstructed data, and computational support for analysis.

Since DPM supports only disk and not tape storage, it is mostly used in Tier 2 storage elements, storing data files that are used in analysis jobs submitted by physicists. For redundancy and accessibility purposes, popular files often have copies and are distributed

across different sites; in grid terminology, these copies are called replicas. These replicas are stored in filesystems on the DPM disk nodes, where a collection of filesystems forms a DPM disk pool.

2.2 DPM architecture

DPM is a distributed system composed of two types of node: the head node and the disk node. A high-level view of the typical DPM architecture used in most DPM storage elements is shown in Figure 2.1.

Figure 2.1: Current DPM architecture (head node with its protocol frontends httpd, xrootd, GridFTP, SRM and RFIO, its backends DPM, DPNS, DMLite and the DB, communicating with the disk nodes over SOAP)

2.2.1 DPM head node

The head node is the entry point to a DPM storage element; it is responsible for handling file metadata and file access requests that come into the cluster. The head node contains the decision-making logic regarding load balancing, authorisation and authentication, space quota management, file system status, and the physical location of the replicas it manages. In the DPM system, the head node acts as the brain of the cluster and maintains a logical view of the entire file system.

A DPM head node contains a number of components providing different services. The components can be grouped into two categories: frontends, which facilitate access through different protocols, and backends, which provide the underlying functionality.

Protocol frontends

Httpd - DPM uses the Apache HTTP server to allow metadata and file operations through HTTP and WebDAV.
SRM - The Storage Resource Manager [8] daemon that is traditionally used to provide dynamic space allocation and file management to grid sites.
GridFTP, xrootd, RFIO - These frontends provide access to the DPM system through some of the other popular protocols used in the grid environment.

Backends

DPM - The DPM daemon (not to be confused with the DPM system as a whole) handles client file access requests, manages the asynchronous operation queues, and interacts with the data access frontends.
DPNS - The DPM nameserver daemon, which handles file and directory related metadata operations, for example adding or renaming a replica.
MySQL - Two important databases vital to DPM operations are stored in the MySQL backend. The cns_db database contains all the file metadata, the replicas and their locations in the cluster, as well as information on groups and VOs. The dpm_db database stores information on the filesystems on the disk servers, space quotas, and the status of ongoing and pending file access requests. The database can be deployed either on the same host as the head node, or remotely on another host, depending on the expected load.
Memcached - Memcached [9] is an in-memory cache for key-value pairs. In DPM, it is an optional layer that can be set up in front of the MySQL backend to reduce query load on the databases.

2.2.2 DPM disk node

Disk nodes in a DPM storage element host the actual file replicas and serve remote metadata and file access requests from clients. Clients are redirected to the relevant disk nodes by the head node once they have been authenticated and authorised, and never contact the disk nodes without this redirection. A disk node will typically run all the data access frontends supported by the DPM system (e.g. httpd, GridFTP, xrootd, RFIO).

Figure 2.2: DMLite architecture (frontends HTTP/WebDAV, XRootD, GridFTP and RFIO on top of the DMLite framework's namespace management, pool management, pool driver and I/O layers, with interchangeable plugins: legacy DPM, MySQL, Memcached, Hadoop and S3)

2.3 DPM evolution

Since DPM was first developed in the early 2000s, it has gone through several rounds of major refactoring and enhancement. The historical components of DPM, for example the DPM and DPNS daemons, were written a long time ago, and extensibility was not one of the design goals. The daemons also introduced several bottlenecks, such as excessive inter-process communication and threading limitations. As such, a lot of effort has been directed at bypassing the so-called legacy components:

DPM (daemon)
DPNS
SRM
RFIO
Other security and configuration helpers (e.g. CSec)

The most significant changes in the recent iterations are the development of the DMLite framework [10] and the Disk Operations Manager Engine (DOME) [11].

2.3.1 DMLite

DMLite is a plugin-based library that is now at the core of a DPM system. DMLite provides a layer of abstraction above the database, pool management, and I/O access. The architecture of DMLite is shown in Figure 2.2.
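Concretely, a DMLite stack is assembled by listing plugins in its configuration files. The fragment below is only an illustration of that mechanism, using the same LoadPlugin directive that appears later in this report; the plugin and library names for the Memcached layer are assumptions and should be checked against the installed dmlite packages.

    # illustrative /etc/dmlite.conf.d/ entries: the cache plugin is listed
    # before the MySQL namespace plugin so that lookups hit the cache first
    LoadPlugin plugin_memcache /usr/lib64/dmlite/plugin_memcache.so
    LoadPlugin plugin_mysql_ns /usr/lib64/dmlite/plugin_mysql.so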

Figure 2.3: Simplified view of DOME in head node (Apache httpd forwarding /domehead/ requests to the DOME FastCGI daemon via mod_proxy_fcgi and serving /dpm/ through mod_lcgdm_dav and DMLite; DOME's workers, task executor, checksum and file-pull queues, timed logic, request logic and DB access)

By providing an abstraction of the underlying layers, additional plugins can be implemented to support other storage types, such as S3 and HDFS. Perhaps more importantly, DMLite also allows a caching layer to be loaded in front of the database backend by using the Memcached plugin, which can significantly reduce query load on the databases.

2.3.2 Disk Operations Manager Engine

DOME is arguably the most important recent addition to the DPM system because it represents a new direction in DPM development. DOME runs on both the head and disk nodes as a FastCGI daemon; it exposes a set of RESTful APIs which provide the core coordination functions, and it uses HTTP and JSON to communicate with both clients and other nodes in the cluster. By implementing the functionality of the legacy daemons and handling inter-cluster communication itself, DOME makes the legacy components, in theory, optional in a DOME-enabled DPM instance. Simplified views of a DOME-enabled head node and disk node are shown in Figure 2.3 and Figure 2.4, respectively.

The heavy use of in-memory queues and inter-process communication in the legacy components would have made any attempt to modify the single head node deployment model impractical. However, the introduction of DOME has now opened up the possibility of deploying multiple head nodes in a single DPM instance, which will be explored in the next chapter.
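To give a flavour of this interface, the DOME daemon on a head node can be queried with any ordinary HTTPS client. The sketch below assumes the /domehead/ path shown in Figure 2.3, a /command/dome_getspaceinfo URL layout, and a grid user certificate trusted by the node; the exact command names and paths are defined by DOME itself and should be checked against its documentation rather than taken from this sketch.

    curl --cert usercert.pem --key userkey.pem \
         --capath /etc/grid-security/certificates/ \
         "https://dpm-head-1.novalocal/domehead/command/dome_getspaceinfo"

Because DOME speaks plain HTTP and JSON, the response can be inspected or parsed with standard tools, which is also how the head and disk nodes talk to each other.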

Figure 2.4: Simplified view of DOME in disk node (Apache httpd forwarding /domedisk/ requests to the DOME FastCGI daemon via mod_proxy_fcgi and serving /<diskpaths>/ through mod_lcgdm_dav and DMLite; DOME's workers, task executor, external file pull, internal checksum and timed logic, with access to the local disks)

2.4 Trade-offs in distributed systems

Eric Brewer introduced an idea in 2000 which is now widely known as the CAP Theorem. The CAP Theorem states that in distributed systems there is a fundamental trade-off between consistency, availability, and partition tolerance [12]: a distributed system can guarantee at most two of the three properties. An informal definition of each of these guarantees is given below.

Consistency - A read operation should return the most up-to-date result regardless of which node receives the request.
Availability - In the event of node failure, the system should still be able to function, meaning each request will receive a response within a reasonable amount of time.
Partition tolerance - In the event of a network partition, the system will continue to function.

2.4.1 Implication of CAP Theorem on DPM

Since we cannot have all three guarantees, as stated by the CAP Theorem, we need to carefully consider which guarantee we are willing to discard based on our requirements. Availability is our highest priority, since our ultimate aim is to design a DPM architecture that is resilient to head node failure. This means deploying multiple head nodes to increase the availability of the DPM system. DPM relies on records in both the database and the cache to function. In a multiple-head-node architecture, these data would likely have to be synchronised on all the head nodes. As such, to ensure operational correctness, consistency is also one of our requirements. Although any network partition happening in a distributed system is less than ideal,

realistically, as DPM is mostly deployed on machines in close proximity (for instance, in a single data centre as opposed to over a WAN), network partition is less of an issue. Any network issue that happens in a typical DPM environment would likely affect all the nodes in the system. Based on the reasoning above, we believe our architecture should prioritise consistency and availability.

2.5 Concluding Remarks

In this chapter, the architecture and core components of DPM were examined. The limitations of the legacy components in DPM and the motivation behind the recent refactoring effort were explained. With the addition of DMLite and DOME, it is now worthwhile to explore whether a multiple-head-node deployment is viable with a legacy-free DPM instance. Lastly, we have explained the reasoning behind prioritising consistency and availability over partition tolerance in our new architecture.

Chapter 3

Setting up a legacy-free DPM testbed

As DPM is composed of a number of components and services, with many opportunities for misconfiguration that would result in a non-functioning system, manual configuration is discouraged. Instead, DPM storage elements are usually set up using the supplied Puppet manifest templates with the Puppet configuration manager. However, since this project aims to explore the possibility of the novel DOME-only, multiple-head-node DPM deployment model, some of the components had to be compiled from source and then installed and configured manually.

The testbed serves three purposes. Firstly, we want to find out whether DPM functions correctly if we exclude all the legacy components, meaning that our DPM instance will only include DMLite, DOME, MySQL (MariaDB on CentOS 7), and the Apache HTTP frontend. Secondly, once we have verified that our legacy-free testbed is functional and have redesigned some of the components, the testbed will serve as a foundation for incorporating additional head nodes and the changes in DPM necessary to facilitate this new deployment model. Lastly, the testbed will be used to evaluate the performance impact of the new design.

As DOME has only recently gone into production and no other grid site has yet adopted a DOME-only approach, to the best of our knowledge no one has attempted this outside of the DPM development team. As such, we believe our experience in setting up the testbed will be valuable both to grid sites that may later upgrade to DOME, and as feedback for the DPM developers. The remainder of this chapter describes the steps that were taken to successfully set up a DOME-only DPM testbed, including details on the infrastructure, specifications, and configurations. Major issues encountered during the process are also discussed.

3.1 Infrastructure

For ease of testing and deployment, virtual machines (VMs) were used instead of physical machines. This decision will certainly impact the performance of the cluster and will be taken into account during the performance evaluation. All VMs used in the testbed are hosted on the University's OpenStack instance.

3.2 Initial testbed architecture

As mentioned earlier in this chapter, our first objective is to verify the functionality of a legacy-free DPM instance. As such, our initial testbed has only one head node. Redundant head nodes will be included in the testbed once we have verified the functionality of the single head node instance. The testbed also includes two disk nodes for proper testing of file systems and pool management. DPM provides the option to host the database server either locally on the head node or remotely on another machine. The remote hosting option will remain open to storage elements, but in our design we will also try to accommodate the local database use-case.

We will also incorporate our own Domain Name System (DNS) server in the testbed. The rationale behind this is, firstly, that we want to evaluate our testbed in isolation: by having our own private DNS server, we will be able to monitor the load on the DNS service and examine whether it becomes a bottleneck in our tests. Secondly, having full control of the DNS service opens up the possibility of hot-swapping the head nodes by changing the IP address mappings in the DNS configuration. The initial architecture of the testbed is shown in Figure 3.1.

3.3 Testbed specification

After consulting with the DPM development team, it was decided that VMs with 4 virtual CPUs (VCPUs) and 8GB of RAM are sufficient for the purposes of this project. Among the VM flavours offered by OpenStack, the m1.large flavour provides 4 VCPUs, 8GB of RAM and 80GB of disk space, which fits our needs perfectly. The nameserver requires minimal disk space and CPU; as such, we have chosen the m1.small flavour, which provides 1 VCPU, 2GB of RAM and 20GB of disk space. All VMs in the testbed run the CentOS 7 operating system. A detailed list of the software used in the testbed and their versions can be found in Appendix A.

Figure 3.1: Simplified view of architecture of initial testbed (a DNS server; one head node running httpd, DMLite with the DOME adaptor, DOME and MariaDB; two disk nodes each running httpd, DMLite with the DOME adaptor and DOME, with local disks)

3.4 Creating the VMs

Four VM instances were created using OpenStack in the nova availability zone (.novalocal domain). We then assigned a unique floating IP address to each of these instances so that they can be accessed from outside of the private network. The hostnames and IPs of these instances will be referenced throughout this chapter and are shown in Table 3.1.

Hostname          FQDN                        Private IP    Floating IP
dpm-nameserver    dpm-nameserver.novalocal
dpm-head-1        dpm-head-1.novalocal
dpm-disk-1        dpm-disk-1.novalocal
dpm-disk-2        dpm-disk-2.novalocal

Table 3.1: Network identifiers of VMs in testbed

The fully qualified domain names (FQDNs) of these nodes are important, as they need to be included in the head and disk nodes' host certificates exactly as they appear; otherwise the host will not be trusted by the other nodes. Since DPM and most of the other grid middleware packages are located in the Extra Packages for Enterprise Linux (EPEL) repository, we need to install the repository on each of these VMs.
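For reference, the VM creation and floating IP assignment described above can also be scripted with the OpenStack command-line client rather than the dashboard. The commands below are a sketch only: the image name, key pair, network names and floating IP are placeholders that must match the local OpenStack project.

    # create one instance using the m1.large flavour chosen above
    openstack server create --flavor m1.large --image CentOS-7-x86_64 \
        --key-name dpm-testbed-key --network private dpm-head-1

    # allocate a floating IP on the external network and attach it to the VM
    openstack floating ip create external
    openstack server add floating ip dpm-head-1 192.0.2.10

The remaining instances are created in the same way with their respective hostnames (and the m1.small flavour for the nameserver).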

Install the EPEL repository and update the packages:

sudo yum install epel-release
sudo yum update

Then enable the EPEL testing repository for the latest versions of DOME and DMLite:

sudo yum-config-manager --enable epel-testing

Install DOME and its dependencies.

On the head node:

sudo yum install dmlite-dpmhead-dome

On the disk nodes:

sudo yum install dmlite-dpmdisk-dome

Make sure SELinux is disabled on all the nodes, as it sometimes interferes with DPM operations. This is done by setting SELINUX=disabled in /etc/sysconfig/selinux. Before we can further configure the nodes, we need to acquire a host certificate for each of the nodes to be used for SSL communication.

3.5 Setting up a certificate authority

DPM requires a valid grid host certificate installed on all its nodes for authentication reasons. Since we do not know how many VMs we will end up using, and to avoid going through the application process with a real CA every time we have to spin up a new VM in the testbed, we decided to set up our own CA to do the signing. It does not matter which host does the signing, as long as the CA is installed on that host and the host has the private key of the CA. In our testbed we used the nameserver to sign certificate requests. To set up a grid CA, install the globus-gsi-cert-utils-progs and globus-simple-ca packages from the Globus Toolkit. These packages can be found in the EPEL repository.

3.5.1 Create a CA

First we use the grid-ca-create command to create a CA with the X.509 distinguished name (DN) "/O=Grid/OU=DPM-Testbed/OU=dpmCA.novalocal/CN=DPM Testbed CA". This will be the CA we use to sign host certificates for all the nodes in the cluster.

Our new CA will have to be installed on every node in the cluster before the nodes will trust any certificate signed by it. To simplify the process, our CA can be packaged into an RPM using the grid-ca-package command, which gives us an RPM package containing our CA and its signing policy that can be distributed and installed on the nodes using yum localinstall.

3.5.2 Create the host certificates

Each of the nodes in the cluster will need its own host certificate. Since we have control of both the CA and the nodes, we can issue all the requests on the nameserver on behalf of all the nodes.

grid-cert-request -host <FQDN of host>

will generate both a private key (hostkey.pem) for that host and a certificate request that we then sign with our CA using the grid-ca-sign command:

grid-ca-sign -in certreq.pem -out hostcert.pem

The hostkey.pem and hostcert.pem files then have to be transferred to the VM that corresponds to the FQDN, and stored in the /etc/grid-security/ directory with the correct permissions.

sudo chmod 400 /etc/grid-security/hostkey.pem
sudo chmod 444 /etc/grid-security/hostcert.pem

The certificate and private key also need to be placed in a location used by DPM:

sudo mkdir /etc/grid-security/dpmmgr
sudo cp /etc/grid-security/hostcert.pem /etc/grid-security/dpmmgr/dpmcert.pem
sudo cp /etc/grid-security/hostkey.pem /etc/grid-security/dpmmgr/dpmkey.pem

Make sure the files are owned by the dpmmgr user:

sudo groupadd -g 151 dpmmgr
sudo useradd -c "DPM manager" -g dpmmgr -u 151 -r -m dpmmgr
sudo chown -R dpmmgr.dpmmgr /etc/grid-security/dpmmgr
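One way to transfer each key pair from the nameserver to its node, sketched here under the assumption that the default centos cloud user has SSH access to the VMs, is to copy the files into that user's home directory and then move them into place on the target node:

    scp hostcert.pem hostkey.pem centos@dpm-head-1.novalocal:
    ssh -t centos@dpm-head-1.novalocal "sudo mv hostcert.pem hostkey.pem /etc/grid-security/"

After the move, apply the chmod, cp and chown steps listed above on that node.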

3.5.3 Create the user certificate

We also need to generate a grid user certificate for communicating with the testbed as a client. This certificate will be used during testing, for instance when supplied to stress-testing tools. For testing purposes, we generate a user certificate without a password to make the testing process easier. This is done by using grid-cert-request with the -nodes switch. Our user certificate has the DN: "/O=Grid/OU=DPM-Testbed/OU=dpmCA.novalocal/OU=DPM Testbed CA/CN=Eric Cheung"

3.6 Nameserver

For our nameserver, we chose the popular BIND DNS server [13]. We discuss the configuration of the DNS server in detail because it is related to how we plan on hot-swapping the head nodes. As a result, it is very important to note how the FQDN of the head node is mapped to its IP address in the configuration.

sudo yum install bind bind-utils

In /etc/named.conf, add all the nodes in our cluster that will use our DNS server to the trusted ACL group, listing each node's private IP address:

acl "trusted" {
    ;   // this nameserver
    ;   // dpm-head-1
    ;   // dpm-disk-1
    ;   // dpm-disk-2
};

Modify the options block:

listen-on port 53 { ; ; };
#listen-on-v6 port 53 { ::1; };

Change allow-query to our trusted group of nodes:

allow-query { trusted; };

Finally, add this to the end of the file:

include "/etc/named/named.conf.local";

Now set up the forward zone for our domain in /etc/named/named.conf.local:

zone "novalocal" {
    type master;
    file "/etc/named/zones/db.novalocal";  # zone file path
};

Then we can create the forward zone file, where we map the FQDNs in our zone to their IP addresses:

sudo chmod 755 /etc/named
sudo mkdir /etc/named/zones
sudo vim /etc/named/zones/db.novalocal

$TTL
    IN SOA dpm-nameserver.novalocal. admin.novalocal. (
        1   ; Serial
            ; Refresh
            ; Retry
            ; Expire
    )       ; Negative Cache TTL
;
; name servers - NS records
    IN NS dpm-nameserver.novalocal.
; name servers - A records
dpm-nameserver.novalocal.   IN A
; /16 - A records
dpm-head-1.novalocal.       IN A
dpm-disk-1.novalocal.       IN A
dpm-disk-2.novalocal.       IN A

The two most important things to note in this configuration are the IP addresses in the trusted group in named.conf, and the A records of the nodes in db.novalocal. In theory, if we spin up an additional head node with the same FQDN, we can simply substitute its IP address in place of the IP of the old head node to redirect any inter-cluster communication and client requests toward the new head node, as illustrated in Figure x. For a production site, we recommend setting up a backup DNS server as well as the reverse zone file, so that lookup of FQDNs by IP is also possible. For the purpose of this project, since we are not studying the availability of the nameserver, nor do we plan on doing reverse lookups, the configuration listed above should suffice.
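Once named is running, the mappings can be checked from any node in the trusted ACL using the bind-utils tools installed earlier (assuming the testbed nodes use this nameserver as their resolver):

    dig +short dpm-head-1.novalocal A
    dig +short -x <IP address of dpm-head-1>

The first query should return the A record configured in db.novalocal; the second, a reverse lookup, will only work if the optional reverse zone mentioned above has been set up.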

3.7 HTTP frontend

The httpd server and a few other modules are required to allow access to DPM through HTTP and the WebDAV extension. Key configuration points include ensuring that the mod_gridsite and mod_lcgdm_dav modules are installed, which handle authentication and WebDAV access, respectively.

3.8 DMLite adaptors

The DMLite framework uses plugins to communicate with the underlying backend services. A traditional DPM instance would use the adaptor plugin to route requests to the DPM and DPNS daemons. Since we do not have those legacy daemons on the testbed, we need to replace that plugin with the DOME adaptor, so that requests are routed to DOME instead. This is done by editing dmlite.conf to load the dome_adaptor library instead of the old adaptor, and removing the adaptor.conf file.

3.9 Database and Memcached

DPM works with MySQL-compatible database management systems (DBMS); on our testbed we used MariaDB, the default relational DBMS on CentOS 7. The configuration process is mostly identical to that of a legacy DPM instance, which involves importing the schemas of cns_db and dpm_db, as well as granting access privileges to the DPM process. However, we initially had some trouble getting our database backend to work in a legacy-free instance. We discovered that the issue was caused by DMLite loading some MySQL plugins that are no longer needed in our scenario. We resolved the issue by ensuring that DMLite only loads the namespace plugin for MySQL-related operations. In /etc/dmlite.conf.d/mysql.conf, make sure only the namespace plugin is loaded:

LoadPlugin plugin_mysql_ns /usr/lib64/dmlite/plugin_mysql.so

Since DOME now includes an internal metadata cache which fulfils the same purpose as the Memcached layer in a legacy setup, Memcached is not installed on the testbed.

3.10 Creating a VO

Storage elements on the grid use the grid-mapfile to map all the users from the VOs that are supported on the site. For testing purposes, we will use our own VO and directly

map our local users to the testbed using a local grid-mapfile. This is done so that we can bypass the Virtual Organization Membership Service (VOMS). The conventional VO name for development is dteam, and we will use that on our testbed.

To create the mapfile, add this line to /etc/lcgdm-mkgridmap.conf:

gmf_local /etc/lcgdm-mapfile-local

Then create and edit the /etc/lcgdm-mapfile-local file, entering a DN-VO pair for each user we would like to support:

"/O=Grid/OU=DPM-Testbed/OU=dpmCA.novalocal/OU=DPM Testbed CA/CN=Eric Cheung" dteam

Run the supplied script manually to generate the mapfile. On a production site this would be set up as a cron job so that the mapfile stays up to date.

/usr/libexec/edg-mkgridmap/edg-mkgridmap.pl --conf=/etc/lcgdm-mkgridmap.conf --output=/etc/lcgdm-mapfile --safe

3.11 Establishing trust between the nodes

On the head node, edit the /etc/domehead.conf file and add the DNs of the disk nodes to the list of authorised DNs:

glb.auth.authorizedn[]: "CN=dpm-disk-1.novalocal,OU=dpmCA.novalocal,OU=DPM-Testbed,O=Grid", "CN=dpm-disk-2.novalocal,OU=dpmCA.novalocal,OU=DPM-Testbed,O=Grid"

On the disk nodes, edit the /etc/domedisk.conf file and add the DN of the head node to the list of authorised DNs:

glb.auth.authorizedn[]: "CN=dpm-head-1.novalocal,OU=dpmCA.novalocal,OU=DPM-Testbed,O=Grid"

3.12 Setting up the file systems and disk pool

During the configuration process, we encountered some issues with the dmlite-shell, which is used as an administration tool on the head node. In a normal deployment, DPM would be configured by Puppet, which would create the skeleton directory tree in

the DPM namespace by inserting the necessary entries into the cns_db database. Since we are configuring the system manually, we have to carry out this step ourselves. The key record is the / entry, which acts as the root of the logical view of the file system. On the head node:

mysql -u root
> use cns_db
> INSERT INTO Cns_file_metadata (parent_fileid, name, owner_uid, gid) VALUES (0, '/', 0, 0);

Then start the dmlite-shell, remove the entry we just added using unlink -f, and create the entry again, this time using mkdir, so that all the required fields are properly set. Once that is done we can also set up the basic directory tree using the shell.

sudo dmlite-shell
> unlink -f /
> mkdir /
> mkdir /dpm
> mkdir /dpm/novalocal          (our domain)
> mkdir /dpm/novalocal/home

Add a directory for our VO and set the appropriate ACL:

> mkdir /dpm/novalocal/home/dteam          (our VO)
> cd /dpm/novalocal/home/dteam
> chmod dteam 775
> groupadd dteam
> chgrp dteam dteam
> acl dteam d:u::rwx,d:g::rwx,d:o::r-x,u::rwx,g::rwx,o::r-x set

Add a volatile disk pool to our testbed:

> pooladd pool_01 filesystem V

Once we have a disk pool we can add a file system to it. This has to be done for all disk nodes that wish to participate in the pool. In the normal shell, create a directory which DPM can use as a file system mount point and make sure it is owned by DPM so it can write to it:

sudo mkdir /home/dpmmgr/data
sudo chown -R dpmmgr.dpmmgr /home/dpmmgr/data

Then we can add the file systems on both disk nodes to our pool. On the head node,

inside dmlite-shell:

> fsadd /home/dpmmgr/data pool_01 dpm-disk-1.novalocal
> fsadd /home/dpmmgr/data pool_01 dpm-disk-2.novalocal

Verify our disk pool (it may take a few seconds before DOME registers the new file systems):

> poolinfo pool_01 (filesystem)
  freespace: GB
  poolstatus: 0
  filesystems:
    status: 0
    freespace: 77.58GB
    fs: /home/dpmmgr/data
    physicalsize: 79.99GB
    server: dpm-disk-1.novalocal

    status: 0
    freespace: 77.56GB
    fs: /home/dpmmgr/data
    physicalsize: 79.99GB
    server: dpm-disk-2.novalocal
  s_type: 8
  physicalsize: GB
  defsize: 1.00MB

One last thing we need to do before we can test the instance is to create a space token for our VO, so that we can write to the disk pool:

> quotatokenset /dpm/novalocal/home/dteam pool pool_01 size 10GB desc test_quota groups dteam
Quotatoken written.
poolname: pool_01
t_space:
u_token: test_quota

3.13 Verifying the testbed

At this stage, we should have a functional legacy-free testbed that is able to begin serving client requests. To verify the testbed's functionality, we used the Davix HTTP client to issue a series of requests toward the head node. The operations we performed included uploading and downloading replicas, listing the contents of directories, and deleting replicas.
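As an illustration of the upload step, a replica can be written with davix-put using the same user certificate. The command below is a sketch rather than a record of the exact invocations used in the tests; it assumes the default HTTPS port on the head node and the /dpm/novalocal/home/dteam namespace path created earlier.

    davix-put --cert ~/dpmuser_cert/usercert.pem --key ~/dpmuser_cert/userkey_no_pw.pem \
        --capath /etc/grid-security/certificates/ \
        ./testfile_001.root https://dpm-head-1.novalocal/dpm/novalocal/home/dteam/testfile_001.root

On a successful write, the head node selects a file system in pool_01 and redirects the client to the corresponding disk node, which stores the replica.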

The outcome of the requests was verified against the log files, the database entries, and the file systems on the disk nodes. For example, listing the contents of the home directory of our dteam VO:

[centos@dpm-nameserver ~]$ davix-ls --cert ~/dpmuser_cert/usercert.pem --key ~/dpmuser_cert/userkey_no_pw.pem --capath /etc/grid-security/certificates/ -l
drwxrwxr-x   :18:12  hammer
-rw-rw-r     :03:49  testfile_001.root
-rw-rw-r     :21:31  testfile_002.root
-rw-rw-r     :23:04  testfile_003.root
-rw-rw-r     :23:17  testfile_004.root
-rw-rw-r     :41:59  testfile_005.root
-rw-rw-r     :59:33  testfile_006.root
-rw-rw-r     :06:41  testfile_007.root
-rw-rw-r     :33:20  testfile_008.root

Reading the contents of the helloworld.txt file and printing them to stdout:

[centos@dpm-nameserver ~]$ davix-get --cert ~/dpmuser_cert/usercert.pem --key ~/dpmuser_cert/userkey_no_pw.pem --capath /etc/grid-security/certificates/
Hello world!

3.14 Problems encountered and lessons learned

Many of the issues we encountered while setting up the testbed were due to our initial lack of knowledge of the grid environment. For instance, we were unaware of the X.509 extensions that are used in signing grid certificates and did not understand why our certificates signed using plain OpenSSL were being rejected. We were also unfamiliar with how members of a VO are authenticated by the DPM system, which resulted in a lot of time spent on log monitoring and debugging before the testbed could even be tested. Perhaps most importantly, there are many services and plugins that need to be configured correctly in a DPM instance. A single incorrect setting in one of the many configuration files results in a non-functional system. During the setup process, there were many occasions where we had to maximise the log level in DMLite, DOME, httpd and MariaDB and then analyse the log files in order to diagnose the source of the misconfiguration.

Chapter 4

Investigation

As mentioned in Chapter 2, DMLite and DOME were designed to replace the legacy components of DPM and aim to bypass some of the limitations imposed by the old stack. However, neither DOME nor DMLite was designed to run on more than a single head node. In order to design a functional high availability DPM architecture, we must first identify all the limiting factors in DMLite and DOME that would prevent us from deploying redundant head nodes, and redesign them where possible. A functional high availability DPM architecture must have the following attributes:

Resilience to head node failure. The system must continue to function and serve client requests should the primary head node fail.
Automatic recovery. In the event of head node failure, a DPM instance using the new architecture must fail over automatically, in a manner that is transparent to the clients.
Strong data consistency. The redundant head nodes must have access to the most up-to-date information about the file system and the status of the cluster.

Ultimately, providing availability to DPM means increasing the number of head nodes, and therefore turning DPM into an even more distributed system. Providing any distributed system with availability and consistency guarantees will likely have performance implications, which we must also keep in mind in our design. The rest of this chapter describes the findings of our investigation and the recommended redesign of the relevant components to allow for a high availability DPM architecture.

4.1 Automating the failover mechanism

Ideally, when a head node fails, the system should automatically reroute client requests to one of the redundant head nodes in a way that is transparent to the clients. One of the

options to achieve this failover mechanism is to use a floating IP address that is shared between all the head nodes, combined with a tool that provides node monitoring and automatic assignment of this floating IP. Keepalived [14] is routing software designed for this use-case; it offers both load balancing and high availability functionality to Linux-based infrastructure. In keepalived, high availability is provided by the Virtual Router Redundancy Protocol (VRRP), which is used to dynamically elect master and backup virtual routers.

Figure 4.1: Failover using keepalived (clients address a public floating IP; the normal path binds it to head node 1 and the failover path moves it to head node 2, each head node running keepalived and DPM in front of the disk nodes)

Figure 4.1 illustrates how keepalived can be used to provide automatic failover to a DPM system. In this topology, all client requests are directed at the floating IP address. If the primary head node fails, the keepalived instances on the redundant head nodes elect a new master based on the priority values of the servers in the configuration file. The new master then reconfigures the network settings of its node and binds the floating IP to its interface. From the clients' perspective, their requests continue to be served using the same IP address even though they are now fulfilled by a different head node. If the primary head node rejoins the network, keepalived will simply promote the primary node to master again if its server has the highest priority value. With this topology, we can use a single DNS entry in the nameserver for all the head nodes in the cluster, since they would all have the same FQDN and use the same floating IP address, thus further simplifying the configuration process of the system.

4.1.1 Implementation

Based on our research, keepalived would be the ideal solution for head node failover. Unfortunately, after spending a considerable amount of effort on configuring keepalived, we discovered that in order to set up keepalived successfully on our testbed, we would require administrative privileges at the OpenStack instance level (to configure OpenStack Neutron), which we do not have.
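For sites that can use it, the priority-based election described above maps directly onto keepalived's VRRP configuration. The fragment below is a sketch of what /etc/keepalived/keepalived.conf on the primary head node might contain; the interface name, virtual router ID, password and floating IP are placeholders, and a redundant head node would use state BACKUP with a lower priority.

    vrrp_instance DPM_HEAD {
        state MASTER               # BACKUP on the redundant head node(s)
        interface eth0             # interface that should carry the floating IP
        virtual_router_id 51
        priority 150               # highest priority wins the election
        advert_int 1
        authentication {
            auth_type PASS
            auth_pass dpm-failover
        }
        virtual_ipaddress {
            192.0.2.100            # the public floating IP shared by the head nodes
        }
    }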

However, on a production site this should not be an issue, especially when the site has full control of its network and has deployed DPM on physical machines instead of VMs.

4.2 Database

Some grid sites prefer to host the database backend locally on the head node for performance reasons, and we would like to preserve this use-case. The first step toward achieving this goal is to fully understand what is stored in the databases, and what their roles are in the DPM system.

4.2.1 Metadata and operation status

Information stored in the DPM database backend can be categorised into two groups: metadata and operation status.

Metadata

The metadata kept by DPM includes information that is critical for DPM to function correctly, for example for validating a user's DN or for translating the logical file name of a replica to its physical file name. The different groups of data kept are summarised as follows.

File system information - which file systems are available on the disk nodes and which disk pool they belong to.
Pool information - Size, status, and policies of disk pools.
Space reserve - Space tokens of supported VOs, describing the available and used storage space of the disk pools a VO has access to.
File metadata - Information on each unique file and directory managed by the system, including POSIX file data such as size, type, and ACL.
Replica metadata - Information on the replicas of files, including which disk pool a replica belongs to, on which file system the replica is located, and which disk node hosts that file system.
User and group information - Including the DNs of users, privilege levels, and ID mappings that are used internally.

Operation status

If a request (read, write, copy) cannot be immediately fulfilled by DPM, for instance because the requested replica has to be pulled from an external site or because of scheduling

by some job management system, the request is recorded in the dpm_db database as pending. The information recorded includes the protocol used in the request, the DN and host name of the client, the number of retries, error messages, the requested resource, and the status of the request.

4.2.2 Issues

DPM cannot function without access to the information stored in the databases. As our aim is to increase the availability of a DPM storage element, this means we have to provide a certain degree of data redundancy in the database backend. Sites that wish to use a dedicated server to host their database service will be responsible for providing redundancy to that service, and can choose from a number of options that are likely already built into the database. Since we also want to support the local database backend use-case, we have to implement a way to share database records across multiple head nodes. Grid sites are recommended to install DPM on physical hardware instead of VMs for performance reasons. As such, simply starting another VM from the latest snapshot is not a viable solution, not to mention that the new head node would not have the most up-to-date data when it is swapped in, which would leave the system in an inconsistent state. NoSQL solutions are also deemed unacceptable because we require the ACID properties provided by a transactional database.

4.2.3 Analysis

There are already a number of technologies which aim to increase the availability of relational database services. The differences between these technologies lie in which parallel architecture, type of replication, and node management mode they support. A brief overview of these differences is presented below.

Parallel database architecture

Parallel database management systems are typically based on two architectures: shared nothing or shared something. A shared-something architecture may share memory, disk, or a combination of both. Below is a brief overview of each of these architectures.

Shared nothing: As the name implies, in a shared nothing architecture each node maintains a private view of its memory and disk. Because nothing is shared between the nodes, it is often easier for a distributed database system using this architecture to achieve higher availability and extensibility, at the cost of increased data partitioning complexity for load-balancing reasons [15].
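The prototype described later in this report synchronises the head node databases with a Galera cluster (see the list of figures and Appendix A.8), a shared-nothing, synchronous multi-master replication layer for MariaDB. As a rough sketch of what such a configuration involves, each head node's MariaDB server would carry settings along these lines; the node names and the cluster name are placeholders, and the exact option set should follow Appendix A.8.

    [galera]
    wsrep_on = ON
    wsrep_provider = /usr/lib64/galera/libgalera_smm.so
    wsrep_cluster_name = "dpm_head_cluster"
    wsrep_cluster_address = "gcomm://dpm-head-1.novalocal,dpm-head-2.novalocal"
    binlog_format = ROW
    default_storage_engine = InnoDB
    innodb_autoinc_lock_mode = 2

With this in place, a transaction committed on one head node is replicated synchronously to the others, which is what allows the cns_db and dpm_db records to stay consistent across redundant head nodes.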


Migrating a Business-Critical Application to Windows Azure

Migrating a Business-Critical Application to Windows Azure Situation Microsoft IT wanted to replace TS Licensing Manager, an application responsible for critical business processes. TS Licensing Manager was hosted entirely in Microsoft corporate data centers,

More information

ARCHER Data Services Service Layer

ARCHER Data Services Service Layer ARCHER 1.0 ARCHER Data Services Service Layer System Administrator s Guide ICAT & MCAText Installation Configuration Maintenance ARCHER Data Services Service Layer... 1 About ARCHER Data Services Service

More information

F5 BIG-IQ Centralized Management: Local Traffic & Network Implementations. Version 5.4

F5 BIG-IQ Centralized Management: Local Traffic & Network Implementations. Version 5.4 F5 BIG-IQ Centralized Management: Local Traffic & Network Implementations Version 5.4 Table of Contents Table of Contents Managing Local Traffic Profiles...7 How do I manage LTM profiles in BIG-IQ?...7

More information

SoftNAS Cloud Performance Evaluation on AWS

SoftNAS Cloud Performance Evaluation on AWS SoftNAS Cloud Performance Evaluation on AWS October 25, 2016 Contents SoftNAS Cloud Overview... 3 Introduction... 3 Executive Summary... 4 Key Findings for AWS:... 5 Test Methodology... 6 Performance Summary

More information

Application Guide. Connection Broker. Advanced Connection and Capacity Management For Hybrid Clouds

Application Guide. Connection Broker. Advanced Connection and Capacity Management For Hybrid Clouds Application Guide Connection Broker Advanced Connection and Capacity Management For Hybrid Clouds Version 9.0 June 2018 Contacting Leostream Leostream Corporation 271 Waverley Oaks Rd Suite 206 Waltham,

More information

CernVM-FS beyond LHC computing

CernVM-FS beyond LHC computing CernVM-FS beyond LHC computing C Condurache, I Collier STFC Rutherford Appleton Laboratory, Harwell Oxford, Didcot, OX11 0QX, UK E-mail: catalin.condurache@stfc.ac.uk Abstract. In the last three years

More information

Worldwide Production Distributed Data Management at the LHC. Brian Bockelman MSST 2010, 4 May 2010

Worldwide Production Distributed Data Management at the LHC. Brian Bockelman MSST 2010, 4 May 2010 Worldwide Production Distributed Data Management at the LHC Brian Bockelman MSST 2010, 4 May 2010 At the LHC http://op-webtools.web.cern.ch/opwebtools/vistar/vistars.php?usr=lhc1 Gratuitous detector pictures:

More information

Surveillance Dell EMC Isilon Storage with Video Management Systems

Surveillance Dell EMC Isilon Storage with Video Management Systems Surveillance Dell EMC Isilon Storage with Video Management Systems Configuration Best Practices Guide H14823 REV 2.0 Copyright 2016-2018 Dell Inc. or its subsidiaries. All rights reserved. Published April

More information

Azure Development Course

Azure Development Course Azure Development Course About This Course This section provides a brief description of the course, audience, suggested prerequisites, and course objectives. COURSE DESCRIPTION This course is intended

More information

VMware Identity Manager Cloud Deployment. DEC 2017 VMware AirWatch 9.2 VMware Identity Manager

VMware Identity Manager Cloud Deployment. DEC 2017 VMware AirWatch 9.2 VMware Identity Manager VMware Identity Manager Cloud Deployment DEC 2017 VMware AirWatch 9.2 VMware Identity Manager You can find the most up-to-date technical documentation on the VMware website at: https://docs.vmware.com/

More information

I Tier-3 di CMS-Italia: stato e prospettive. Hassen Riahi Claudio Grandi Workshop CCR GRID 2011

I Tier-3 di CMS-Italia: stato e prospettive. Hassen Riahi Claudio Grandi Workshop CCR GRID 2011 I Tier-3 di CMS-Italia: stato e prospettive Claudio Grandi Workshop CCR GRID 2011 Outline INFN Perugia Tier-3 R&D Computing centre: activities, storage and batch system CMS services: bottlenecks and workarounds

More information

VMware Identity Manager Cloud Deployment. Modified on 01 OCT 2017 VMware Identity Manager

VMware Identity Manager Cloud Deployment. Modified on 01 OCT 2017 VMware Identity Manager VMware Identity Manager Cloud Deployment Modified on 01 OCT 2017 VMware Identity Manager You can find the most up-to-date technical documentation on the VMware Web site at: https://docs.vmware.com/ The

More information

VMware Integrated OpenStack with Kubernetes Getting Started Guide. VMware Integrated OpenStack 4.0

VMware Integrated OpenStack with Kubernetes Getting Started Guide. VMware Integrated OpenStack 4.0 VMware Integrated OpenStack with Kubernetes Getting Started Guide VMware Integrated OpenStack 4.0 VMware Integrated OpenStack with Kubernetes Getting Started Guide You can find the most up-to-date technical

More information

Streamlining CASTOR to manage the LHC data torrent

Streamlining CASTOR to manage the LHC data torrent Streamlining CASTOR to manage the LHC data torrent G. Lo Presti, X. Espinal Curull, E. Cano, B. Fiorini, A. Ieri, S. Murray, S. Ponce and E. Sindrilaru CERN, 1211 Geneva 23, Switzerland E-mail: giuseppe.lopresti@cern.ch

More information

Xcalar Installation Guide

Xcalar Installation Guide Xcalar Installation Guide Publication date: 2018-03-16 www.xcalar.com Copyright 2018 Xcalar, Inc. All rights reserved. Table of Contents Xcalar installation overview 5 Audience 5 Overview of the Xcalar

More information

vcenter Server Installation and Setup Modified on 11 MAY 2018 VMware vsphere 6.7 vcenter Server 6.7

vcenter Server Installation and Setup Modified on 11 MAY 2018 VMware vsphere 6.7 vcenter Server 6.7 vcenter Server Installation and Setup Modified on 11 MAY 2018 VMware vsphere 6.7 vcenter Server 6.7 You can find the most up-to-date technical documentation on the VMware website at: https://docs.vmware.com/

More information

Grid Architectural Models

Grid Architectural Models Grid Architectural Models Computational Grids - A computational Grid aggregates the processing power from a distributed collection of systems - This type of Grid is primarily composed of low powered computers

More information

Maximum Availability Architecture: Overview. An Oracle White Paper July 2002

Maximum Availability Architecture: Overview. An Oracle White Paper July 2002 Maximum Availability Architecture: Overview An Oracle White Paper July 2002 Maximum Availability Architecture: Overview Abstract...3 Introduction...3 Architecture Overview...4 Application Tier...5 Network

More information

CloudLink SecureVM. Administration Guide. Version 4.0 P/N REV 01

CloudLink SecureVM. Administration Guide. Version 4.0 P/N REV 01 CloudLink SecureVM Version 4.0 Administration Guide P/N 302-002-056 REV 01 Copyright 2015 EMC Corporation. All rights reserved. Published June 2015 EMC believes the information in this publication is accurate

More information

TANDBERG Management Suite - Redundancy Configuration and Overview

TANDBERG Management Suite - Redundancy Configuration and Overview Management Suite - Redundancy Configuration and Overview TMS Software version 11.7 TANDBERG D50396 Rev 2.1.1 This document is not to be reproduced in whole or in part without the permission in writing

More information

Example Azure Implementation for Government Agencies. Indirect tax-filing system. By Alok Jain Azure Customer Advisory Team (AzureCAT)

Example Azure Implementation for Government Agencies. Indirect tax-filing system. By Alok Jain Azure Customer Advisory Team (AzureCAT) Example Azure Implementation for Government Agencies Indirect tax-filing system By Alok Jain Azure Customer Advisory Team (AzureCAT) June 2018 Example Azure Implementation for Government Agencies Contents

More information

Discover CephFS TECHNICAL REPORT SPONSORED BY. image vlastas, 123RF.com

Discover CephFS TECHNICAL REPORT SPONSORED BY. image vlastas, 123RF.com Discover CephFS TECHNICAL REPORT SPONSORED BY image vlastas, 123RF.com Discover CephFS TECHNICAL REPORT The CephFS filesystem combines the power of object storage with the simplicity of an ordinary Linux

More information

SPINOSO Vincenzo. Optimization of the job submission and data access in a LHC Tier2

SPINOSO Vincenzo. Optimization of the job submission and data access in a LHC Tier2 EGI User Forum Vilnius, 11-14 April 2011 SPINOSO Vincenzo Optimization of the job submission and data access in a LHC Tier2 Overview User needs Administration issues INFN Bari farm design and deployment

More information

From raw data to new fundamental particles: The data management lifecycle at the Large Hadron Collider

From raw data to new fundamental particles: The data management lifecycle at the Large Hadron Collider From raw data to new fundamental particles: The data management lifecycle at the Large Hadron Collider Andrew Washbrook School of Physics and Astronomy University of Edinburgh Dealing with Data Conference

More information

A scalable storage element and its usage in HEP

A scalable storage element and its usage in HEP AstroGrid D Meeting at MPE 14 15. November 2006 Garching dcache A scalable storage element and its usage in HEP Martin Radicke Patrick Fuhrmann Introduction to dcache 2 Project overview joint venture between

More information

TECHNICAL OVERVIEW OF NEW AND IMPROVED FEATURES OF EMC ISILON ONEFS 7.1.1

TECHNICAL OVERVIEW OF NEW AND IMPROVED FEATURES OF EMC ISILON ONEFS 7.1.1 TECHNICAL OVERVIEW OF NEW AND IMPROVED FEATURES OF EMC ISILON ONEFS 7.1.1 ABSTRACT This introductory white paper provides a technical overview of the new and improved enterprise grade features introduced

More information

VMware Identity Manager Connector Installation and Configuration (Legacy Mode)

VMware Identity Manager Connector Installation and Configuration (Legacy Mode) VMware Identity Manager Connector Installation and Configuration (Legacy Mode) VMware Identity Manager This document supports the version of each product listed and supports all subsequent versions until

More information

and the GridKa mass storage system Jos van Wezel / GridKa

and the GridKa mass storage system Jos van Wezel / GridKa and the GridKa mass storage system / GridKa [Tape TSM] staging server 2 Introduction Grid storage and storage middleware dcache h and TSS TSS internals Conclusion and further work 3 FZK/GridKa The GridKa

More information

Developing Microsoft Azure Solutions (MS 20532)

Developing Microsoft Azure Solutions (MS 20532) Developing Microsoft Azure Solutions (MS 20532) COURSE OVERVIEW: This course is intended for students who have experience building ASP.NET and C# applications. Students will also have experience with the

More information

Interoute Use Case. SQL 2016 Always On in Interoute VDC. Last updated 11 December 2017 ENGINEERED FOR THE AMBITIOUS

Interoute Use Case. SQL 2016 Always On in Interoute VDC. Last updated 11 December 2017 ENGINEERED FOR THE AMBITIOUS Interoute Use Case SQL 2016 Always On in Interoute VDC Last updated 11 December 2017 ENGINEERED FOR THE AMBITIOUS VERSION HISTORY Version Date Title Author 1 11 / 12 / 17 SQL 2016 Always On in Interoute

More information

Evolution of the ATLAS PanDA Workload Management System for Exascale Computational Science

Evolution of the ATLAS PanDA Workload Management System for Exascale Computational Science Evolution of the ATLAS PanDA Workload Management System for Exascale Computational Science T. Maeno, K. De, A. Klimentov, P. Nilsson, D. Oleynik, S. Panitkin, A. Petrosyan, J. Schovancova, A. Vaniachine,

More information

RADU POPESCU IMPROVING THE WRITE SCALABILITY OF THE CERNVM FILE SYSTEM WITH ERLANG/OTP

RADU POPESCU IMPROVING THE WRITE SCALABILITY OF THE CERNVM FILE SYSTEM WITH ERLANG/OTP RADU POPESCU IMPROVING THE WRITE SCALABILITY OF THE CERNVM FILE SYSTEM WITH ERLANG/OTP THE EUROPEAN ORGANISATION FOR PARTICLE PHYSICS RESEARCH (CERN) 2 THE LARGE HADRON COLLIDER THE LARGE HADRON COLLIDER

More information

Scientific data management

Scientific data management Scientific data management Storage and data management components Application database Certificate Certificate Authorised users directory Certificate Certificate Researcher Certificate Policies Information

More information

Introduction to SRM. Riccardo Zappi 1

Introduction to SRM. Riccardo Zappi 1 Introduction to SRM Grid Storage Resource Manager Riccardo Zappi 1 1 INFN-CNAF, National Center of INFN (National Institute for Nuclear Physic) for Research and Development into the field of Information

More information

Polarion 18.2 Enterprise Setup

Polarion 18.2 Enterprise Setup SIEMENS Polarion 18.2 Enterprise Setup POL005 18.2 Contents Overview........................................................... 1-1 Terminology..........................................................

More information

Polarion 18 Enterprise Setup

Polarion 18 Enterprise Setup SIEMENS Polarion 18 Enterprise Setup POL005 18 Contents Terminology......................................................... 1-1 Overview........................................................... 2-1

More information

Red Hat OpenStack Platform 10 Product Guide

Red Hat OpenStack Platform 10 Product Guide Red Hat OpenStack Platform 10 Product Guide Overview of Red Hat OpenStack Platform OpenStack Team Red Hat OpenStack Platform 10 Product Guide Overview of Red Hat OpenStack Platform OpenStack Team rhos-docs@redhat.com

More information

PROPOSAL OF WINDOWS NETWORK

PROPOSAL OF WINDOWS NETWORK PROPOSAL OF WINDOWS NETWORK By: Class: CMIT 370 Administering Windows Servers Author: Rev: 1.0 Date: 01.07.2017 Page 1 of 10 OVERVIEW This is a proposal for Ear Dynamics to integrate a Windows Network

More information

IBM Spectrum NAS, IBM Spectrum Scale and IBM Cloud Object Storage

IBM Spectrum NAS, IBM Spectrum Scale and IBM Cloud Object Storage IBM Spectrum NAS, IBM Spectrum Scale and IBM Cloud Object Storage Silverton Consulting, Inc. StorInt Briefing 2017 SILVERTON CONSULTING, INC. ALL RIGHTS RESERVED Page 2 Introduction Unstructured data has

More information

A Guide to Architecting the Active/Active Data Center

A Guide to Architecting the Active/Active Data Center White Paper A Guide to Architecting the Active/Active Data Center 2015 ScaleArc. All Rights Reserved. White Paper The New Imperative: Architecting the Active/Active Data Center Introduction With the average

More information

Utilizing Linux Kernel Components in K42 K42 Team modified October 2001

Utilizing Linux Kernel Components in K42 K42 Team modified October 2001 K42 Team modified October 2001 This paper discusses how K42 uses Linux-kernel components to support a wide range of hardware, a full-featured TCP/IP stack and Linux file-systems. An examination of the

More information

/ Cloud Computing. Recitation 6 October 2 nd, 2018

/ Cloud Computing. Recitation 6 October 2 nd, 2018 15-319 / 15-619 Cloud Computing Recitation 6 October 2 nd, 2018 1 Overview Announcements for administrative issues Last week s reflection OLI unit 3 module 7, 8 and 9 Quiz 4 Project 2.3 This week s schedule

More information

The Microsoft Large Mailbox Vision

The Microsoft Large Mailbox Vision WHITE PAPER The Microsoft Large Mailbox Vision Giving users large mailboxes without breaking your budget Introduction Giving your users the ability to store more email has many advantages. Large mailboxes

More information

How to Scale MongoDB. Apr

How to Scale MongoDB. Apr How to Scale MongoDB Apr-24-2018 About me Location: Skopje, Republic of Macedonia Education: MSc, Software Engineering Experience: Lead Database Consultant (since 2016) Database Consultant (2012-2016)

More information

Polarion Enterprise Setup 17.2

Polarion Enterprise Setup 17.2 SIEMENS Polarion Enterprise Setup 17.2 POL005 17.2 Contents Terminology......................................................... 1-1 Overview...........................................................

More information

vcenter Server Installation and Setup Update 1 Modified on 30 OCT 2018 VMware vsphere 6.7 vcenter Server 6.7

vcenter Server Installation and Setup Update 1 Modified on 30 OCT 2018 VMware vsphere 6.7 vcenter Server 6.7 vcenter Server Installation and Setup Update 1 Modified on 30 OCT 2018 VMware vsphere 6.7 vcenter Server 6.7 You can find the most up-to-date technical documentation on the VMware website at: https://docs.vmware.com/

More information

glite Grid Services Overview

glite Grid Services Overview The EPIKH Project (Exchange Programme to advance e-infrastructure Know-How) glite Grid Services Overview Antonio Calanducci INFN Catania Joint GISELA/EPIKH School for Grid Site Administrators Valparaiso,

More information

Benchmarking third-party-transfer protocols with the FTS

Benchmarking third-party-transfer protocols with the FTS Benchmarking third-party-transfer protocols with the FTS Rizart Dona CERN Summer Student Programme 2018 Supervised by Dr. Simone Campana & Dr. Oliver Keeble 1.Introduction 1 Worldwide LHC Computing Grid

More information

Developing Microsoft Azure Solutions (70-532) Syllabus

Developing Microsoft Azure Solutions (70-532) Syllabus Developing Microsoft Azure Solutions (70-532) Syllabus Cloud Computing Introduction What is Cloud Computing Cloud Characteristics Cloud Computing Service Models Deployment Models in Cloud Computing Advantages

More information

Installing and Configuring VMware Identity Manager Connector (Windows) OCT 2018 VMware Identity Manager VMware Identity Manager 3.

Installing and Configuring VMware Identity Manager Connector (Windows) OCT 2018 VMware Identity Manager VMware Identity Manager 3. Installing and Configuring VMware Identity Manager Connector 2018.8.1.0 (Windows) OCT 2018 VMware Identity Manager VMware Identity Manager 3.3 You can find the most up-to-date technical documentation on

More information

Dynamic Federations. Seamless aggregation of standard-protocol-based storage endpoints

Dynamic Federations. Seamless aggregation of standard-protocol-based storage endpoints Dynamic Federations Seamless aggregation of standard-protocol-based storage endpoints Fabrizio Furano Patrick Fuhrmann Paul Millar Daniel Becker Adrien Devresse Oliver Keeble Ricardo Brito da Rocha Alejandro

More information

An Introduction to GPFS

An Introduction to GPFS IBM High Performance Computing July 2006 An Introduction to GPFS gpfsintro072506.doc Page 2 Contents Overview 2 What is GPFS? 3 The file system 3 Application interfaces 4 Performance and scalability 4

More information

VMware AirWatch Content Gateway for Windows. VMware Workspace ONE UEM 1811 Unified Access Gateway

VMware AirWatch Content Gateway for Windows. VMware Workspace ONE UEM 1811 Unified Access Gateway VMware AirWatch Content Gateway for Windows VMware Workspace ONE UEM 1811 Unified Access Gateway You can find the most up-to-date technical documentation on the VMware website at: https://docs.vmware.com/

More information

BIG-IP Access Policy Manager : Secure Web Gateway. Version 13.0

BIG-IP Access Policy Manager : Secure Web Gateway. Version 13.0 BIG-IP Access Policy Manager : Secure Web Gateway Version 13.0 Table of Contents Table of Contents BIG-IP APM Secure Web Gateway Overview...9 About APM Secure Web Gateway... 9 About APM benefits for web

More information

DOWNLOAD PDF SQL SERVER 2012 STEP BY STEP

DOWNLOAD PDF SQL SERVER 2012 STEP BY STEP Chapter 1 : Microsoft SQL Server Step by Step - PDF Free Download - Fox ebook Your hands-on, step-by-step guide to building applications with Microsoft SQL Server Teach yourself the programming fundamentals

More information

Introducing VMware Validated Designs for Software-Defined Data Center

Introducing VMware Validated Designs for Software-Defined Data Center Introducing VMware Validated Designs for Software-Defined Data Center VMware Validated Design 4.0 VMware Validated Design for Software-Defined Data Center 4.0 You can find the most up-to-date technical

More information

Veeam Cloud Connect. Version 8.0. Administrator Guide

Veeam Cloud Connect. Version 8.0. Administrator Guide Veeam Cloud Connect Version 8.0 Administrator Guide June, 2015 2015 Veeam Software. All rights reserved. All trademarks are the property of their respective owners. No part of this publication may be reproduced,

More information

Introducing VMware Validated Designs for Software-Defined Data Center

Introducing VMware Validated Designs for Software-Defined Data Center Introducing VMware Validated Designs for Software-Defined Data Center VMware Validated Design for Software-Defined Data Center 4.0 This document supports the version of each product listed and supports

More information

Storage Considerations for VMware vcloud Director. VMware vcloud Director Version 1.0

Storage Considerations for VMware vcloud Director. VMware vcloud Director Version 1.0 Storage Considerations for VMware vcloud Director Version 1.0 T e c h n i c a l W H I T E P A P E R Introduction VMware vcloud Director is a new solution that addresses the challenge of rapidly provisioning

More information

<Insert Picture Here> Oracle NoSQL Database A Distributed Key-Value Store

<Insert Picture Here> Oracle NoSQL Database A Distributed Key-Value Store Oracle NoSQL Database A Distributed Key-Value Store Charles Lamb The following is intended to outline our general product direction. It is intended for information purposes only,

More information

Introducing VMware Validated Designs for Software-Defined Data Center

Introducing VMware Validated Designs for Software-Defined Data Center Introducing VMware Validated Designs for Software-Defined Data Center VMware Validated Design for Software-Defined Data Center 3.0 This document supports the version of each product listed and supports

More information

VMware Integrated OpenStack Quick Start Guide

VMware Integrated OpenStack Quick Start Guide VMware Integrated OpenStack Quick Start Guide VMware Integrated OpenStack 1.0.1 This document supports the version of each product listed and supports all subsequent versions until the document is replaced

More information

VMware AirWatch Content Gateway Guide for Windows

VMware AirWatch Content Gateway Guide for Windows VMware AirWatch Content Gateway Guide for Windows AirWatch v9.1 Have documentation feedback? Submit a Documentation Feedback support ticket using the Support Wizard on support.air-watch.com. This product

More information

MySQL Group Replication. Bogdan Kecman MySQL Principal Technical Engineer

MySQL Group Replication. Bogdan Kecman MySQL Principal Technical Engineer MySQL Group Replication Bogdan Kecman MySQL Principal Technical Engineer Bogdan.Kecman@oracle.com 1 Safe Harbor Statement The following is intended to outline our general product direction. It is intended

More information

Monitoring of large-scale federated data storage: XRootD and beyond.

Monitoring of large-scale federated data storage: XRootD and beyond. Monitoring of large-scale federated data storage: XRootD and beyond. J Andreeva 1, A Beche 1, S Belov 2, D Diguez Arias 1, D Giordano 1, D Oleynik 2, A Petrosyan 2, P Saiz 1, M Tadel 3, D Tuckett 1 and

More information

OpenNebula on VMware: Cloud Reference Architecture

OpenNebula on VMware: Cloud Reference Architecture OpenNebula on VMware: Cloud Reference Architecture Version 1.2, October 2016 Abstract The OpenNebula Cloud Reference Architecture is a blueprint to guide IT architects, consultants, administrators and

More information

vsphere Installation and Setup Update 2 Modified on 10 JULY 2018 VMware vsphere 6.5 VMware ESXi 6.5 vcenter Server 6.5

vsphere Installation and Setup Update 2 Modified on 10 JULY 2018 VMware vsphere 6.5 VMware ESXi 6.5 vcenter Server 6.5 vsphere Installation and Setup Update 2 Modified on 10 JULY 2018 VMware vsphere 6.5 VMware ESXi 6.5 vcenter Server 6.5 You can find the most up-to-date technical documentation on the VMware website at:

More information

The evolving role of Tier2s in ATLAS with the new Computing and Data Distribution model

The evolving role of Tier2s in ATLAS with the new Computing and Data Distribution model Journal of Physics: Conference Series The evolving role of Tier2s in ATLAS with the new Computing and Data Distribution model To cite this article: S González de la Hoz 2012 J. Phys.: Conf. Ser. 396 032050

More information