LINBIT DRBD Proxy Configuration Guide on CentOS 6
Matt Kereczman, Version 1.2


Table of Contents

1. About this guide
2. Two Node Cluster using DRBD Proxy
   2.1. Assumptions
   2.2. Installing DRBD and DRBD Proxy
   2.3. Configure the DRBD Resource using Proxy
   2.4. Create DRBD Metadata
   2.5. Connect the Nodes
   2.6. Start the Initial Sync
   2.7. Mount and Test the DRBD Device
   2.8. Conclusion
3. Three Node Stacked Cluster using DRBD Proxy
   3.1. Assumptions
   3.2. Install DRBD on Charlie
   3.3. Install DRBD Proxy on All Nodes
   3.4. Configuring the Stacked Resource
   3.5. Testing the Disaster Recovery Node
4. Failing Over to a Disaster Recovery Node
   4.1. Install and Configure Pacemaker on DR node
   4.2. Simulate a Failure of Alpha and Bravo
   4.3. Resuming Services on the DR Node
5. Failback to Primary from DR site
   5.1. Starting Alpha and Bravo
   5.2. Manually Start and Connect DRBD
   5.3. Resume Services on HA cluster
6. Tuning DRBD Proxy
   6.1. Determining a Proxy Buffer Size
   6.2. Configuring ko-count and timeout
   6.3. Configuring Congestion Policies
   6.4. Configure Compression with DRBD Proxy
   6.5. Speed up Initial Connection
   6.6. Example configurations
7. Conclusion
8. Feedback
9. Additional information and resources

1. About this guide

This guide covers two common DRBD Proxy implementations. The first method uses DRBD Proxy in a newly configured two-node DRBD cluster to replicate data to a peer over a high latency WAN connection; this method is common for customers looking for a disaster recovery solution. The second method demonstrates adding a third off-site (stacked) node to a preexisting high availability cluster using DRBD Proxy; this method is very common for customers who are already using DRBD for high availability and would like to add disaster recovery capabilities. We will also cover procedures for failing over and failing back to a DR (Disaster Recovery) site, using our three-node cluster as an example. Finally, we will conclude the guide with a section on how to begin tuning DRBD and DRBD Proxy for different workloads and WAN connections.

2. Two Node Cluster using DRBD Proxy

DRBD Proxy makes data replication over a high latency, low bandwidth WAN connection possible. Without DRBD Proxy (even with DRBD's protocol A), our network throughput and latency will become our disk's throughput and latency. DRBD Proxy uses a configurable amount of system memory to act as a send buffer for our replicated writes; this allows writes to be buffered and flushed to the peer as fast as the replication link allows, without reducing local disk performance.

2.1. Assumptions

This guide assumes the following:

This is a new two node cluster with no preexisting data
Both nodes have a freshly installed CentOS 6.x operating system
The nodes' hostnames are alice and bob
Both nodes have two network interfaces:
  eth0 on alice has an IP address of:
  eth1 on alice has an IP address of:
  eth0 on bob has an IP address of:
  eth1 on bob has an IP address of:
Both nodes have separate storage for their root filesystem and DRBD replicated data
  /dev/vda holds the root filesystem on both nodes
  /dev/vdb is a freshly installed drive with no existing data that will be used for DRBD replicated data
Both nodes have their firewalls configured to allow traffic on TCP ports 7788, 7789, and 7790

2.2. Installing DRBD and DRBD Proxy

Configure DRBD's Yum Repository

We will begin our installation by setting up LINBIT's yum repositories. As a customer, we have access to all of LINBIT's certified binaries using the distribution's package manager or direct download through LINBIT's customer portal. If you are not a customer, 30-day trial licenses are available; visit linbit.com for contact information.

To enable LINBIT's repositories, create /etc/yum.repos.d/linbit.repo with the following contents, replacing <HASH> with the custom hash value received from LINBIT.

[LINBIT-DRBD]
baseurl=
gpgcheck=1

[LINBIT-DRBD-Proxy]
baseurl=
gpgcheck=1

Before installing packages from LINBIT's repositories, we must import the public key used to sign LINBIT's RPM packages. Use the following commands to do so:

# wget
# rpm --import gpg-pubkey-53b3b b6e23.asc
# rm gpg-pubkey-53b3b b6e23.asc

DRBD and DRBD Proxy Package Installation

Use the commands below to install the drbd, kmod-drbd, and drbd-proxy-3.0 packages, load DRBD's kernel module, and ensure DRBD Proxy starts at boot on both nodes.

# yum install drbd kmod-drbd drbd-proxy-3.0
# modprobe drbd
# chkconfig drbdproxy on

At the time of writing this document, choosing the drbd-proxy-3.0 package will install the latest version of DRBD Proxy 3.x. This naming convention may change in future releases to be more precise. We can use yum to search for all currently available DRBD Proxy packages to ensure we are downloading the most current one using the following command:

# yum search drbdproxy

The kmod-drbd package might not be for the exact kernel version we are running. As long as kmod-drbd is the closest version available from LINBIT's repository to the version of our kernel without going over, everything should work as expected. The packages in LINBIT's repository are tested on systems running the latest kernel available from the vendor before being released.

We were provided with a license file for DRBD Proxy. We must copy or move this file into the /etc/ directory, and change its ownership to drbdpxy, before DRBD Proxy will start. Since we are using proxy on both nodes, these steps should be completed on both nodes.

# cp drbd-proxy.license /etc/drbd-proxy.license
# chown drbdpxy: /etc/drbd-proxy.license

For the examples in this guide, we will install DRBD Proxy on the cluster nodes. This is the most common way to install Proxy, but it should be noted that it is possible to install DRBD Proxy on a separate machine outside of the cluster.
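Before defining any resources, it can be worth a quick sanity check on both nodes that the packages, the kernel module, and the proxy license are actually in place; a minimal sketch using standard CentOS 6 tools:

# rpm -q drbd kmod-drbd drbd-proxy-3.0    # all three packages should be installed
# lsmod | grep drbd                       # the drbd kernel module should be loaded
# ls -l /etc/drbd-proxy.license           # license file present and owned by drbdpxy
# chkconfig --list drbdproxy              # drbdproxy enabled for the default runlevels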

2.3. Configure the DRBD Resource using Proxy

Now that we have DRBD and DRBD Proxy installed, we can begin creating our first resource using DRBD Proxy. Our DRBD configuration files will be stored in the /etc/drbd.d/ directory, and have the .res extension. Any resource files within this directory with the .res extension will be activated automatically at boot time. We recommend using naming conventions for consistency between resource configuration files and the resources defined within them. In our example we will define a resource r0 in the resource configuration file r0.res. Matching a resource's name with that of its configuration file is not a requirement. Also, it is possible to define more than one resource in a single resource file, though it is highly recommended to keep these resource files as granular as possible, for simplicity and sanity. The exception to this rule is stacked resources; stacked resource configurations should be placed in the same configuration file as the lower-level resource.

Below is the configuration of the resource r0, which will be stored in the resource configuration file /etc/drbd.d/r0.res. Notice that the proxy on stanza for each node has an inside and an outside interface; since we are running proxy locally on both nodes, the inside interface will be the loopback, while the outside interface is the address ping-able by the peer. The configuration also includes a proxy stanza; inside this stanza is where one would configure the proxy memory buffer and the compression plugin. The example below is using zlib compression at the maximum level of 9, and has a memlimit (send buffer) of 100 megabytes. Selecting an appropriate memlimit depends upon the expected write workloads, the throughput of our connections, and the acceptable maximum amount of data that may be lost in the event of a complete site loss. It is suggested to configure an initial value higher than we think is needed, and dial it back later if monitoring shows it's not needed.

All nodes in the cluster must have the exact same resource configuration files. Any modification made to a resource file needs to be made on both nodes.

resource r0 {
    protocol A;
    device minor 0;
    disk /dev/vdb;
    meta-disk internal;

    proxy {
        memlimit 100M;
        plugin {
            zlib level 9;
        }
    }

    on alice {
        address :7789;
        proxy on alice {
            inside :7788;
            outside :7790;
        }
    }

    on bob {
        address :7789;
        proxy on bob {
            inside :7788;
            outside :7790;
        }
    }
}
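As a rough illustration of how one might reason about memlimit (the figures below are assumptions for the sake of the example, not measurements from this cluster; the tuning section later in this guide covers this in more detail): the buffer has to absorb whatever the application writes faster than the WAN can drain it.

# Assumed figures, for illustration only:
#   largest expected write burst:                  80 MB
#   sustained WAN throughput (after compression):   4 MB/s
#   longest tolerated WAN stall:                    10 s
#
# Data that may pile up in the proxy buffer:
#   80 MB (burst) + 4 MB/s * 10 s (stall) = 120 MB
#
# Rounded up, that suggests starting with something like:
#   proxy {
#       memlimit 128M;
#   }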

2.4. Create DRBD Metadata

Now that the resource has been defined, we need to create the metadata used by DRBD to keep track of the written blocks on both nodes. Since the DRBD backing device, /dev/vdb, contains no preexisting data, it is safe to simply create the metadata, letting DRBD determine the amount of space needed and write it to the end of the disk. Create the metadata by running the following command on both nodes:

# drbdadm create-md r0

On production systems with preexisting data, some steps must be taken to make room at the end of the backing device for DRBD's metadata. There are equations in the DRBD User's Guide for determining exactly how much space is needed for metadata; this depends on the size of our backing disk (DRBD User's Manual). In most practical applications there is a rule of thumb that can be used to determine the required amount of space: 32MB of metadata per 1TB of storage.

2.5. Connect the Nodes

Both nodes have created their metadata and are ready for the initial sync. Before beginning the sync, we need to start the drbdproxy service and bring up r0 on both nodes.

# service drbdproxy start
# drbdadm up r0

We can now check the status of our DRBD device. If all is working as expected, we should see output similar to that below when executing the following commands:

# cat /proc/drbd
version: (api:1/proto:86-101)
GIT-hash: 8ca20467bb9e10aa ac1d65f2b54b4 build by buildsystem@linbit
 0: cs:Connected ro:Secondary/Secondary ds:Inconsistent/Inconsistent A r-----
    ns:0 nr:0 dw:0 dr:0 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:

# drbd-proxy-ctl -c 'show hconnections'
Name          Status  LAN   WAN   Compr  Up since
r0-bob-alice  Up      1.0K  3.6K         Fri Jun 27 14:10:
connection(s) listed.

If we see cs:WFConnection instead of the expected cs:Connected, be sure the firewall is not blocking port 7790. DRBD Proxy can take a short period of time to establish a connection depending on the replication link. We can use the command watch -n1 cat /proc/drbd to monitor the status of the connection state.
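If the resources sit in WFConnection and a host firewall is the suspect, the ports used in our configuration (7788 through 7790) can be opened with iptables; a minimal sketch for CentOS 6, assuming the stock iptables service is in use and this matches your security policy:

# iptables -I INPUT -p tcp -m tcp --dport 7788:7790 -j ACCEPT
# service iptables save    # persist the rule so it survives a reboot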

2.6. Start the Initial Sync

If our devices are in the Connected state, we can now choose a node to become Primary, enabling us to use our DRBD device. Since this guide assumes there is no preexisting data that needs preservation, this choice is arbitrary. I will choose alice as the Primary in my cluster with the following command:

In a production environment where data preservation is required, the node where the data currently exists MUST be selected as the Primary to avoid data loss.

# drbdadm primary --force r0

That will start the initial sync. The initial sync is a full sync and can take a long time depending on the size of our device. We can view the rate and progress of the sync by using the watch -n1 cat /proc/drbd command (here done on the Secondary):

version: (api:1/proto:86-101)
GIT-hash: 8ca20467bb9e10aa ac1d65f2b54b4 build by buildsystem@linbit
 0: cs:SyncTarget ro:Secondary/Primary ds:Inconsistent/UpToDate A r-----
    ns:0 nr: dw: dr:0 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:
    [===>...] sync'ed: 20.8% ( / )K
    finish: 0:01:26 speed: 19,144 (13,100) want: 16,800 K/sec

The device can be used on the Primary node during all sync processes. By default, DRBD dynamically throttles the IO available to the sync process in preference of incoming application IO.

We do not need to wait for the initial sync to complete to proceed - on the contrary, if the mkfs tool that's being used sends "Trim" (alternatively called "Discard") requests when creating the filesystem, DRBD can use that information to shorten the initial sync! So, the ext4 filesystem being created later on will be synchronized to the remote within a very short time - nearly independent of the filesystem size.

We can also watch the DRBD Proxy buffer fill and flush using the watch -n1 "drbd-proxy-ctl -c 'show memusage'" command on the Primary node:

b-alice 38 of 104 MiB used, 0 persistent
  normal [***************...]  / bytes
  prio   [...] 1024/ bytes

2.7. Mount and Test the DRBD Device

We now have a usable DRBD device. This section of the guide will have us format and mount the new device to run some tests verifying the cluster is performing as expected.

Mount the DRBD Device

On the Primary node we should see that there is a new device, /dev/drbd0. To use this new device, we will format it as ext4, create a mount-point, and mount the DRBD device to it:

We would not want to create a filesystem on a device that already contains data. mkfs.ext4 should only be run on new, previously empty, resources.

# mkfs.ext4 /dev/drbd0
# mkdir /mnt/drbd_mount
# mount -t ext4 -o noatime,nodiratime,discard /dev/drbd0 /mnt/drbd_mount

Running the command mount should now show that /dev/drbd0 is mounted on /mnt/drbd_mount as a read/write ext4 filesystem.

# mount
< snipped output >
/dev/drbd0 on /mnt/drbd_mount type ext4 (rw,noatime,nodiratime,discard)

Be sure to create the mountpoint on both nodes.

Test Write Speeds

To test that the new resource is writing at speeds roughly equal (less than 3% slower) to that of its backing device, we can perform a write test with the DRBD devices connected, then again while disconnected. This will prove that the DRBD device is not being affected by the latency/throughput of our replication link. Run the command below to write a 100MB file to the mounted DRBD device, and note the rate that is output when the dd command completes:

# dd if=/dev/zero of=/mnt/drbd_mount/dd_test_file bs=1M count=100 oflag=direct
records in
records out
bytes (105 MB) copied, s, 43.5 MB/s

Now we will disconnect the DRBD device from the Primary node, putting it into the StandAlone state. This will write directly to the DRBD device, without replicating any data over DRBD Proxy, and we should see that we are achieving the same speed as we were when the devices were connected.

# drbdadm disconnect r0
# dd if=/dev/zero of=/mnt/drbd_mount/dd_test_file bs=1M count=100 oflag=direct
records in
records out
bytes (105 MB) copied, s, 46.4 MB/s

The systems used to write this guide are Virtual Machines with limited resources on shared hardware. Performance on physical hardware should be much better.

The above demonstrates that our network speed is not limiting our disk speed. Reconnect the device using the following command on the Primary node:

# drbdadm connect r0

2.8. Conclusion

With a few easy steps we've set up a data replication solution for disaster recovery. As a final tip: repeat the command from above to see whether DRBD Proxy is really compressing the data (which should be very efficient, as only NULL bytes have been written by dd):

# drbd-proxy-ctl -c 'show hconnections'
Name          Status  LAN     WAN    Compr  Up since
r0-bob-alice  Up      204.2M  82.1K         Fri Jun 27 14:40:
connection(s) listed.
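As a quick sanity check on those counters, the ratio of the LAN column to the WAN column gives the effective compression achieved by the zlib plugin (using the figures shown above):

#   LAN (data handed to the proxy):  204.2 MB  =  roughly 209,100 KB
#   WAN (data sent over the link):   82.1 KB
#
#   209,100 KB / 82.1 KB  =  roughly 2,500 : 1
#
# A ratio this extreme is only expected for highly compressible data such as the
# NULL bytes written by dd; real application data will compress far less.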

3. Three Node Stacked Cluster using DRBD Proxy

This section of the guide covers adding a third (disaster recovery) node to an already existing high availability cluster. This is a common procedure, but it needs to be executed carefully as there is preexisting data that needs to remain intact. Also, in order to add DRBD Proxy to a preexisting HA cluster, there will be a brief period of downtime when we reconfigure HA services to use the new stacked resource. It is recommended to perform dry-runs of this procedure in test environments (when possible) to limit the amount of downtime required to make the necessary changes.

3.1. Assumptions

This guide assumes the following:

All systems are running CentOS 6.x
All systems have a separate device for DRBD replicated storage
The existence of a local high availability NFS cluster
One DRBD resource, formatted ext4, mounted to /mnt/drbd_data/, exported via NFS
The cluster contains data which must be preserved
Hostnames: Alpha and Bravo
Alpha and Bravo have a service IP of /24 for NFS
DRBD 8.4.4, Pacemaker, and Heartbeat are already installed and configured
A third, remote node named Charlie is to be added to the cluster as a disaster recovery node
Charlie has a new drive /dev/vdb that will be used for the replicated data
All three nodes use LVM as their DRBD backing devices, and have room available to grow
All nodes have their firewalls configured to allow cluster related traffic
All nodes have nfs-utils installed
The primary site and DR site are connected via a Layer 2 WAN link

3.2. Install DRBD on Charlie

Configure LINBIT's Yum Repository

To enable LINBIT's repositories, create /etc/yum.repos.d/linbit.repo with the following contents, replacing <HASH> with the custom hash value we received from LINBIT.

[LINBIT-DRBD]
baseurl=
gpgcheck=1

[LINBIT-DRBD-Proxy]
baseurl=
gpgcheck=1

[LINBIT-Pacemaker]
baseurl=
gpgcheck=1

Before installing packages from LINBIT's repositories, we must import the public key used to sign LINBIT's RPM packages. Use the following commands to do so:

# wget
# rpm --import gpg-pubkey-53b3b b6e23.asc
# rm gpg-pubkey-53b3b b6e23.asc

Install the drbd and kmod-drbd packages from the LINBIT repositories, load the kernel module, and make sure DRBD is set to start at boot.

# yum install drbd kmod-drbd
# modprobe drbd
# chkconfig drbd on

3.3. Install DRBD Proxy on All Nodes

Install DRBD Proxy on all three nodes using yum, and ensure it is set to start at boot time only on charlie; we will use Pacemaker to start and stop DRBD Proxy on alpha and bravo.

On all three nodes:

# yum install drbd-proxy-3.0
# chkconfig drbdproxy off

Then on charlie only:

# chkconfig drbdproxy on

We were provided with a license file for DRBD Proxy. We must copy or move this file into the /etc/ directory, and change its ownership to drbdpxy, before DRBD Proxy will start. Since we are using proxy on all three nodes, these steps should be completed on all three nodes.

# cp drbd-proxy.license /etc/drbd-proxy.license
# chown drbdpxy: /etc/drbd-proxy.license

3.4. Configuring the Stacked Resource

Three node clusters require resource stacking. Resource stacking is exactly what it sounds like: creating a DRBD resource on top of another DRBD resource. When adding a third node to a previously existing two node cluster, we will configure a new resource that uses the already existing DRBD device as its backing disk. We will need to stop services using the existing DRBD device while configuring our stacked resource, as we never want to access the backing disk of a DRBD resource [1: Because DRBD wouldn't see those data changes, and so couldn't replicate them to the other node.]. Once our configuration is in place, we will need to reconfigure our services to use the new stacked resource before starting them, ensuring changes are replicated to all three nodes.

In our example, we are adding a DR node to an already existing HA NFS cluster managed by Pacemaker. In order to make the needed changes, we will need to stop all of the services that require the DRBD device or filesystems on top of it, leaving only the Master/Slave set for the DRBD device running; then put the cluster into maintenance mode, so it doesn't attempt to manage any cluster services until we're done.
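Before stopping anything in Pacemaker, it can be helpful to confirm exactly what is currently using the DRBD device, so nothing is left writing to it (or to its backing disk) while we reconfigure. A minimal check on the current Primary, assuming the filesystem is mounted at /mnt/drbd_data:

# cat /proc/mounts | grep drbd    # confirm /dev/drbd0 is what is mounted at /mnt/drbd_data
# fuser -vm /mnt/drbd_data        # list the processes holding the mountpoint open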

Putting Pacemaker into Maintenance Mode

In our simple NFS cluster comprised of alpha and bravo, the crm_mon utility shows that the group, g_nfs, contains all our clustered services:

Last updated: Fri Jun 27 10:48:
Last change: Fri Jun 27 16:11: via crm_attribute on alpha
Stack: heartbeat
Current DC: bravo (8001d23c-c0ef-462e-82b3-c983967df5b3) - partition with quorum
Version: hg bf8451.el6-9bf
Nodes configured, unknown expected votes
6 Resources configured.

Online: [ alpha bravo ]

Resource Group: g_nfs
    p_fs_drbd        (ocf::heartbeat:Filesystem):  Started alpha
    p_lsb_nfsserver  (lsb:nfs):                    Started alpha
    p_exportfs_drbd  (ocf::heartbeat:exportfs):    Started alpha
    p_virtip_nfs     (ocf::heartbeat:IPaddr2):     Started alpha
Master/Slave Set: ms_drbd_r0 [p_drbd_r0]
    Masters: [ alpha ]
    Slaves: [ bravo ]

Running crm configure show shows us the currently running cluster configuration. Our example configuration looks like this:

node $id="32e7f01c a34-97e6-efec5733ed1d" alpha
node $id="8001d23c-c0ef-462e-82b3-c983967df5b3" bravo
primitive p_drbd_r0 ocf:linbit:drbd \
        params drbd_resource="r0" \
        op monitor interval="29" role="Master" \
        op monitor interval="30" role="Slave"
primitive p_exportfs_drbd ocf:heartbeat:exportfs \
        params fsid="1" directory="/mnt/drbd_data" \
        options="rw,sync,mountpoint" clientspec=" / " \
        wait_for_leasetime_on_stop="false" \
        unlock_on_stop="true" \
        op monitor interval="30s"
primitive p_fs_drbd ocf:heartbeat:Filesystem \
        params device="/dev/drbd0" directory="/mnt/drbd_data" \
        options="noatime,nodiratime,discard" fstype="ext4" \
        op monitor interval="10s"
primitive p_lsb_nfsserver lsb:nfs \
        op monitor interval="30s"
primitive p_virtip_nfs ocf:heartbeat:IPaddr2 \
        params ip=" " cidr_netmask="24" \
        op monitor interval="10s" timeout="20s"
group g_nfs p_fs_drbd p_lsb_nfsserver p_exportfs_drbd p_virtip_nfs
ms ms_drbd_r0 p_drbd_r0 \
        meta master-max="1" master-node-max="1" clone-max="2" \
        clone-node-max="1" notify="true"
colocation c_nfs_with_drbd inf: g_nfs ms_drbd_r0:Master
order o_drbd_before_nfs inf: ms_drbd_r0:promote g_nfs:start
property $id="cib-bootstrap-options" \
        stonith-enabled="false" \
        no-quorum-policy="ignore" \
        dc-version=" c7312c689715e096b716419e2ebc12b " \
        cluster-infrastructure="heartbeat" \
        last-lrm-refresh=" "
rsc_defaults $id="rsc-options" \
        resource-stickiness="200"

In the example Pacemaker configuration above, there are no STONITH devices configured and the stonith-enabled cluster property is disabled. However, in a production cluster, STONITH should always be configured and enabled.

Looking at the configuration we can confirm that the group g_nfs holds all our cluster services, including our filesystem, so that is what we must stop within Pacemaker. To stop the group, issue the command below at a shell prompt on either cluster node:

# crm resource stop g_nfs

Put the cluster into maintenance mode by issuing the command below at a shell prompt on either cluster node:

# crm configure property maintenance-mode=true
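Before touching the disks, it is worth double-checking that the group really stopped and that the maintenance-mode property took effect; one way to do that from either node:

# crm resource status g_nfs                   # the group's resources should be reported as stopped
# crm configure show | grep maintenance-mode  # should show maintenance-mode=true
# crm_mon -1                                  # one-shot cluster status for a final visual check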

Preparing Disks for the Stacked Resource's Meta-data

Since we are stacking a new DRBD device on top of an existing one, we need to make room for the stacked device's metadata on alpha and bravo, then create a logical volume of the same size on charlie.

On alpha and bravo we need to extend the logical volume enough to fit the new metadata. Though there are equations to determine the exact number of sectors needed for metadata, in most practical implementations it is satisfactory to use the rule of thumb: 32MB of metadata per 1TB on the backing disk. This rule scales 1:1; 32KB per 1GB. Please remember to always round the size up, never down.

After extending the logical volume on both alpha and bravo, we need to resize the DRBD device we will be stacking on top of, so that the space for the new metadata is replicated between the lower level DRBD devices. We must use internal metadata with the stacked resource since we must have the metadata replicated: if the resource switches from one HA node to the other, the new Primary needs to know which data hasn't reached the third node yet.

In our example, we are using a logical volume named drbd_data in the volume group VolGroup00. VolGroup00 has 1GB free. drbd_data is currently 1GB in size, requiring us to grow the logical volume by only 32KB. We will grow the logical volume by 1 physical extent (4MB) on alpha and bravo to accommodate the metadata using the following command:

# lvextend -l +1 /dev/VolGroup00/drbd_data

Then, on the current Secondary node, we resize the DRBD resource using the following command:

# drbdadm resize r0

We need to know the new number of extents used by drbd_data on alpha and bravo, so we can create a logical volume of the same size on charlie. To do this we can run the following command:

# lvdisplay /dev/VolGroup00/drbd_data | grep "Current LE"
  Current LE              257

We now know that we need to create an LV on charlie that is 257 extents in size. We will create VolGroup00 on the new disk /dev/vdb on charlie, as well as the appropriately sized drbd_data logical volume, using the following commands:

# pvcreate /dev/vdb
# vgcreate VolGroup00 /dev/vdb
# lvcreate -n drbd_data -l 257 VolGroup00
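Because the stacked resource needs identically sized backing devices on all three nodes, comparing the logical volumes before creating any metadata can save a failed create-md later; for example, run the following on alpha, bravo, and charlie and make sure the extent counts match (257 in our example):

# lvdisplay /dev/VolGroup00/drbd_data | grep "Current LE"
# lvs --units k VolGroup00/drbd_data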

Configure the Stacked DRBD Resource File

For reference, here is the previously configured DRBD resource r0, defined in the file r0.res, which we will be stacking on top of:

resource r0 {
    protocol C;
    device /dev/drbd0;
    meta-disk internal;

    on alpha {
        disk /dev/VolGroup00/drbd_data;
        address :7788;
    }

    on bravo {
        disk /dev/VolGroup00/drbd_data;
        address :7788;
    }
}

Convention is to define the new stacked resource with -u appended to the name of the resource we will stack on top of [2: -u for upper.]. We will define the new stacked resource r0-u within the same file as the lower level resource [3: since we are stacking on top of this resource, we will now refer to r0 as the lower level resource] r0, defined in the r0.res resource file. Following this convention will keep things as simple as possible and becomes more important when there are many resources. The contents of r0.res will now look like this:

resource r0 {
    protocol C;
    device /dev/drbd0;
    meta-disk internal;

    on alpha {
        disk /dev/VolGroup00/drbd_data;
        address :7788;
    }

    on bravo {
        disk /dev/VolGroup00/drbd_data;
        address :7788;
    }
}

resource r0-u {
    protocol A;
    device /dev/drbd10;
    meta-disk internal;

    proxy {
        memlimit 100M;
    }

    stacked-on-top-of r0 {
        address :7789;
        proxy on alpha bravo {
            inside :7799;
            outside :7779;
        }
    }

    on charlie {
        disk /dev/VolGroup00/drbd_data;
        address :7789;
        proxy on charlie {
            inside :7799;
            outside :7779;
        }
    }
}

Looking at the new configuration we can see that our new stacked device will take the device name /dev/drbd10, while our original device is still /dev/drbd0. These minor numbers can be whatever we would like; however, for manageability, we should follow the convention of using a separate number range, e.g. an unused tens or hundreds range above all the non-stacked resources.

Create Metadata for the Stacked Device

Now that we've made room for the stacked device's metadata and configured the DRBD resource, we can create the metadata. On the node where the lower level device is Primary, we can run the following commands to initialize the resource:

# drbdadm create-md --stacked r0-u
# drbdadm up --stacked r0-u
# service drbdproxy start

Since we've made space for the new metadata in previous steps, there should be no errors when issuing the drbdadm create-md --stacked r0-u command. If we receive a message about overwriting existing data, we'll need to ensure we've made enough room for the new metadata before proceeding; failing to do so could result in overwriting our production data.
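At this point the stacked resource exists only at the HA site, so it has no peer to talk to yet. A few quick checks on the node where we just brought it up (mirroring the option placement used above; exact flag ordering can vary between drbdadm versions):

# drbdadm role --stacked r0-u              # likely Secondary/Unknown for now
# drbdadm cstate --stacked r0-u            # likely WFConnection until charlie is up
# drbd-proxy-ctl -c 'show hconnections'    # the r0-u proxy connection should be listed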

On charlie, we will run the following commands:

# drbdadm create-md r0-u
# drbdadm up r0-u
# service drbdproxy start

Now, if we cat /proc/drbd on charlie we should see something like this:

version: (api:1/proto:86-101)
GIT-hash: 8ca20467bb9e10aa ac1d65f2b54b4 build by buildsystem@linbit
10: cs:WFConnection ro:Secondary/Unknown ds:Inconsistent/DUnknown A r----s
    ns:0 nr:0 dw:0 dr:0 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:

On the lower level Primary there should be a similar output, with lines for the minor device 0 added. The resource will remain in the WFConnection connection state until we resume the virtual IP in Pacemaker that we stopped earlier.

Before we can reconfigure and resume services in Pacemaker using the new stacked resource, we need to first promote the resource to Primary on alpha or bravo, whichever is currently the lower level Primary with the stacked resource running. Run the drbdadm primary command using the --stacked and --force directives as shown below to accomplish this:

# drbdadm primary --stacked r0-u --force

We should now see an UpToDate disk state on the peer selected as the sync source in the command above.

Modify the Pacemaker Configuration and Restart Services

We must now configure Pacemaker to manage the stacked resource. This involves adding a new Master/Slave definition, as well as modifying the current filesystem definition to mount the stacked resource instead of the lower level DRBD resource. Failing to modify the current filesystem definition will result in no data replication to the new peer, as we would be writing to the stacked resource's backing disk. We also need to configure the DRBD Proxy resource.

Depending upon the network in a particular environment, it may be necessary to add another floating IP via an IPaddr2 resource in order to connect to the disaster recovery node. However, this is outside the scope of this document.

To add the new DRBD and DRBD Proxy resources to the current Pacemaker configuration, run the command crm configure to drop into a crm shell, then define the new primitives, master/slave resource, colocation, and ordering constraint:

# crm configure
crm(live)configure# primitive p_drbd_r0-u ocf:linbit:drbd \
        params drbd_resource="r0-u" \
        op monitor interval="31s" role="Master" \
        op monitor interval="29s" role="Slave"
crm(live)configure# ms ms_drbd_r0-u p_drbd_r0-u \
        meta master-max="1" master-node-max="1" clone-max="1" \
        clone-node-max="1" notify="true"
crm(live)configure# primitive p_lsb_proxy lsb:drbdproxy \
        op monitor interval="30s" timeout="30s"
crm(live)configure# colocation c_drbd0_upper_with_lower \
        inf: ms_drbd_r0-u ms_drbd_r0:Master
crm(live)configure# order o_drbd_r0_lower_before_upper \
        inf: ms_drbd_r0:promote ms_drbd_r0-u:start

Now we need to edit the filesystem definition so that we're mounting the stacked DRBD device instead of the lower level device. We also need to modify the colocation and ordering constraints for the NFS resources so they start with the stacked DRBD device, and start DRBD Proxy after the virtual IP resource. To do that, we will drop into edit mode by entering the edit command from the crm configure prompt.

crm(live)configure# edit

This will drop us into vi, where we can modify the filesystem definition, colocation constraint, and ordering constraint so that they look like this:

primitive p_fs_drbd ocf:heartbeat:Filesystem \
        params device="/dev/drbd10" directory="/mnt/drbd_data" \
        options="noatime,nodiratime,discard" fstype="ext4" \
        op monitor interval="10s"
group g_nfs p_fs_drbd p_lsb_nfsserver p_exportfs_drbd p_virtip_nfs p_lsb_proxy
colocation c_nfs_with_drbd inf: g_nfs ms_drbd_r0-u:Master
order o_drbd_before_nfs inf: ms_drbd_r0-u:promote g_nfs:start

Write the changes to the Pacemaker configuration and quit vi. Use the show command from the crm configure prompt to show the current configuration. It should look similar to this:

node $id="32e7f01c a34-97e6-efec5733ed1d" alpha
node $id="8001d23c-c0ef-462e-82b3-c983967df5b3" bravo
primitive p_drbd_r0 ocf:linbit:drbd \
        params drbd_resource="r0" \
        op monitor interval="29" role="Master" \
        op monitor interval="30" role="Slave"
primitive p_drbd_r0-u ocf:linbit:drbd \
        params drbd_resource="r0-u" \
        op monitor interval="31s" role="Master" \
        op monitor interval="29s" role="Slave"
primitive p_exportfs_drbd ocf:heartbeat:exportfs \
        params fsid="1" directory="/mnt/drbd_data" \
        options="rw,sync,mountpoint" \
        clientspec=" / " \
        wait_for_leasetime_on_stop="false" unlock_on_stop="true" \
        op monitor interval="30s"
primitive p_fs_drbd ocf:heartbeat:Filesystem \
        params device="/dev/drbd10" directory="/mnt/drbd_data" \
        options="noatime,nodiratime,discard" fstype="ext4" \
        op monitor interval="10s"
primitive p_lsb_nfsserver lsb:nfs \
        op monitor interval="30s"
primitive p_lsb_proxy lsb:drbdproxy \
        op monitor interval="30s" timeout="30s"
primitive p_virtip_nfs ocf:heartbeat:IPaddr2 \
        params ip=" " cidr_netmask="24" \
        op monitor interval="10s" timeout="20s"
group g_nfs p_fs_drbd p_lsb_nfsserver p_exportfs_drbd p_virtip_nfs p_lsb_proxy
ms ms_drbd_r0 p_drbd_r0 \
        meta master-max="1" master-node-max="1" clone-max="2" \
        clone-node-max="1" notify="true"
ms ms_drbd_r0-u p_drbd_r0-u \
        meta master-max="1" master-node-max="1" clone-max="1" \
        clone-node-max="1" notify="true"
colocation c_drbd0_upper_with_lower inf: ms_drbd_r0-u ms_drbd_r0:Master
colocation c_nfs_with_drbd inf: g_nfs ms_drbd_r0-u:Master
order o_drbd_before_nfs inf: ms_drbd_r0-u:promote g_nfs:start
order o_drbd_r0_lower_before_upper inf: ms_drbd_r0:promote ms_drbd_r0-u:start
property $id="cib-bootstrap-options" \
        stonith-enabled="false" \
        no-quorum-policy="ignore" \
        dc-version=" c7312c689715e096b716419e2ebc12b " \
        cluster-infrastructure="heartbeat" \
        last-lrm-refresh=" "
rsc_defaults $id="rsc-options" \
        resource-stickiness="200"

Once we have verified everything looks correct, we can bring the cluster out of maintenance mode and commit our changes:

crm(live)configure# property maintenance-mode=false
crm(live)configure# commit
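Before (or right after) committing a change of this size, it can be worth having Pacemaker validate the configuration itself; crm_verify checks the live CIB and reports constraint or syntax problems without changing anything:

# crm_verify -L -V    # -L checks the live cluster configuration, -V increases verbosity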

Running the crm_mon command should now show us that the new Master/Slave Set, the DRBD Proxy resource, as well as all the previously configured services, have started. It should now look similar to this:

Last updated: Fri Jun 27 08:35:
Last change: Fri Jun 27 16:34: via crmd on bravo
Stack: heartbeat
Current DC: bravo (8001d23c-c0ef-462e-82b3-c983967df5b3) - partition with quorum
Version: hg bf8451.el6-9bf
Nodes configured, unknown expected votes
7 Resources configured.

Online: [ alpha bravo ]

Resource Group: g_nfs
    p_fs_drbd        (ocf::heartbeat:Filesystem):  Started alpha
    p_lsb_nfsserver  (lsb:nfs):                    Started alpha
    p_exportfs_drbd  (ocf::heartbeat:exportfs):    Started alpha
    p_virtip_nfs     (ocf::heartbeat:IPaddr2):     Started alpha
    p_lsb_proxy      (lsb:drbdproxy):              Started alpha
Master/Slave Set: ms_drbd_r0 [p_drbd_r0]
    Masters: [ alpha ]
    Slaves: [ bravo ]
Master/Slave Set: ms_drbd_r0-u [p_drbd_r0-u]
    Masters: [ alpha ]

Now that the virtual IP is running, DRBD Proxy should be connected, or trying to connect. We can check the status of DRBD by looking at the contents of /proc/drbd. Once Proxy has established a connection, the full synchronization will begin. We can monitor this process using watch -n1 cat /proc/drbd:

Every 1.0s: cat /proc/drbd

version: (api:1/proto:86-101)
GIT-hash: 8ca20467bb9e10aa ac1d65f2b54b4 build by buildsystem@linbit
10: cs:SyncTarget ro:Secondary/Primary ds:Inconsistent/UpToDate A r-----
    ns:0 nr: dw: dr:0 al:0 bm:0 lo:0 pe:5 ua:0 ap:0 ep:1 wo:f oos:98560
    [==============>...] sync'ed: 77.8% (98560/439896)K
    finish: 0:02:53 speed: 560 (540) want: 760 K/sec

3.5. Testing the Disaster Recovery Node

Now that we have our Disaster Recovery node up and connected via DRBD Proxy, we will want to test that it is working as it should. To do this, mount our NFS share on a client machine, and write some test data using the dd utility. We will also need to create a directory inside the mountpoint where nfsnobody has write access (assuming we run the tests from the client as root), since root_squash is a default NFS option. While we are performing these write tests, we can monitor proxy on the node that is currently the Primary for the stacked resource, using drbd-proxy-ctl -c "show memusage" with the watch utility. Use the commands below to accomplish this.

On the current Primary for the stacked resource:

# mkdir /mnt/drbd_data/world
# chmod 777 /mnt/drbd_data/world
# watch -n1 'drbd-proxy-ctl -c "show memusage"'

The output of the above command will update every second, and will look similar to the output below when there is no activity on the device. This is what an empty DRBD Proxy buffer looks like:

r0-u-charlie-alpha_bravo 0 of 104 MiB used, 0 persistent
  normal [...] 1024/ bytes
  prio   [...] 1024/ bytes

Leave the watch command running in one terminal. Open a new terminal on a client system or the Secondary cluster node and mount the NFS share. Then write some test data to the mount point using dd:

# mkdir /mnt/abc_proxy
# mount -t nfs :/mnt/drbd_data /mnt/abc_proxy
# dd if=/dev/zero of=/mnt/abc_proxy/world/dd_test_file bs=1M count=50 oflag=direct
50+0 records in
50+0 records out
bytes (52 MB) copied, s, 15.6 MB/s

We should see that our test file was written as quickly as our disks would allow, and was not limited by the throughput of our replication link. Without using DRBD Proxy, we would see our disk throughput drop to match that of our WAN's network throughput. On our test systems, the WAN replication link is 1MB/s, but our disk throughput is still 15.6MB/s.

Perform the test again, this time paying attention to the watch command running in the Primary node's terminal. Notice the memory buffer fill and flush after the dd has already finished. This is what a partially full DRBD Proxy buffer looks like:

r0-u-charlie-alpha_bravo 62 of 104 MiB used, 0 persistent
  normal [************************...]  / bytes
  prio   [...] 1024/ bytes

4. Failing Over to a Disaster Recovery Node

Making the decision to failover to a disaster recovery node is something that we recommend be performed manually. Typically, we would not want to use automated logic to determine when to perform failover because of the inherent unreliability of most WAN links. Consider the following: a routing issue temporarily affects the route between our two data centers, interrupting the communications between the Primary and Secondary node. The Primary node is still being accessed by systems taking a different network path than the replication/monitoring traffic. Since the Secondary can't reach the Primary, it believes the primary site is down, so it becomes Primary as well, creating a split-brain. Using multiple out-of-band links between the two sites can overcome such issues, but it is typically still recommended to use manual failover; it is best practice to have a human confirm that a site is in fact down before failing over services.

Since we are using Pacemaker to manage clustered services in the local HA cluster, we can easily configure a standalone Pacemaker cluster at the DR site to make bringing up services as simple as issuing a single command. We will outline the general steps to accomplish such a configuration in the sections following this introduction.

4.1. Install and Configure Pacemaker on DR node

Since we've already configured yum to pull from LINBIT's Pacemaker repositories, we can run the following commands to install Pacemaker, Heartbeat, and the CRM shell:

# yum install pacemaker-linbit pacemaker-linbit-crmsh heartbeat

Once the installation completes, we will need to configure Heartbeat. Create the file /etc/ha.d/ha.cf with the following contents on charlie:

udpport 694
mcast eth
autojoin none
warntime 5
deadtime 15
initdead 60
keepalive 2
node charlie
pacemaker respawn

Configure the authentication key for charlie by creating the file /etc/ha.d/authkeys with the following contents:

auth 1
1 sha1 ThisIsWhereWePutOurSuperSecretKeyFile

Set the authkeys file's permissions so only root has read and write access, and start Heartbeat and Pacemaker:

# chmod 600 /etc/ha.d/authkeys
# /etc/init.d/heartbeat start

After a few moments, our crm_mon should look like this:

Last updated: Fri Jun 27 10:20:
Last change: Fri Jun 27 09:16: via crmd on charlie
Stack: heartbeat
Current DC: charlie (00212d60-e5e b55-48a3bf685ba8) - partition with quorum
Version: hg bf8451.el6-9bf
Nodes configured
0 Resources configured

Online: [ charlie ]

Now we can begin preparing our standalone Pacemaker configuration for use as our HA cluster's DR solution. We will start by disabling STONITH and placing the cluster into maintenance-mode by issuing the following commands:

# crm configure property stonith-enabled="false"
# crm configure property maintenance-mode="true"

We must set stonith-enabled="false" in a standalone Pacemaker configuration, as there will be no peer. We set maintenance-mode="true" so that services do not start until we want them to; we will set maintenance-mode="false" to start all services on the DR node in the event that we need to failover to the DR site.

Configuring Pacemaker from this point forward should be as simple as copy-and-pasting the configuration from our HA cluster's Pacemaker configuration. We will omit anything that does not apply to the DR site; in our example this includes the lower-level DRBD resource, the master/slave definition for the lower-level DRBD resource, the DRBD Proxy service, and all colocation constraints. We must also remove or edit any references to the removed primitives from all

resource groups, ordering constraints, etc. Use the command crm configure edit to modify the Pacemaker configuration, and paste the standalone configuration below into the editor beneath the configuration already present. Once everything is in place, write the configuration and quit.

primitive p_drbd_r0-u ocf:linbit:drbd \
        params drbd_resource="r0-u" \
        op monitor interval="31s" role="Master" \
        op monitor interval="29s" role="Slave" \
        op start interval="0" timeout="240s" \
        op stop interval="0" timeout="100s"
primitive p_exportfs_drbd ocf:heartbeat:exportfs \
        params fsid="1" directory="/mnt/drbd_data" \
        options="rw,sync,mountpoint" clientspec=" / " \
        wait_for_leasetime_on_stop="false" unlock_on_stop="true" \
        op start interval="0" timeout="40s" \
        op stop interval="0" timeout="15s" \
        op monitor interval="30s" timeout="20s"
primitive p_fs_drbd ocf:heartbeat:Filesystem \
        params device="/dev/drbd10" directory="/mnt/drbd_data" fstype="ext4" \
        options="noatime,nodiratime,discard" \
        op start interval="0" timeout="60s" \
        op stop interval="0" timeout="60s" \
        op monitor interval="10s" timeout="40s"
primitive p_lsb_nfsserver lsb:nfs \
        op start interval="0" timeout="30s" \
        op stop interval="0" timeout="30s" \
        op monitor interval="30s" timeout="20s"
primitive p_virtip_nfs ocf:heartbeat:IPaddr2 \
        params ip=" " cidr_netmask="24" \
        op start interval="0" timeout="20s" \
        op stop interval="0" timeout="20s" \
        op monitor interval="10s" timeout="20s"
group g_nfs p_fs_drbd p_lsb_nfsserver p_exportfs_drbd p_virtip_nfs \
        meta target-role="Started"
ms ms_drbd_r0-u p_drbd_r0-u \
        meta master-max="1" master-node-max="1" clone-max="1" clone-node-max="1" notify="true"
order o_drbd_before_nfs inf: ms_drbd_r0-u:promote g_nfs:start

Once we've saved our configuration, our crm_mon output should look similar to this:

Last updated: Fri Jun 27 10:20:
Last change: Fri Jun 27 09:16: via crmd on charlie
Stack: heartbeat
Current DC: charlie (00212d60-e5e b55-48a3bf685ba8) - partition with quorum
Version: hg bf8451.el6-9bf
Nodes configured
5 Resources configured

Online: [ charlie ]

Master/Slave Set: ms_drbd_r0-u [p_drbd_r0-u] (unmanaged)
    p_drbd_r0-u (ocf::linbit:drbd): Slave charlie (unmanaged)

4.2. Simulate a Failure of Alpha and Bravo

To simulate a failure of alpha and bravo, there are many things we can do, including simply pulling the power plugs on our nodes. This section of the guide will walk us through accomplishing that using Linux's /proc/sysrq-trigger. Echoing values to the sysrq-trigger allows us to do a number of useful things on Linux systems; for the purposes of this guide I will cover only one, the poweroff option. Using the command echo o > /proc/sysrq-trigger will immediately power off the node. This is one of the best ways to simulate a failure of the primary site, since the system will not automatically come back online and attempt to resume cluster services.

In a three node cluster similar to the one configured in the last section, we could poweroff the current Primary, which will fail services over to the Secondary. Once services are migrated to the Secondary, poweroff the new Primary, then test our failover procedure to the third (DR) node.

To begin our testing, poweroff the current Primary using the following command:

# echo o > /proc/sysrq-trigger

We can watch the services failover to the Secondary using the crm_mon utility. We should see that all services failover to the Secondary, and DRBD Proxy reconnects the new Primary node to the DR node. On the new Primary, now the last node remaining in the HA cluster, run the same command we ran above. At this point, only the DR node should be responsive. Run the following command to view the status of DRBD on the DR node; it should look similar to the output below:

# cat /proc/drbd
version: (api:1/proto:86-101)
GIT-hash: 8ca20467bb9e10aa ac1d65f2b54b4 build by buildsystem@linbit
10: cs:WFConnection ro:Secondary/Unknown ds:UpToDate/DUnknown A r-----
    ns:0 nr: dw: dr:1689 al:15 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:

4.3. Resuming Services on the DR Node

Since we have a standalone Pacemaker cluster with all our clustered services defined, just waiting in maintenance-mode at the DR site, we simply need to login to charlie and run the following command to resume our services:

# crm configure property maintenance-mode="false"

If we watch the output of crm_mon, we can see our services starting up on the DR node. They should all start within a few seconds, and clients can continue using services on the DR node. The output of crm_mon will show all of the resources started on charlie if everything started successfully.
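A minimal sketch of confirming the failover, from charlie and then from a client; the <service-IP> placeholder below stands for the clustered NFS IP from the assumptions, and the test file is the one written earlier in this guide:

# On charlie:
# cat /proc/drbd     # the stacked minor (10) should now show Primary and UpToDate
# crm_mon -1         # g_nfs and ms_drbd_r0-u should be started on charlie

# From a client, remount the export via the service IP and confirm the test data survived:
# mount -t nfs <service-IP>:/mnt/drbd_data /mnt/abc_proxy
# ls -l /mnt/abc_proxy/world/dd_test_file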


More information

Seltestengine EX200 24q

Seltestengine EX200 24q Seltestengine EX200 24q Number: EX200 Passing Score: 800 Time Limit: 120 min File Version: 22.5 http://www.gratisexam.com/ Red Hat EX200 Red Hat Certified System AdministratorRHCSA Nicely written Questions

More information

Configuring a High Availability Database with QVD. QVD DOCUMENTATION

Configuring a High Availability Database with QVD. QVD DOCUMENTATION Configuring a High Availability Database with QVD QVD DOCUMENTATION September 20, 2018 Configuring a High Availability Database with QVD i Contents I Implementation 1 1 Prerequisites

More information

Actual4Test. Actual4test - actual test exam dumps-pass for IT exams

Actual4Test.   Actual4test - actual test exam dumps-pass for IT exams Actual4Test http://www.actual4test.com Actual4test - actual test exam dumps-pass for IT exams Exam : RH-302 Title : Red Hat Certified Engineer on Redhat Enterprise Linux 5 (Labs) Vendors : RedHat Version

More information

This section describes the procedures needed to add a new disk to a VM. vmkfstools -c 4g /vmfs/volumes/datastore_name/vmname/xxxx.

This section describes the procedures needed to add a new disk to a VM. vmkfstools -c 4g /vmfs/volumes/datastore_name/vmname/xxxx. Adding a New Disk, page 1 Mounting the Replication Set from Disk to tmpfs After Deployment, page 3 Manage Disks to Accommodate Increased Subscriber Load, page 5 Adding a New Disk This section describes

More information

Introduction to Network Operating Systems

Introduction to Network Operating Systems File Systems In a general purpose operating system the local file system provides A naming convention A mechanism for allocating hard disk space to files An method for identifying and retrieving files,

More information

Please choose the best answer. More than one answer might be true, but choose the one that is best.

Please choose the best answer. More than one answer might be true, but choose the one that is best. Introduction to Linux and Unix - endterm Please choose the best answer. More than one answer might be true, but choose the one that is best. SYSTEM STARTUP 1. A hard disk master boot record is located:

More information

Load Balancing Censornet USS Gateway. Deployment Guide v Copyright Loadbalancer.org

Load Balancing Censornet USS Gateway. Deployment Guide v Copyright Loadbalancer.org Load Balancing Censornet USS Gateway Deployment Guide v1.0.0 Copyright Loadbalancer.org Table of Contents 1. About this Guide...3 2. Loadbalancer.org Appliances Supported...3 3. Loadbalancer.org Software

More information

If you had a freshly generated image from an LCI instructor, make sure to set the hostnames again:

If you had a freshly generated image from an LCI instructor, make sure to set the hostnames again: Storage Node Setup A storage node (or system as your scale) is a very important unit for an HPC cluster. The computation is often about the data it produces and keeping that data safe is important. Safe

More information

The Google File System

The Google File System The Google File System Sanjay Ghemawat, Howard Gobioff, and Shun-Tak Leung December 2003 ACM symposium on Operating systems principles Publisher: ACM Nov. 26, 2008 OUTLINE INTRODUCTION DESIGN OVERVIEW

More information

SAP HANA Disaster Recovery with Asynchronous Storage Replication

SAP HANA Disaster Recovery with Asynchronous Storage Replication Technical Report SAP HANA Disaster Recovery with Asynchronous Storage Replication Using the Snap Creator SAP HANA Plug-in Nils Bauer, Bernd Herth, NetApp October 2016 TR-4279 Abstract This document provides

More information

vsphere Replication for Disaster Recovery to Cloud

vsphere Replication for Disaster Recovery to Cloud vsphere Replication for Disaster Recovery to Cloud vsphere Replication 6.0 This document supports the version of each product listed and supports all subsequent versions until the document is replaced

More information

Linux Command Line Primer. By: Scott Marshall

Linux Command Line Primer. By: Scott Marshall Linux Command Line Primer By: Scott Marshall Draft: 10/21/2007 Table of Contents Topic Page(s) Preface 1 General Filesystem Background Information 2 General Filesystem Commands 2 Working with Files and

More information

Synology High Availability (SHA)

Synology High Availability (SHA) Synology High Availability (SHA) Based on DSM 5.1 Synology Inc. Synology_SHAWP_ 20141106 Table of Contents Chapter 1: Introduction... 3 Chapter 2: High-Availability Clustering... 4 2.1 Synology High-Availability

More information

Exam LFCS/Course 55187B Linux System Administration

Exam LFCS/Course 55187B Linux System Administration Exam LFCS/Course 55187B Linux System Administration About this course This four-day instructor-led course is designed to provide students with the necessary skills and abilities to work as a professional

More information

EX200 EX200. Red Hat Certified System Administrator RHCSA

EX200 EX200. Red Hat Certified System Administrator RHCSA EX200 Number: EX200 Passing Score: 800 Time Limit: 120 min File Version: 14.0 http://www.gratisexam.com/ EX200 Red Hat Certified System Administrator RHCSA EX200 QUESTION 1 Configure your Host Name, IP

More information

ECE 550D Fundamentals of Computer Systems and Engineering. Fall 2017

ECE 550D Fundamentals of Computer Systems and Engineering. Fall 2017 ECE 550D Fundamentals of Computer Systems and Engineering Fall 2017 The Operating System (OS) Prof. John Board Duke University Slides are derived from work by Profs. Tyler Bletsch and Andrew Hilton (Duke)

More information

High Availability Guide

High Availability Guide Juniper Secure Analytics Release 2014.1 Juniper Networks, Inc. 1194 North Mathilda Avenue Sunnyvale, CA 94089 USA 408-745-2000 www.juniper.net Published: 2014-11-27 Copyright Notice Copyright 2014 Juniper

More information

Disks, Filesystems Todd Kelley CST8177 Todd Kelley 1

Disks, Filesystems Todd Kelley CST8177 Todd Kelley 1 Disks, Filesystems Todd Kelley kelleyt@algonquincollege.com CST8177 Todd Kelley 1 sudo and PATH (environment) disks partitioning formatting file systems: mkfs command checking file system integrity: fsck

More information

Chapter 11: Implementing File Systems

Chapter 11: Implementing File Systems Chapter 11: Implementing File Systems Operating System Concepts 99h Edition DM510-14 Chapter 11: Implementing File Systems File-System Structure File-System Implementation Directory Implementation Allocation

More information

INSTALLATION. Security of Information and Communication Systems

INSTALLATION. Security of Information and Communication Systems Security of Information and Communication Systems INSTALLATION Table of contents 1.Introduction...2 2.Installation...3 2.1.Hardware requirement...3 2.2.Installation of the system...3 2.3.Installation of

More information

Staggeringly Large Filesystems

Staggeringly Large Filesystems Staggeringly Large Filesystems Evan Danaher CS 6410 - October 27, 2009 Outline 1 Large Filesystems 2 GFS 3 Pond Outline 1 Large Filesystems 2 GFS 3 Pond Internet Scale Web 2.0 GFS Thousands of machines

More information

The Google File System

The Google File System The Google File System By Ghemawat, Gobioff and Leung Outline Overview Assumption Design of GFS System Interactions Master Operations Fault Tolerance Measurements Overview GFS: Scalable distributed file

More information

vsphere Replication for Disaster Recovery to Cloud vsphere Replication 6.5

vsphere Replication for Disaster Recovery to Cloud vsphere Replication 6.5 vsphere Replication for Disaster Recovery to Cloud vsphere Replication 6.5 You can find the most up-to-date technical documentation on the VMware website at: https://docs.vmware.com/ If you have comments

More information

7. Try shrinking / -- what happens? Why? Cannot shrink the volume since we can not umount the / logical volume.

7. Try shrinking / -- what happens? Why? Cannot shrink the volume since we can not umount the / logical volume. OPS235 Lab 4 [1101] Sample/suggested Answers/notes (Please ask your professor if you need any clarification or more explanation on concepts you don't understand.) Investigation 1: How are LVMs managed

More information

DRBD 9. Lars Ellenberg. Linux Storage Replication. LINBIT HA Solutions GmbH Vienna, Austria

DRBD 9. Lars Ellenberg. Linux Storage Replication. LINBIT HA Solutions GmbH Vienna, Austria DRBD 9 Linux Storage Replication Lars Ellenberg LINBIT HA Solutions GmbH Vienna, Austria What this talk is about What is replication Why block level replication Why replication What do we have to deal

More information

Journaling. CS 161: Lecture 14 4/4/17

Journaling. CS 161: Lecture 14 4/4/17 Journaling CS 161: Lecture 14 4/4/17 In The Last Episode... FFS uses fsck to ensure that the file system is usable after a crash fsck makes a series of passes through the file system to ensure that metadata

More information

Notices Carbonite Availability for Linux User's Guide Version 8.1.1, Thursday, April 5, 2018 If you need technical assistance, you can contact

Notices Carbonite Availability for Linux User's Guide Version 8.1.1, Thursday, April 5, 2018 If you need technical assistance, you can contact Notices Carbonite Availability for Linux User's Guide Version 8.1.1, Thursday, April 5, 2018 If you need technical assistance, you can contact CustomerCare. All basic configurations outlined in the online

More information

Avaya Aura System Manager 5.2 HA and CLI Restore

Avaya Aura System Manager 5.2 HA and CLI Restore Avaya Aura System Manager 5.2 HA and CLI Restore Version: 1.0 June 22nd, 2010 Table of Contents 1. Introduction...3 2. HA...3 2.1 Overview...3 2.2 How to setup Failure cluster...4 3. CLI Restore...14 3.1

More information

Trixbox High-Availability with fonebridge Tutorial

Trixbox High-Availability with fonebridge Tutorial Trixbox High-Availability with fonebridge Tutorial REDFONE Communications Table of Contents i Table of Contents 1 Introduction 1.1 Overview... 1 1.1.1 Core components & requirements... 1 1.1.2 Operational

More information

v5: How to restore a backup image

v5: How to restore a backup image This article describes how to restore a backup image Restoring a backup image is very simple using Macrium Reflect. If the image contains only data, it is a matter of restoring it back to its original

More information

OPERATING SYSTEM. Chapter 12: File System Implementation

OPERATING SYSTEM. Chapter 12: File System Implementation OPERATING SYSTEM Chapter 12: File System Implementation Chapter 12: File System Implementation File-System Structure File-System Implementation Directory Implementation Allocation Methods Free-Space Management

More information

At course completion. Overview. Audience profile. Course Outline. : 55187B: Linux System Administration. Course Outline :: 55187B::

At course completion. Overview. Audience profile. Course Outline. : 55187B: Linux System Administration. Course Outline :: 55187B:: Module Title Duration : 55187B: Linux System Administration : 4 days Overview This four-day instructor-led course is designed to provide students with the necessary skills and abilities to work as a professional

More information

example.com index.html # vim /etc/httpd/conf/httpd.conf NameVirtualHost :80 <VirtualHost :80> DocumentRoot /var/www/html/

example.com index.html # vim /etc/httpd/conf/httpd.conf NameVirtualHost :80 <VirtualHost :80> DocumentRoot /var/www/html/ example.com index.html # vim /etc/httpd/conf/httpd.conf NameVirtualHost 192.168.0.254:80 DocumentRoot /var/www/html/ ServerName station.domain40.example.com

More information

Course 55187B Linux System Administration

Course 55187B Linux System Administration Course Outline Module 1: System Startup and Shutdown This module explains how to manage startup and shutdown processes in Linux. Understanding the Boot Sequence The Grand Unified Boot Loader GRUB Configuration

More information

Using VMware vsphere Replication. vsphere Replication 6.5

Using VMware vsphere Replication. vsphere Replication 6.5 Using VMware vsphere Replication 6.5 You can find the most up-to-date technical documentation on the VMware website at: https://docs.vmware.com/ If you have comments about this documentation, submit your

More information

ExpressCluster for Linux Version 3 Web Manager Reference. Revision 6us

ExpressCluster for Linux Version 3 Web Manager Reference. Revision 6us ExpressCluster for Linux Version 3 Web Manager Reference Revision 6us EXPRESSCLUSTER is a registered trademark of NEC Corporation. Linux is a trademark or registered trademark of Linus Torvalds in the

More information

vsphere Replication for Disaster Recovery to Cloud

vsphere Replication for Disaster Recovery to Cloud vsphere Replication for Disaster Recovery to Cloud vsphere Replication 5.6 This document supports the version of each product listed and supports all subsequent versions until the document is replaced

More information

Chapter 11: Implementing File

Chapter 11: Implementing File Chapter 11: Implementing File Systems Chapter 11: Implementing File Systems File-System Structure File-System Implementation Directory Implementation Allocation Methods Free-Space Management Efficiency

More information

Lab #9: Configuring A Linux File Server

Lab #9: Configuring A Linux File Server Lab #9 Page 1 of 6 Theory: Lab #9: Configuring A Linux File Server The Network File System (NFS) feature provides a means of sharing Linux file systems and directories with other Linux and UNIX computers

More information

TECHNICAL WHITE PAPER. Using Stateless Linux with Veritas Cluster Server. Linux

TECHNICAL WHITE PAPER. Using Stateless Linux with Veritas Cluster Server. Linux TECHNICAL WHITE PAPER Using Stateless Linux with Veritas Cluster Server Linux Pranav Sarwate, Assoc SQA Engineer Server Availability and Management Group Symantec Technical Network White Paper Content

More information

RH202. Redhat Certified Technician on Redhat Enterprise Linux 4 (Labs) Exam.

RH202. Redhat Certified Technician on Redhat Enterprise Linux 4 (Labs) Exam. REDHAT RH202 Redhat Certified Technician on Redhat Enterprise Linux 4 (Labs) Exam TYPE: DEMO http://www.examskey.com/rh202.html Examskey REDHAT RH202 exam demo product is here for you to test the quality

More information

Changing user login password on templates

Changing user login password on templates Changing user login password on templates 1. Attach an ISO via the cloudstack interface and boot the VM to rescue mode. Click on attach iso icon highlighted below: A popup window appears from which select

More information

FILE SYSTEMS. CS124 Operating Systems Winter , Lecture 23

FILE SYSTEMS. CS124 Operating Systems Winter , Lecture 23 FILE SYSTEMS CS124 Operating Systems Winter 2015-2016, Lecture 23 2 Persistent Storage All programs require some form of persistent storage that lasts beyond the lifetime of an individual process Most

More information

"Charting the Course... MOC B: Linux System Administration. Course Summary

Charting the Course... MOC B: Linux System Administration. Course Summary Description Course Summary This four-day instructor-led course is designed to provide students with the necessary skills and abilities to work as a professional Linux system administrator. The course covers

More information

ForeScout CounterACT Resiliency Solutions

ForeScout CounterACT Resiliency Solutions ForeScout CounterACT Resiliency Solutions User Guide CounterACT Version 7.0.0 About CounterACT Resiliency Solutions Table of Contents About CounterACT Resiliency Solutions... 5 Comparison of Resiliency

More information

Chapter 11: Implementing File Systems. Operating System Concepts 9 9h Edition

Chapter 11: Implementing File Systems. Operating System Concepts 9 9h Edition Chapter 11: Implementing File Systems Operating System Concepts 9 9h Edition Silberschatz, Galvin and Gagne 2013 Chapter 11: Implementing File Systems File-System Structure File-System Implementation Directory

More information

GFS Overview. Design goals/priorities Design for big-data workloads Huge files, mostly appends, concurrency, huge bandwidth Design for failures

GFS Overview. Design goals/priorities Design for big-data workloads Huge files, mostly appends, concurrency, huge bandwidth Design for failures GFS Overview Design goals/priorities Design for big-data workloads Huge files, mostly appends, concurrency, huge bandwidth Design for failures Interface: non-posix New op: record appends (atomicity matters,

More information

Table of Contents 1 V3 & V4 Appliance Quick Start V4 Appliance Reference...3

Table of Contents 1 V3 & V4 Appliance Quick Start V4 Appliance Reference...3 Table of Contents 1 V & V4 Appliance Quick Start...1 1.1 Quick Start...1 1.2 Accessing Appliance Menus...1 1. Updating Appliance...1 1.4 Webmin...1 1.5 Setting Hostname IP Address...2 1.6 Starting and

More information

Distributed Systems. Lec 10: Distributed File Systems GFS. Slide acks: Sanjay Ghemawat, Howard Gobioff, and Shun-Tak Leung

Distributed Systems. Lec 10: Distributed File Systems GFS. Slide acks: Sanjay Ghemawat, Howard Gobioff, and Shun-Tak Leung Distributed Systems Lec 10: Distributed File Systems GFS Slide acks: Sanjay Ghemawat, Howard Gobioff, and Shun-Tak Leung 1 Distributed File Systems NFS AFS GFS Some themes in these classes: Workload-oriented

More information

VMware Mirage Getting Started Guide

VMware Mirage Getting Started Guide Mirage 5.8 This document supports the version of each product listed and supports all subsequent versions until the document is replaced by a new edition. To check for more recent editions of this document,

More information

Synology High Availability (SHA)

Synology High Availability (SHA) Synology High Availability (SHA) Based on DSM 6 Synology Inc. Synology_SHAWP_ 20170807 Table of Contents Chapter 1: Introduction... 3 Chapter 2: High-Availability Clustering... 4 2.1 Synology High-Availability

More information

Cost-Effective Virtual Petabytes Storage Pools using MARS. LCA 2018 Presentation by Thomas Schöbel-Theuer

Cost-Effective Virtual Petabytes Storage Pools using MARS. LCA 2018 Presentation by Thomas Schöbel-Theuer Cost-Effective Virtual Petabytes Storage Pools using MARS LCA 2018 Presentation by Thomas Schöbel-Theuer 1 Virtual Petabytes Storage Pools: Agenda Storage Architectures Scalability && Costs HOWTO Background

More information

Distributed File Systems

Distributed File Systems Distributed File Systems Today l Basic distributed file systems l Two classical examples Next time l Naming things xkdc Distributed File Systems " A DFS supports network-wide sharing of files and devices

More information

Resource Manager Collector RHCS Guide

Resource Manager Collector RHCS Guide The Zenoss Enablement Series: Resource Manager Collector RHCS Guide Document Version 424-D1 Zenoss, Inc. www.zenoss.com Copyright 2014 Zenoss, Inc., 275 West St., Suite 204, Annapolis, MD 21401, U.S.A.

More information

Polarion Enterprise Setup 17.2

Polarion Enterprise Setup 17.2 SIEMENS Polarion Enterprise Setup 17.2 POL005 17.2 Contents Terminology......................................................... 1-1 Overview...........................................................

More information

Cross-compilation with Buildroot

Cross-compilation with Buildroot Instituto Superior de Engenharia do Porto Mestrado em Engenharia Eletrotécnica e de Computadores Arquitetura de Computadores Cross-compilation with Buildroot Introduction Buildroot is a tool that can be

More information

Version Double-Take Availability for Linux User's Guide

Version Double-Take Availability for Linux User's Guide Version 8.0.0 Double-Take Availability for Linux User's Guide Notices Double-Take Availability for Linux User's Guide Version 8.0, Check your service agreement to determine which updates and new releases

More information

vsphere Availability Update 1 ESXi 5.0 vcenter Server 5.0 EN

vsphere Availability Update 1 ESXi 5.0 vcenter Server 5.0 EN Update 1 ESXi 5.0 vcenter Server 5.0 This document supports the version of each product listed and supports all subsequent versions until the document is replaced by a new edition. To check for more recent

More information

Da-Wei Chang CSIE.NCKU. Professor Hao-Ren Ke, National Chiao Tung University Professor Hsung-Pin Chang, National Chung Hsing University

Da-Wei Chang CSIE.NCKU. Professor Hao-Ren Ke, National Chiao Tung University Professor Hsung-Pin Chang, National Chung Hsing University Chapter 11 Implementing File System Da-Wei Chang CSIE.NCKU Source: Professor Hao-Ren Ke, National Chiao Tung University Professor Hsung-Pin Chang, National Chung Hsing University Outline File-System Structure

More information

Configuring High Availability (HA)

Configuring High Availability (HA) 4 CHAPTER This chapter covers the following topics: Adding High Availability Cisco NAC Appliance To Your Network, page 4-1 Installing a Clean Access Manager High Availability Pair, page 4-3 Installing

More information

Notices Carbonite Move for Linux User's Guide Version 8.1.1, Wednesday, January 31, 2018 If you need technical assistance, you can contact

Notices Carbonite Move for Linux User's Guide Version 8.1.1, Wednesday, January 31, 2018 If you need technical assistance, you can contact Notices Carbonite Move for Linux User's Guide Version 8.1.1, Wednesday, January 31, 2018 If you need technical assistance, you can contact CustomerCare. All basic configurations outlined in the online

More information