
Office and Express Print Release High Availability Setup Guide

Version 1.0
2017
EQ-HA-DCE-20170512

Document Revision History

Revision Date        Revision List
May 12, 2017         Updated Couchbase Installation Requirements
September 15, 2016   Initial Release

© 2017 Nuance Communications. All rights reserved. All rights to this document, domestic and international, are reserved by Nuance Communications. No part of this publication may be reproduced, stored in a retrieval system, or transmitted in any form or by any means (electronic, mechanical, photocopying, recording, or otherwise) without prior written permission of Nuance.

Trademarks
Equitrac, Equitrac Express, Equitrac Office, and Follow-You Printing are registered trademarks of Nuance Communications, Inc. All other brands and their products are trademarks or registered trademarks of their respective holders, and should be noted as such.

Symbols Used In This Guide
The following symbols are used in the margins of this guide:

Note      The accompanying text provides cross-reference links, tips, or general information that can add to your understanding of the topic.
Caution   The accompanying text provides key information about a step or action that might produce unexpected results if not followed precisely.
Warning   Read the accompanying text carefully. This text can help you avoid making errors that might negatively affect program behavior.

Print Release High Availability

Topics:
What is Print Release High Availability?
Print Release HA Configuration Workflow
Network Load Balancer
Couchbase Server for DCE
Installing Couchbase
Installing DCE in a High Availability Setup
Creating Couchbase Indexes on DCE
Monitoring DCE Health in a Cluster

Overview
This guide details print release high availability for Equitrac Office and Express. It covers the installation and configuration of a highly available print release environment using the Equitrac Device Control Engine (DCE) and Couchbase Server. This guide assumes that you have already created, configured and tested your Network Load Balancer (NLB) appliance. For information on setting up the NLB, refer to your NLB vendor documentation.

NOTE: DCE HA only applies to HP OXPd, Lexmark and Ricoh devices, and may not work with other Equitrac-supported embedded clients.

What is Print Release High Availability?
High Availability (HA) print environments are designed to provide full-time system availability. High availability systems typically have redundant hardware and software that keep the system available in the event of a failure and help distribute the workload. To ensure that HA systems avoid single points of failure, any hardware or software component that can fail has a redundant component of the same type. When failures occur, the processes performed by the failed component are moved (or failed over) to the redundant or backup component. This process resets system-wide resources, recovers partial or failed transactions, and restores the system to normal as quickly and as seamlessly as possible. A highly available system is almost transparent to the users.

HA for Print Release allows users to release jobs at the MFP even in the event of a failure between the MFP and DCE. The user can still release jobs at an MFP when DCE or CAS fails after the user has already authenticated at the device.

Print Release HA Configuration Workflow
To set up a highly available print release solution, do the following:
1 Configure a virtual service on your NLB appliance for DCE with a Virtual IP (VIP). See Configuring a Virtual Service on page 5.
2 Create a DNS record that resolves to the VIP for the NLB. See Creating a DNS Record on page 6.
3 Add a loopback adapter to each DCE in your HA DCE deployment, configured with the same IP address as the VIP for the NLB. See Adding a Loopback Adapter on page 6.
NOTE: If DRE is installed on the same server as DCE, also see Adding the IP Address Variable on page 7.
4 Install and configure Couchbase on multiple remote servers and set them up as a Couchbase cluster. See Installing Couchbase on page 12.
5 Install DCE on multiple servers in DCE HA mode. See Installing DCE in a High Availability Setup on page 16.

Network Load Balancer
With a load balancing solution, such as Windows Server Network Load Balancing, several VMs are configured identically and the load balancer distributes service requests across each of the VMs fairly evenly. This reduces the risk of any single VM becoming overloaded. Load balancing is an effective way to eliminate VM downtime because VMs can be individually rotated and serviced without taking the service offline. However, load balancing only works with identical VMs that have no shared or centralized data.

Load balancing is an effective way of increasing the availability of critical applications. When server failures are detected, the failed servers are seamlessly replaced as traffic is automatically redistributed to the servers that are still running. Not only does load balancing lead to high availability, it also facilitates incremental scalability and higher levels of fault tolerance within service applications.

The configuration for the Network Load Balancer (NLB) appliance varies depending on the vendor and type of appliance, and must be managed by the end customer's internal IT administrator. The generic requirements for each protocol configured on the NLB appliance are outlined below.

Load Balancing for Print Release
In a highly available print release environment, the workflow is uninterrupted because multiple DCEs distributed across different servers are connected to a Network Load Balancer (NLB) that distributes the workload. Network load balancing uses multiple independent servers that have the same setup, but do not work together as in a cluster setup. The NLB forwards requests to either one DCE server or another, but one server does not use the other server's resources, and one resource does not share its state with other resources. In an NLB setup, all resources run at the same time, and a management layer distributes the workload across them. This reduces the risk of any single server becoming overloaded.

The recommended load balancing method is Layer 4 in a direct server return (DSR)/N-Path/direct routing configuration. Layer 4 load balancing uses information defined at the networking transport layer as the basis for deciding how to distribute client requests across a group of servers. It is very important that source IP addresses are preserved; that is, the EQ DCE service must see the request originating from the individual MFP IP address and not from the NLB appliance.

Layer 4 load balancing forwards traffic to a specific server based upon the selected port or service. The NLB appliance sits between the MFP and the DCE server, listens for requests from the MFP on ports 2939 and 7627 for HP OXPd and on port 2939 for Ricoh and Lexmark, and then decides which server to send them on to.

The configuration for the NLB varies depending on the vendor and type of appliance, and must be managed by the end customer's internal IT administrator. Refer to your vendor's documentation for specific NLB appliance requirements, and to your Microsoft documentation for general NLB setup. See Print Release High Availability Server Deployment on page 10 for the Equitrac HA server setup.

Configuring a Virtual Service
A virtual service is required on your NLB appliance for DCE, with a Virtual IP (VIP) assigned to it. To configure a virtual service on the NLB appliance, do the following. Consult your NLB appliance vendor for support.
Configure the Virtual Service IP Address (VIP).
Set the Ports to 2939 (for Lexmark and Ricoh) or 2939 and 7627 (for HP OXPd).
Set the Protocol to TCP.
Set the Load Balancing Forwarding Method to Direct Routing (i.e. Layer 4/direct routing/direct server return/N-Path).
Ensure the Persistent checkbox is not selected.
Set the Check Port for server/service online to 2939.

Creating a DNS Record
On the DNS server, create a hostname and corresponding Host (A) record for the virtual DCE that matches the Virtual IP (VIP) for the NLB. This is needed so that the virtual DCE name resolves to the VIP used on the NLB.

When installing DCE in HA mode, the installer prompts the user to supply a virtual server name. This virtual server name should match the DNS record previously created. When configuring the MFP devices to connect back to the highly available DCE, the DCE hostname/IP address should be the VIP for the NLB, and not the individual DCE hostname or IP address.

Adding a Loopback Adapter
Typical Layer 4 NLB deployments require that all servers placed behind a load balanced service have primary and secondary network interface cards (NICs) configured. The primary NIC provides the server with a dedicated, full-time connection to a network. The secondary NIC does not physically connect to the network.

When clients request a service via the NLB appliance, they contact an IP address/hostname that is configured on the NLB appliance specifically to listen for requests for that service. This is the Virtual IP (VIP) of the NLB appliance. Since the NLB appliance forwards these requests directly to the servers offering the service without altering the destination IP address, the servers themselves must contain at least one NIC assigned the same IP address as the VIP. If they do not, the request from the client is rejected, as the servers assume that the request was not intended for them.

It is equally important that the secondary NIC added to each server does not actually connect to the production LAN. This ensures that when a client connects to the NLB appliance on its VIP, the servers whose secondary NIC also carries the VIP do not respond directly to the client; that would initiate a direct connection between the client and the server and bypass the NLB appliance. To avoid direct client-to-server connections, most NLB appliance vendors advise adding the secondary NIC as a loopback adapter, as this is a virtual interface that does not physically connect to a network. Refer to your vendor's documentation for more information.
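The exact commands depend on your DNS server and on how the loopback adapter was added. As a hedged sketch using placeholder names only (a Windows DNS server dns01.example.com, zone example.com, virtual DCE name dce-ha, VIP 10.1.2.50, and a Microsoft loopback adapter renamed to "Loopback"), the record and the loopback address could be created from an elevated command prompt:

On the DNS server: dnscmd dns01.example.com /RecordAdd example.com dce-ha A 10.1.2.50

On each DCE server (a 255.255.255.255 host mask is commonly used so the VIP on the loopback is not advertised to the LAN): netsh interface ipv4 add address "Loopback" 10.1.2.50 255.255.255.255

Follow your NLB vendor's DSR guidance for the exact loopback adapter settings it expects.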

Adding the IP Address Variable
If you are deploying an HA DCE infrastructure on servers that are also running DRE, additional configuration is required to correct certain undesired network behavior. For standalone DCE servers without DRE running on them, the following IP address configuration is not required.

If the secondary loopback NIC containing the IP address of the VIP (for HA DCE) is added to a server that is also running DRE, then when a user prints, DRE sends both IP addresses to CAS. When there are multiple registrations with the same IP address, users do not see jobs that have been printed to different DREs. For example, if a user prints a job to DRE1, and then another job to DRE2, they only see the job on DRE1. This happens because both DREs have registered the same IP address (the VIP assigned to the loopback adapter), and it is assumed that since DRE1 has been queried for jobs, DRE2 has also been queried.

To correct this behavior, and have DCE send only its production IP address as part of the service registration message, a system environment variable containing the appropriate IP address must be added to each DRE/DCE.

1 Go to Control Panel > All Control Panel Items > System on the DRE/DCE, and select Advanced system settings.

2 On the System Properties window, click Environment Variables.
3 On the Environment Variables window, click New in the System variables section.
4 Create the Variable name EQ_IPADDRESSES with a Variable value of the production IP address of your Equitrac DRE/DCE server, and click OK, and then OK again.
5 Repeat these steps for every DRE/DCE in your deployment.
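As a hedged alternative to the Control Panel steps above, the same system variable can be set from an elevated command prompt on each DRE/DCE server; the address shown is a placeholder for that server's production IP address:

setx EQ_IPADDRESSES "10.1.2.21" /M

The /M switch writes the variable at the system level rather than for the current user only. Because already-running processes do not pick up new environment variables, restart the Equitrac services afterwards.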

Configuring Weak Host Model Commands
A common configuration for network hosts is to be multihomed, with multiple network interfaces. A multihomed host provides enhanced connectivity because it can be simultaneously connected to multiple networks, such as an intranet or the Internet. However, since they can be connected to both an intranet and the Internet, services running on multihomed hosts can be vulnerable to attack.

In the weak host model, an IPv4 host can send packets on an interface that is not assigned the source IP address of the packet being sent. This is known as weak host send behavior. An IP host can also receive packets on an interface that is not assigned the destination IP address of the packet being received. This is known as weak host receive behavior.

In order for direct server return (DSR) to work, the weak host model must be enabled on the server's loopback interface, as well as on the interface on which requests are received. To configure a multihomed server so that the network interfaces can send or receive packets for addresses they are not assigned, run the following commands from an Administrator command console. Replace the interface names in quotes with the names of your server interfaces.

For the VLAN interface:
netsh interface ipv4 set interface "Ethernet 2" weakhostreceive=enabled

For the Loopback interface:
netsh interface ipv4 set interface "Ethernet 3" weakhostreceive=enabled
netsh interface ipv4 set interface "Ethernet 3" weakhostsend=enabled

For a detailed description of the weak host models, refer to the following Microsoft article:
https://technet.microsoft.com/en-us/library/ad9db381-1e1b-4077-be1c-bcefb11f1ea8
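To confirm that the settings took effect, the per-interface configuration can be displayed; on recent Windows Server versions the output of the following command (interface name again an example) includes Weak Host Sends and Weak Host Receives entries:

netsh interface ipv4 show interface "Ethernet 3"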

Print Release High Availability Server Deployment

[Deployment diagram]

Couchbase Server for DCE
Couchbase Server is a NoSQL document database used to distribute the DCE cache, and is required when installing DCE as part of a high availability (HA) setup. When DCE is selected during a Standard installation, there is an option to include DCE in an HA setup or not. If DCE is not selected as part of an HA setup, a Distributed Cache service for DCE is automatically installed on the local DCE as part of the Equitrac installation process. If DCE is part of an HA setup, the Couchbase server is required when installing DCE.

Couchbase must be installed and configured on your system before installing DCE. The Couchbase Server Community Edition installation file is included in the Equitrac installer zip file. Additional Couchbase versions can be downloaded from https://www.couchbase.com/downloads, where you can select the version that best suits your deployment and server platform.

When installing Couchbase, you set the required IP address/hostname for the server nodes, the data bucket names and the Admin credentials. When setting up Couchbase, the administrator needs to configure the parameters required for the Couchbase fields that the EQ DCE installer prompts for during installation. See Installing Couchbase on page 12 for details.

When installing DCE for HA, the Couchbase server connection information is required, such as the IP address/hostname of the virtual server, the connection string to the Couchbase database, and the administrator's credentials on the Couchbase server node. The Couchbase node and bucket are configured on each DCE service start-up in order to set up the node and bucket and add the indexes. If the node and bucket are already configured, they are not modified.

Any changes to the Couchbase server connection or administrator credentials can be made after installation by updating the Equitrac (Office/Express) installed program. Changes can also be made by modifying the connection string environment variable and the credential registry keys and restarting the DCE service. Changes to the Couchbase nodes or data buckets can be made through the Couchbase console website.

Ports 8091, 8093, 9101-9105 and 9111 are used by Couchbase. If these ports are in use by another process, DCE and Couchbase will not function correctly. See the Port Communications list in the Equitrac Office and Express Planning Guide for the list of Equitrac component-related ports.

NOTE: DCE HA only applies to HP OXPd, Lexmark and Ricoh, and may not work with other Equitrac-supported embedded clients.
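As a quick, hedged pre-installation check, you can confirm from a command prompt that none of these ports is already bound by another process; no output means the listed ports are free (extend the pattern to cover 9101-9105 as well):

netstat -ano | findstr ":8091 :8093 :9111"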

Installing Couchbase
The Couchbase server must be installed and configured before installing DCE. When setting up Couchbase for the first time, you are creating a cluster with a single node. Additional nodes can be added to the cluster after the initial configuration.

To install and configure Couchbase, do the following:
1 Run the Couchbase Server executable, following the Installation Wizard prompts.
2 Open the Couchbase console, and click the Setup button.
3 On the Configure Server screen, set the following:
a Determine the Database Path and Index Path.
b Leave the Server Hostname default value.
c Start a new cluster or join an existing cluster.
Select the Start a new cluster option when installing the first Couchbase cluster node, and determine the Services you want to enable on the node and the amount of memory per service. If starting a new cluster, continue to the next step.
Or
Select the Join a cluster now option when adding more Couchbase nodes to the cluster, and enter the Node 1 IP address, the Couchbase Administrator credentials and the Services on the node. If joining a cluster, go to Joining a Cluster on page 14.
d Click Next.

4 On the Sample Buckets screen, click Next to continue. The sample buckets are designed to demonstrate the Couchbase server features, and are not required for configuration.
5 On the Create Default Bucket screen, click Next to use the defaults. This is the bucket name used in the Equitrac installer.
6 On the Notifications screen, optionally enable software notifications and community updates, and click Next.

7 On the Configure Server screens, create an administrator account for the server, and click Next. The Couchbase Console Overview screen opens and is ready to use.

After installation, Couchbase is set up with only one server node and one data bucket.

Joining a Cluster
After creating a cluster with a single node, you need to install Couchbase on the other servers and add these nodes to the existing cluster.

To join a cluster, do the following:
1 Run the Couchbase Server executable, following the Installation Wizard prompts.
2 Open the Couchbase console, and click the Setup button.
3 On the Configure Server screen, select the Join a cluster now option.
4 Leave the Disk Storage and Server Hostname default values.
5 Enter the IP address of the cluster you are joining (i.e. Node 1).

6 Enter the Couchbase Administrator Username and Password.
7 Select the desired Services for the cluster environment.
8 Click Next to continue.
9 The Couchbase Console Overview screen opens with a message stating that 'This server has been associated with the cluster and will join on the next rebalance operation.'
10 Select the Server Nodes tab and click the Rebalance button to automatically rebalance the cluster.
11 Repeat this process for all nodes you want to add to the cluster.
12 Select the Settings tab, click the Cluster button, and then enter a Cluster Name and click Save. Leave the other settings at their default values.

13 On the Settings tab, click the Auto-Failover tab, enable the auto-failover feature, and provide the timeout for how long (in seconds) a node can be down before it fails over. At least three nodes are required to enable Auto-Failover.

Couchbase is now configured in a cluster. Refer to the Couchbase documentation for additional setup and configuration options.

Installing DCE in a High Availability Setup
You can install multiple DCEs to manage the communication load from release devices. Included in the DCE installation is the Couchbase server, a NoSQL document database required for DCE caching. Couchbase must be installed and configured on multiple remote servers in your deployment before installing DCE.

To install DCE in an HA setup, do the following:
1 Close all other applications on the server prior to running the Equitrac Office or Express installation.
2 Select and run the installer file (Equitrac.Office.exe or Equitrac.Express.exe) to launch the Equitrac Office or Express Installation wizard.
3 At the Welcome screen, click Next to begin the installation process.
4 Read and accept the terms of the End-User License Agreement, and click Next to continue.
5 On the Select Language screen, select the language you want to display in the user interface, select Standard install, and then click Next.
NOTE: The Simple install option cannot be used when installing DCE in an HA environment across multiple machines. The Simple install contains a default set of core components, features and Administrative applications that cannot be modified during installation.

6 On the Select Features screen, choose DCE and click Next.
7 If CAS is in a cluster environment or is not selected for installation, the Core Accounting Server Location screen appears. Enter the fully qualified domain name or fixed IP address of the CAS server. Click Test Connection to validate the connection across the network, and click Next to continue.
8 On the Service Log On Credentials screen, enter the Account and Password of the user who will run the Windows services. Click the Test Credentials button to verify the user, and click Next to continue.
NOTE: The Account field contains the account name in domain\username format. If you are using a SQL Express database that is not on a domain and you are using local accounts, you must enter computername\username.
CAUTION: When installing DCE on multiple machines, you must enter the same user credentials for each machine, or CAS will not respond to requests from DCE. These credentials are used to start and run all services.

9 On the Windows Firewall Exceptions screen, select either a manual or automatic setup method for the firewall exceptions, and then click Next to continue.
10 On the DCE High Availability Setup screen, select the DCE will be part of a High Availability Setup checkbox, enter the Virtual Server name, and click Next. This virtual server name should match the DNS record previously created, and should successfully resolve to the VIP assigned to the virtual service on the NLB.
11 On the DCE Remote Cache Connection screen, do the following:
a Enter the Connection string to the Couchbase database. The string is in the couchbase://hostname/bucketname format, and contains the values created during the Couchbase configuration (see the example after these steps).
b Enter the Administration account and password of the Couchbase server node.
c Click the Test Connection button to verify the connection to the Couchbase database, then click Next.
12 On the Ready to install Equitrac Express (or Office) screen, click Install to start the installation process. The installation wizard copies files, sets up services, and creates shortcuts to the Administrative Applications.
13 At the end of the process, click Finish to exit the installation wizard and begin initial configuration.
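For example, with a hypothetical Couchbase node cbnode1.example.com and a bucket named default, the connection string entered in step 11 would be:

couchbase://cbnode1.example.com/default

Substitute the hostname (or IP address) and the bucket name created during your Couchbase configuration.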

Creating Couchbase Indexes on DCE
An index is a data structure that provides a quick and efficient means to query and access data. Couchbase indexes are not automatically created for HA installations, and need to be created manually on the Couchbase cluster nodes. The administrator must create indexes on multiple nodes within the Couchbase cluster. Indexes must be created on at least one node for DCE to function, on at least two nodes for Couchbase Auto-Failover to function, and on multiple Couchbase nodes to allow for continued operation after a failover.

After DCE is installed, cbinit.exe must be run to create the Couchbase indexes. The cbinit.exe file is in the C:\Program Files\Equitrac\Express (or Office)\Device Control Engine folder.

To create indexes on one Couchbase node, run the following command. This command must be repeated for each node that will contain the indexes. All parameters are case-sensitive and must be typed as shown.

C:\Program Files\Equitrac\Express\Device Control Engine\cbinit.exe /h <hostname> /u <username> /p <password> /b <bucket> /n <node> /s <suffix>

hostname   The hostname of one of the Couchbase nodes in the cluster.
username   Name of the Couchbase administrator.
password   Password of the Couchbase administrator.
bucket     The name of the bucket to create the index on. This is the bucket name used in the Couchbase connection string.
node       The name of the node to create the index on. The list of server names in the cluster can be seen from the Couchbase console under Server Nodes.
suffix     Suffix to append to the index names. This is needed to create a unique index name across the entire Couchbase cluster, to ensure that all indexes have different names. The suffix can be any value as long as the resulting name does not match an existing index.

Example for creating indexes on Couchbase node 10.1.2.3:

C:\Program Files\Equitrac\Express\Device Control Engine\cbinit.exe /h 10.1.2.3 /u Administrator /p ****** /b Nuance /n 10.1.2.3 /s 1
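Because the suffix must be unique across the cluster, repeating the command for another node uses a different suffix. For example, for a hypothetical second node at 10.1.2.4:

C:\Program Files\Equitrac\Express\Device Control Engine\cbinit.exe /h 10.1.2.4 /u Administrator /p ****** /b Nuance /n 10.1.2.4 /s 2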

Monitoring DCE Health in a Cluster
After setting up your DCE in a high availability (HA) cluster environment, it is recommended to run a health monitor to verify that DCE is working. If a DCE cluster node fails over, the embedded clients can reconnect to an alternate DCE node in the cluster and continue the user session. In order for this to happen, the NLB must first detect that the DCE node is no longer in service. This can be done by using an NLB health monitor for the DCE service.

The DCE service supports a DCEHealthCheck URL for active TCP monitoring of the DCE service by the NLB, to determine whether DCE can respond to the request in a timely manner. The DCE HealthCheck monitor continually pings the DCE nodes on port 2939 and takes a node offline on failure. Once the NLB detects a node failure, it stops routing new client connection requests to the failing DCE node. Clients with existing connections to the failing node may have to wait for a connection timeout. An alternative is to configure the NLB to reset existing client connections as soon as the failure is detected, causing the client to request a new connection without waiting for a network timeout.

The following basic configuration settings are required:
Interval: 15 seconds (how often the monitor sends a request to the DCE node)
Timeout: 15 seconds (how long the monitor waits for a successful response before taking the failed node offline)
Send String: GET /DCEHealthCheck HTTP/1.1\r\nConnection: close\r\n
Receive String: HTTP/1.1 200 OK

These Interval and Timeout values are similar to DCE's internal timeouts. The following configuration example is for an F5 NLB. Consult your NLB vendor for configuration and setup support.

[F5 monitor configuration example]
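As a hedged manual check, assuming the curl utility is available on the machine you test from and using a placeholder node address, the same request the monitor sends can be issued directly against a DCE node:

curl -v http://10.1.2.21:2939/DCEHealthCheck

A healthy node answers with HTTP/1.1 200 OK; once the NLB monitor stops receiving this response within its timeout, the node is taken out of the load-balancing pool.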