
High Availability Administrator Guide August, 2010

OSIsoft, LLC
777 Davis St., Suite 250
San Leandro, CA 94577 USA
Tel: (01) 510-297-5800
Fax: (01) 510-357-8136
Web: http://www.osisoft.com

OSIsoft Australia, Perth, Australia
OSIsoft Europe GmbH, Frankfurt, Germany
OSIsoft Asia Pte Ltd., Singapore
OSIsoft Canada ULC, Montreal & Calgary, Canada
OSIsoft, LLC Representative Office, Shanghai, People's Republic of China
OSIsoft Japan KK, Tokyo, Japan
OSIsoft Mexico S. De R.L. De C.V., Mexico City, Mexico
OSIsoft do Brasil Sistemas Ltda., Sao Paulo, Brazil

High Availability Administrator Guide

Copyright: 2010 OSIsoft, LLC. All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, mechanical, photocopying, recording, or otherwise, without the prior written permission of OSIsoft, LLC.

OSIsoft, the OSIsoft logo and logotype, PI Analytics, PI ProcessBook, PI DataLink, ProcessPoint, Analysis Framework, IT Monitor, MCN Health Monitor, PI System, PI ActiveView, PI ACE, PI AlarmView, PI BatchView, PI Data Services, PI Manual Logger, PI ProfileView, PI WebParts, ProTRAQ, RLINK, RtAnalytics, RtBaseline, RtPortal, RtPM, RtReports and RtWebParts are all trademarks of OSIsoft, LLC. All other trademarks or trade names used herein are the property of their respective owners.

U.S. GOVERNMENT RIGHTS: Use, duplication or disclosure by the U.S. Government is subject to restrictions set forth in the OSIsoft, LLC license agreement and as provided in DFARS 227.7202, DFARS 252.227-7013, FAR 12.212, FAR 52.227, as applicable.

OSIsoft, LLC. Version: Published: 02 August 2010

Table of Contents

Chapter 1 Introduction to High Availability ... 1
    Why do I Need HA? ... 1
    HA Architecture ... 2
    Limitations to HA ... 3
    External Enhancements to HA ... 4
Chapter 2 PI Server ... 5
    About PI Collectives ... 5
    Implementation of HA Features for PI Server ... 9
    Security for PI Collectives ... 15
Chapter 3 Interfaces ... 19
    Interface Failover ... 19
    N-Way Buffering ... 24
Chapter 4 PI to PI Interface ... 43
    Configuration Considerations ... 43
    Data Transfer Between PI Collective Servers ... 46
    Data Aggregation Between PI Collectives ... 49
Chapter 5 Clients ... 51
    Client Connections to PI Server ... 51
    PI Connection Manager ... 53
Chapter 6 PI Collective Management ... 57
    PI Collective Health ... 57
    PI System Health ... 62
    Operations Considerations ... 63
    Server Management ... 65
    PI Collective Performance ... 82
Appendix A Reference ... 85
    PI Collective Configuration Tables ... 85
    How to Verify that Configuration Databases Match on Secondary Servers ... 92
    Message Logs ... 93
    Replicated Tables ... 93
    Nonreplicated Tables ... 95
    Replication Performance Points ... 95
Appendix B Technical Support and Resources ... 97

Chapter 1 Introduction to High Availability

The PI System includes features that support high availability (HA). By configuring your PI components for HA, you can enhance the data-loss protection provided by a basic configuration and provide uninterrupted access to your PI data. This section discusses:

- Why do I Need HA? (page 1)
- HA Architecture (page 2)
- Limitations to HA (page 3)
- External Enhancements to HA (page 4)

Why do I Need HA?

With HA, you can provide uninterrupted access to data without requiring special hardware or clustered environments. By configuring PI components to use HA features, you enhance the data-loss protection that buffering services offer in a basic configuration.

With basic configurations, planned and unplanned events can trigger data loss or make data inaccessible:

- Planned maintenance: Administrators must occasionally take PI components down for planned maintenance, such as operating system updates, software upgrades, hardware upgrades, and reconfiguration. Unavailable interfaces result in gaps in data because the interface cannot record data reported by the monitored device. With an unavailable PI Server, interfaces can buffer collected data, but clients cannot access any data. You can control when planned maintenance occurs, and can therefore minimize impacts, but you cannot entirely eliminate them.

- Unplanned failures: Events such as software failure, hardware failure, network failure, and human error occur infrequently, but pose a greater risk than planned maintenance. These events might bring a system down momentarily or for several hours until you detect and repair the failure. A failure at the interface node results in the loss of data recorded while the node is out of service. A failure of PI Server results in the loss of data not previously backed up.

With HA, you can minimize or eliminate data losses and ensure that your data is available continuously. With HA, you can experience increased:

- Reliability: With an HA configuration, data has multiple paths from the source to the end user. If one component fails, data can traverse an alternate path. Therefore, with HA, you can access current data when you need to.

- Scalability: You can share retrieval and computing loads between configured servers, allowing you to increase the scale of your system.

- Maintainability: You can troubleshoot a server offline, giving you time to analyze and diagnose problems without adversely affecting users.

- Disaster recovery: You can locate machines in different places, protecting data in catastrophes and locating data closer to those accessing it.

- Quality of service: You can distribute connections and workloads among servers, reducing demands on individual servers.

HA Architecture

You can configure HA features on appropriate PI components. To ensure the high availability of PI Server data, you must configure three types of components:

- PI Server: To implement HA, you install more than one PI Server and configure the PI System to store and write identical data on each server. Together, this set of servers, called a PI collective, acts as the logical PI Server for your system. The collective receives data from one or more interfaces and responds to requests for data from one or more clients. Because more than one server contains your system data, system reliability increases. When one server becomes unavailable, for planned or unplanned reasons, another server contains the same data and responds to requests for that data. Similarly, when demand for accessing data is high, you can spread that demand among the servers.

- Interfaces: To implement HA, you configure interfaces to support failover and n-way buffering. Failover ensures that time-series data reaches PI Server even if one interface fails; n-way buffering ensures that identical time-series data reaches each PI Server in a collective. To support failover, you install a redundant copy of an interface on a separate computer. When one interface is unavailable, the redundant interface automatically starts collecting, buffering, and sending data to PI Server. To support n-way buffering, you configure the buffering service on interface computers to queue data independently to each PI Server in a collective.

- Clients: To implement HA, you configure clients to connect to any server in a collective and seamlessly switch to another server if necessary.

You can configure HA features for other PI System components, too:

- PI Notifications: To implement HA, you install instances of PI Notifications Service on different machines and configure those instances to run the same set of notifications. One instance acts as the primary service and sends the notifications. The other instances act as backup services and stand by. If the primary service stops, either gracefully or ungracefully, one of the backup services becomes the primary service.

- AF Server: To implement HA, you can configure multiple pairs of AF Application Service and AF SQL Database into an AF collective. In addition, you can configure each pair as a SQL Cluster or mirrored SQL Server. See PI AF 2010 User Guide for more information about implementing and using HA with AF Server.

Limitations to HA

OSIsoft designed the PI System to support HA in environments with all servers and interfaces in a single domain, that is, a domain configured with a domain controller and reliable DNS (domain name system) resolution. You must use special configuration procedures if:

- You have components not installed in a homogeneous security environment, such as components installed in different, non-trusted domains, or components installed in a workgroup.

- You do not have access to Active Directory (AD) and must configure authentication through local Windows security.

See Security for PI Collectives (page 15).

You can easily create PI collectives and manage servers in PI collectives with Collective Manager. However, Collective Manager requires Windows file copy access between servers, which in turn requires properly opened TCP ports. Without this access, you must manually create collectives and initialize secondary servers. See Collective Manager Limitations (page 71).

The PI System uses the buffer mechanism to replicate data from interfaces to the servers in a PI collective. Therefore, data not sent to PI Server through the buffering system is not replicated. This includes manually entered data and data from client applications that run and write to a single server, such as:

- PI Batch
- Performance Equation
- Totalizer

See Application Limitations (page 64).

External Enhancements to HA

When implementing HA, you might consider other strategies that can enhance reliability. External strategies are particularly useful for the primary PI Server: recovering from the loss of a primary server is more time consuming and challenging than recovering from the loss of a secondary server. External strategies you might consider include:

- Uninterruptible power supply: An uninterruptible power supply (UPS) provides emergency power when the main power source fails. By using batteries and associated electronic circuitry, a UPS provides instantaneous or near-instantaneous protection from power interruptions and increases data availability at a local machine.

- Redundant hardware: Redundant hardware, such as RAID (redundant array of independent disks), can increase data availability at a local machine. You can run any server in a PI collective, or any interface, on redundant hardware.

- Operating system clustering: Microsoft Cluster Services provide an alternate solution to increase data availability at a local machine. You can cluster any server in a PI collective or any interface. However, you cannot use Collective Manager to manage servers in a clustered environment. Instead, you must use command-line tools to initialize the primary server, to initialize secondary servers, and to reinitialize secondary servers. See Collective Manager Limitations (page 71). For more information about installation in a Microsoft Cluster Services environment, see the PI Server Installation and Upgrade Guide.

Chapter 2 PI Server

PI Server is the component in the PI System that stores time-series data collected at interfaces and that responds to client requests for this data. To implement HA at your PI Server, install more than one PI Server, configure the servers as a PI collective, and configure the PI System to write time-series data to all servers in the collective. Each PI Server in the collective stores identical data.

Because more than one server contains your system data, the reliability of your PI System increases. When one server becomes unavailable, for planned or unplanned reasons, another server contains the same data and can display that data. Similarly, when demand for accessing data is high, you can spread that demand among the servers.

This section discusses:

- About PI Collectives (page 5)
- Implementation of HA Features for PI Server (page 9)
- Security for PI Collectives (page 15)

About PI Collectives

A PI collective is a set of more than one PI Server that acts logically as a single PI Server to provide HA in your PI System. You designate one PI Server in the collective as the primary PI Server and each additional server in the collective as a secondary PI Server. The primary server maintains the configuration data and synchronization with AF Server. The PI Server replication service copies configuration changes from the primary server to each secondary server in the collective, ensuring that all servers in the collective have consistent configuration databases. You make all configuration changes, such as point definitions, to the primary PI Server.

You must configure the PI System to write time-series data to each PI Server in the collective. In most cases, you configure your interfaces to use n-way buffering, which queues data independently to each PI Server (see N-Way Buffering, page 24). Because all servers in the collective contain identical data, clients can connect to any server in the collective.

You can use Collective Manager to create new PI collectives, configure existing collectives and their servers, and view the status of your collectives. You can also inspect a collective's configuration from the command line, as sketched below.

Subsequent topics in this section discuss how you commonly deploy collectives with other PI components and provide examples of more advanced deployments:

- Common Deployments (page 6)
- Geographically Separated Deployment (page 6)
- Deployment with a Firewall (page 7)
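The collective configuration that Collective Manager displays is stored in the PIcollective and PIserver configuration tables on each member (see PI Collective Configuration Tables, page 85). The following is a minimal piconfig sketch for listing collective members and their roles; the attribute names shown are assumptions based on common PI Server 3.4.375+ releases, so verify them against the appendix for your version. Run piconfig from the ..\PI\adm directory on any member:

@table piserver
@mode list
@ostr name,serverid,role,commperiod,syncperiod
@select name=*
@ends

In typical releases, a role value of 1 indicates the primary server and 2 indicates a secondary server.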

Common Deployments

In most deployments, interfaces connect to each PI Server in the PI collective. You configure interfaces to use n-way buffering (page 24), which writes identical time-series data independently to each PI Server in the collective. Each server independently processes that time-series data. Because each server has an identical configuration, each server will have identical data. You configure interfaces to receive configuration information and outputs from one PI Server in the collective, usually the primary server.

PI SDK (version 1.3.4 or later), which connects client applications to PI Server, recognizes a PI collective as one logical data source. PI SDK connects clients to the appropriate server and transitions to an available server when one fails. You can configure the order (page 54) in which each workstation attempts to connect to servers in a collective. By alternating the preferred connection order at different workstations, you can balance loads among servers.

A normal PI Server shutdown notifies PI SDK, which immediately connects clients to an alternate server in the collective. Similarly, when PI SDK recognizes that the PI Server providing data has become unresponsive, PI SDK connects clients to an alternate server.

Geographically Separated Deployment

In addition to the common deployment approach, PI collectives support custom deployment strategies, such as geographically separated deployments. For example, if deploying your PI System at two operations centers separated by large distances, you might deploy the primary PI Server and one secondary PI Server at the local operations center, and deploy two secondary servers at a remote backup operations center. You can configure workstations to connect to their local servers before connecting to remote servers. You might even configure some workstations to connect only to local servers (page 54). Such a configuration separates loads and separates functions between the operations centers.

You might have interfaces at both operations centers. You might configure the interfaces to use n-way buffering to send time-series data to all the servers in the PI collective. However, to reduce network traffic, you might have the primary PI Server send configuration information and outputs only to the interfaces at its local center and have a secondary PI Server send configuration information to interfaces at the remote center. As with the common deployment, you make all configuration changes at the primary PI Server, and the PI Server replication service sends the changes to all the secondary servers.

Deployment with a Firewall

You might deploy your PI collective in an environment that has a firewall separating a protected control network from a widely available business network. In such an environment, you might install servers on both sides of the firewall: the primary PI Server along with a secondary server on the control network, and two secondary servers on the business network.

To reduce traffic across the firewall, you might configure the client workstations on the control network to connect only to the servers on the control network, and configure the client workstations on the business network to connect only to the servers on the business network. Similarly, you might configure interfaces to use n-way buffering and send time-series data to the servers on the control network only. In this case, you must configure a PI to PI interface (page 43) to send the time-series data from the primary server to the servers on the business network. This deployment reduces the diversity of traffic flowing through the firewall from the control network to the business network.

As in the common deployment, the PI Server replication service still copies configuration changes from the primary server to each secondary server in the PI collective, and the primary PI Server still sends configuration changes to the interfaces.

Implementation of HA Features for PI Server

To implement HA features for PI Server, you must install PI Server on more than one machine, create a PI collective for the servers, and configure interfaces to send time-series data to all servers in the collective. There are two implementation procedures:

- How to Implement HA with an Existing PI Server (page 9): Use this procedure if you have an existing PI Server installed that collects data and you want to implement HA features by installing one or more secondary servers and creating a collective.

- How to Implement HA with a New PI Server (page 12): Use this procedure if you are installing a new PI Server and want to implement HA features by also installing a secondary server and creating a collective.

How to Implement HA with an Existing PI Server

This section contains a summary of the procedure for implementing HA features if you have an existing PI Server that collects data. This procedure installs new servers and a PI collective. In this procedure, the existing PI Server becomes the primary PI Server. If you need more detailed instructions, see PI Server Installation and Upgrade Guide.

To implement HA with an existing PI Server:

1. If necessary, upgrade the existing PI Server to the current version of PI Server software. You must have version 3.4.375 or later. To find the version currently installed:
   a. In PI System Management Tools, connect to your primary PI Server.
   b. Under System Management Tools, select Operation > PI Version. The data pane shows the various subsystems of the PI Server and the version installed.

2. Check that your license supports your PI collective.
   a. In PI System Management Tools, connect to your primary PI Server.
   b. Under System Management Tools, select Operation > Licensing.
   c. In the data pane, select Resources > pilicmgr.maxsecondarynodecount. The value specified in Total is the number of secondary servers that your license permits. If you intend to install more secondary servers than your license permits, contact OSIsoft Tech Support.

3. Synchronize the clocks on each machine that will host PI Server. If necessary, set the appropriate time and time zone. Clocks on different server machines may differ by only a few minutes. OSIsoft recommends synchronizing the PI Server clock with a network time protocol (NTP) server.

4. Install PI Server software on each machine that will host a secondary server.

5. Start each secondary server.

6. Use PI Connection Manager to connect the primary server with each secondary server and to connect each secondary server with the primary server. Depending on your security configuration, you might need to create a mapping for an account valid on all the servers.

7. Create a PI collective with Collective Manager on the primary server.
   Note: In some cases you cannot use Collective Manager; you must create collectives manually. See Collective Manager Limitations (page 71).
   a. Close any clients connected to the primary server, such as PI SMT.
   b. Open Collective Manager: click Start > All Programs > PI System > Collective Manager.
   c. In Collective Manager, select File > Create New Collective to open the Create New Collective wizard.
   d. Select the check boxes to verify that you have a backup of your existing PI Server and that all interfaces can communicate with all servers in the collective. Click Next.
      Note: You can configure interfaces to communicate with servers in the collective after creating the collective.
   e. At the prompt about the primary server, select An existing PI server that contains historical data. Click Next.
   f. In Collective Primary, select your existing PI Server. The wizard automatically gives the collective the same name as your existing PI Server, which enables you to continue to use existing clients and interfaces without reconfiguration. Specify descriptions for the primary server and collective, if desired. Click Next.
   g. Select each secondary server you want in your collective and click Add. When the list shows all secondary servers, click Next.
   h. Verify which archives you want copied to the secondary servers and click Next.
   i. Verify the location of the backup file and click Next.
   j. At the prompt to verify selections, you can set an alternative directory for archive files on the secondary server and the replication frequency, if desired. Click Advanced Options, and under Member Servers, select the secondary server that you want to set. In the property list, set the desired values, then click OK to save the changes:
      - CommPeriod: Frequency (in seconds) at which the secondary server checks that it can communicate with the primary server. Default value is 5. You can change this value later. See How to Set Synchronization and Communication Frequencies (page 61).
      - SyncPeriod: Frequency (in seconds) at which the secondary server checks for configuration updates from the primary server. 0 indicates that no automatic synchronization occurs. Default value is 10. You can change this value later. See How to Set Synchronization and Communication Frequencies (page 61).
      - PIArchivePath: Directory that stores archives on the secondary server. Default value is the directory that stores archives on the primary server. If you set a different directory, the replication process automatically registers archives to this directory. You cannot change this value after adding the server to the collective.
   k. Click Next. Collective Manager creates the collective. During this process, Collective Manager might create a new server ID for the primary server (to match the new collective name).
   l. If the Server ID Mismatch dialog box informs you about a new ID, click OK to accept the new ID.
   m. Click Finish.

8. Check that the collective members are communicating properly. See How to Check Communication of PI Collective Members (page 58).

9. Check that configuration changes replicate to secondary servers. See How to Check Configuration Replication (page 59).

10. Verify that PI Connection Manager shows the collective.
    a. In either PI SMT or Collective Manager, select File > Connections to open PI Connection Manager.
    b. Double-click the collective to open the Collective Member Information dialog box, which shows the servers in the collective.

11. Configure your interfaces and their buffers to send time-series data to each secondary server. If you use PI Buffer Subsystem, verify the buffering configuration and, in some cases, point a redundant interface to a secondary collective member. See Tasks to Configure PI Buffer Subsystem for N-Way Buffering After Upgrading to a PI Collective (page 27). If you use API Buffer Server, see How to Configure N-Way Buffering with API Buffer Server at Existing Interfaces After Upgrading to a PI Collective (page 31); a configuration sketch follows this procedure.

12. Verify that the PI System is operating properly. See PI System Health (page 62).
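For step 11, the API Buffer Server side of n-way buffering comes down to listing every collective member in the piclient.ini file on each interface node. Below is a minimal sketch, assuming two hypothetical collective members named PISRV01 and PISRV02; see page 31 for the full procedure, and keep the server names in the same format the interface uses to connect:

[APIBUFFER]
; Enable API buffering on this interface node
BUFFERING=1

[BUFFEREDSERVERLIST]
; One entry per collective member
BUFSERV1=PISRV01
BUFSERV2=PISRV02

After editing the file, restart the buffering service and the interface so the changes take effect.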

How to Implement HA with a New PI Server

This section contains a summary of the procedure for implementing HA features in a new PI System (that is, one where you do not have an existing PI Server configured to connect to interfaces and clients). By following this procedure, you install new servers and a PI collective. If you need more detailed instructions, see PI Server Installation and Upgrade Guide.

To implement HA in PI Server in a new PI System:

1. Install AF Server and Microsoft SQL Server.

2. Synchronize the clocks on each machine that will host PI Server. If necessary, set the appropriate time and time zone. Clocks on different server machines may differ by only a few minutes. OSIsoft recommends synchronizing the PI Server clock with a network time protocol (NTP) server.

3. Ensure you have proper permissions. To install, you must be the administrator or a member of the local administrators group. After installation, you must configure PI Server access permissions. Determine who requires initial access.

4. Obtain the license file for the primary PI Server.
   a. Download the generator utility.
   b. Generate a machine signature file (MSF).
   c. Upload the MSF to OSIsoft.
   d. Download the license file.
   The license file must acknowledge at least one secondary server. If you create a PI collective with a stand-alone license, secondary servers have a 30-day grace period.

5. Install the prerequisites on each server machine, if necessary.

6. Install PI Server software on each server machine. During installation, pay attention to:
   - Location of archives and event queues: Ideally, install the event queues and the archives on separate drives. The snapshot event queue buffers data collected when archives fail; installing them on the same drive limits the benefits of this system.
   - Size of archive and event queues: OSIsoft recommends an archive size of 10 MB to 20 MB per 1,000 tags, and an event queue that is half the archive file size.
   - Security: OSIsoft recommends using Windows integrated security (WIS) and mapping domain users and groups to PI identities.
   Note: Collective installation overwrites the configuration of secondary servers.

7. Start each PI Server: run ..\PI\adm\pisrvstart.bat.

8. Verify that each PI Server is running properly by checking that test tags generate data.

9. Complete PI Server configuration:
   - Determine your archive strategy on each server. For most customers, this means setting Auto Archiving to ON.
   - Configure PI Server daily backups on the primary server.
   - Set the security model at the primary server by mapping Active Directory users and groups to PI identities.

10. Use PI Connection Manager to connect the primary server with each secondary server and to connect each secondary server with the primary server. Depending on your security configuration, you might need to create a mapping for an account valid on all the servers.

11. Create a PI collective with Collective Manager on the primary server.
    Note: In some cases you cannot use Collective Manager; you must create collectives manually. See Collective Manager Limitations (page 71).
    a. Close any clients connected to the primary server, such as PI SMT.
    b. Open Collective Manager: click Start > All Programs > PI System > Collective Manager.
    c. In Collective Manager, select File > Create New Collective to open the Create New Collective wizard.
    d. Select the check boxes to verify that you have a backup of your existing PI Server and that all interfaces can communicate with all servers in the collective. Click Next.
       Note: You can configure interfaces to communicate with servers in the collective after creating the collective.
    e. At the prompt about the primary server, select A newly installed PI server. Click Next.
    f. Follow the prompts in the wizard to:
       - Define the primary server
       - Name the collective
       - Select secondary servers
       - Verify which archives you want copied to secondary servers
       - Verify the location of the backup file
    g. At the prompt to verify selections, you can set an alternative directory for archive files on the secondary server and the replication frequency, if desired. Click Advanced Options, and under Member Servers, select the secondary server that you want to set. In the property list, set the desired values, then click OK to save the changes:
       - CommPeriod: Frequency (in seconds) at which the secondary server checks that it can communicate with the primary server. Default value is 5. You can change this value later. See How to Set Synchronization and Communication Frequencies (page 61).
       - SyncPeriod: Frequency (in seconds) at which the secondary server checks for configuration updates from the primary server. 0 indicates that no automatic synchronization occurs. Default value is 10. You can change this value later. See How to Set Synchronization and Communication Frequencies (page 61).
       - PIArchivePath: Directory that stores archives on the secondary server. Default value is the directory that stores archives on the primary server. If you set a different directory, the replication process automatically registers archives to this directory. You cannot change this value after adding the server to the collective.
    h. Click Next. Collective Manager creates the collective. During this process, Collective Manager might create a new server ID for the primary server (to match the new collective name).
    i. If the Server ID Mismatch dialog box informs you about a new ID, click OK to accept the new ID.
    j. Click Finish.

12. Check that the collective members are communicating properly. See How to Check Communication of PI Collective Members (page 58).

13. Check that configuration changes replicate to secondary servers. See How to Check Configuration Replication (page 59).

14. Verify that PI Connection Manager shows the collective.
    a. In either PI SMT or Collective Manager, select File > Connections to open PI Connection Manager.
    b. Double-click the collective to open the Collective Member Information dialog box, which shows the servers in the collective.

15. Configure your interfaces and their buffering services to send time-series data to your servers. See N-Way Buffering (page 24).

16. Verify that the PI System is operating properly. See PI System Health (page 62). A command-line spot check is sketched below.
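For step 16, a quick command-line spot check (a supplement to the checks in PI System Health, page 62, not a replacement) is to confirm on each collective member that snapshot events are arriving and the event queue is draining. Run the piartool utility from the ..\PI\adm directory:

rem Snapshot statistics: run twice and confirm the event counters increase
piartool -ss

rem Event queue statistics: confirm events are not accumulating
piartool -qs

If the counters on a secondary server do not advance while the primary's do, revisit the n-way buffering configuration from step 15.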

Security for PI Collectives Security for PI Collectives This section discusses security configuration when you enable PI Server high availability (HA) features by configuring a PI collective. Topics include: Overview of Security in PI Collectives (page 15) Custom Security Configurations (page 15) How to Enable the Lookup-Failure Tuning Parameter (page 16) Creation of Mappings with a SID (page 17) Overview of Security in PI Collectives OSIsoft designed PI collectives to fully support Windows authentication. In a standard configuration, a collective replicates the PI security mappings defined on the primary server across all collective members. In non-homogeneous security environments or environments without Microsoft Active Directory (AD), PI mappings on a specific collective member will reference Windows users or groups that are not valid on other collective members. In this case, the replication process will fail. Therefore, you cannot simply replicate mappings: you must use a custom configuration. In a standard configuration one where all collective members are in the same security environment and you are using AD you configure security on the collective s primary server just as you would configure a single PI Server. The collective s PI Server replication service copies the configuration to all secondary servers in the collective. This replication process requires that all collective members be on a single domain or part of fully-trusted domains. You must use a custom security configuration if: Collective members are not contained in a homogeneous security environment, such as when members are on different non-trusted domains, or when one or more members are on no domain. You do not have access to AD and must configure authentication through local Windows security on the primary and secondary servers. Custom configuration in collective servers can affect PI applications and users when accessing PI Server information. If the same mappings are not available on all collective members, applications might fail to connect or might receive different permissions on failovers. OSIsoft recommends avoiding custom configurations whenever possible. Custom configurations are more complex. To set up and maintain a custom configuration, you must consider who needs access to each collective member, and who will need to fail over. Consult OSIsoft Technical Support if you need help. Custom Security Configurations To use a custom security configuration in a PI collective, you must configure the PI Server to accept unresolvable security mappings during replication. The PI Server includes a lookupfailure tuning parameter that tells it to ignore unresolvable mappings during replication. High Availability Administrator Guide 15

PI Server (Collectives do not replicate tuning parameters.) With this tuning parameter enabled, you can create mappings on one collective member that other collective members cannot resolve, but replication between collective members will succeed. For information on enabling the tuning parameter, see How to Enable the Lookup-Failure Tuning Parameter (page 16). For example, suppose the primary server is in the domain where you want to create mappings and you have a secondary server that is not part of that domain. If you create mappings on the primary server with domain accounts, the replication of these mappings will fail on the secondary server (because that domain does not exist for the secondary server). Replication will stop and the secondary server will fall out of synchronization. If you enable the tuning parameter on the secondary server, the server will accept the mappings and replication will succeed. Similarly, suppose the primary server defines a mapping against a local Windows group. Because secondary servers do not know about that local group, the mappings will cause replication to fail. If you enable the tuning parameter on the secondary servers, they will accept the mappings and replication will succeed. In this case, you might also need to define mappings against local Windows groups on the secondary servers. Therefore, you must also enable the tuning parameter on the primary server. After you enable the lookup-failure tuning parameter, you must use a group s Windows Security ID (SID) instead of its name when configuring a mapping for a local Windows group. Because you cannot use PI SMT to create mappings based on SIDs, you must use piconfig. See Creation of Mappings with a SID (page 17). How to Enable the Lookup-Failure Tuning Parameter You must enable the lookup-failure tuning parameter on any secondary PI Server in a PI collective that cannot resolve security mappings from the primary server. You must also enable the lookup-failure tuning parameter on the primary server in the PI collective if you define mappings valid only on secondary servers. To enable the lookup-failure tuning parameter on a PI Server: 1. Open PI SMT. Click Start > All Programs > PI System > PI System Management Tools. 2. Under Collectives and Servers, select the PI Server where you want to enable the tuning parameter. 3. Under System Management Tools, select Operation > Tuning Parameters. 4. Click the New Parameter button. 5. In Parameter name, type: Base_AllowSIDLookupFailureForMapping 6. In Value, type: 1 7. Click OK. 8. Restart the server s PI Base Subsystem. Note: Collectives do no replicate this setting (like any tuning parameter). 16
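If you prefer the command line to PI SMT, you can set the tuning parameter with piconfig. A minimal sketch, assuming the parameter does not yet exist (tuning parameters are stored in the PItimeout table; use @mode edit instead of @mode create if the parameter is already defined):

@table pitimeout
@mode create
@istr name,value
Base_AllowSIDLookupFailureForMapping,1
@ends

As with the PI SMT procedure, restart the server's PI Base Subsystem afterward, and repeat on each collective member that needs the parameter, since collectives do not replicate tuning parameters.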

Creation of Mappings with a Windows Security ID (SID)

After you enable the lookup-failure tuning parameter, you must use a group's SID instead of its name when you configure a mapping for a local Windows group. Use PI SMT to determine the SID, and use piconfig to create the mapping based on that SID.

OSIsoft recommends enabling the lookup-failure tuning parameter only when creating mappings. After you create mappings and the primary server replicates the mappings to the PI collective, you can disable the parameter to protect against the accidental creation of invalid mappings.

To determine a SID:

1. Open PI SMT: click Start > All Programs > PI System > PI System Management Tools.
2. Under Collectives and Servers, select the secondary server that needs the security mapping.
3. Under System Management Tools, select Security > Mappings and Trusts.
4. Find the SID on the Mappings tab.
   If a mapping based on the desired Windows group already exists:
   - Right-click the mapping and choose Properties.
   - Note the Windows SID on the Mapping Properties dialog box.
   If a mapping based on the desired Windows group does not exist:
   - Click New to open the Add New Mapping dialog box.
   - In Windows Account, specify the Windows group.
   - Note the SID inserted in Windows SID.
   - Click Cancel.

To create a mapping based on a SID:

1. Open the piconfig utility.
   a. Open a command window.
   b. Navigate to the ..\PI\adm directory.
   c. Type: piconfig
   The piconfig command prompt appears.

2. Update the PI Identity Mapping table (PIIDENTMAP). You must set at least three attributes:
   - IdentMap: Name of the PI identity mapping
   - PIIdent: Name of the PI identity that you want to map to a local Windows group
   - Principal: SID of the Windows group you want to map to the specified PI identity

You can also specify other table attributes, if desired.

For example, to create a new mapping called My_Mapping, which maps the Windows group specified by SID S-1-5-21-1234567890-1234567890-1234567890-12345 to the PI group piadmins, you would enter the following commands at the piconfig prompts:

@table PIIdentmap
@mode create
@istr IdentMap,Principal,PIIdent
My_Mapping,S-1-5-21-1234567890-1234567890-1234567890-12345,piadmins

The following list describes all attributes in the PIIDENTMAP table. You can specify any of these attributes when you create a mapping.

IdentMap: The name of the PI mapping. This must be unique, but is not case-sensitive. This field is required to create a new mapping.

Desc: Optional text describing the mapping. There are no restrictions on the contents of this field.

Flags: Bit flags that specify optional behavior for the mapping. There are two options: 0x01 = mapping is inactive and will not be used during authentication; 0x00 = (default value) mapping is active and will be used during authentication after initial setup.

IdentMapID: A unique integer corresponding to the identity mapping. The system automatically generates the value upon creation. The value will not change for the life of the identity mapping.

PIIdent: Name of the PI identity to which the security principal specified by Principal will be mapped. The contents of this field must match Ident in an existing entry in the PIIDENT table. The target identity must not be flagged as Disabled or MappingDisabled. Multiple IdentMap entries can map to the same PIIdent entry. This field is required to create a new identity mapping.

Principal: The name of the security principal (domain user or group) that is to be mapped to the identity named in PIIdent. This field is required to create a new identity mapping. For principals defined in an Active Directory domain, the format of input to this field can be any of the following:
- Fully qualified account name (my_domain\principal_name)
- Fully qualified DNS name (my_domain.com\principal_name)
- User principal name (UPN) (principal_name@my_domain.com)
- SID (S-1-5-21-nnnnnnnnnn-...-nnnn)
For security principals defined as local users or groups, only the fully qualified account name (computer_name\principal_name) or SID formats may be used. Output from piconfig for this field is always in SID format, regardless of which input format was used.

PrincipalDisp: User-friendly rendering of the principal specified by Principal. This is an output-only field. The principal name is displayed in the fully qualified account name format.

Type: A reserved field indicating the type of the mapping. In this release, this attribute is always set to 1.
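To confirm the result, you can list the mappings from the same piconfig session. A short sketch using the table and attributes described above:

@table PIIdentmap
@mode list
@ostr IdentMap,Principal,PIIdent
@select IdentMap=*
@ends

Because piconfig always outputs Principal in SID format, this listing is also a convenient way to capture a group's SID for reuse on another collective member.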

Chapter 3 Interfaces

Interfaces are the components of the PI System that collect time-series data from data sources and send the data to PI Server for storage. To implement HA, you configure interfaces to support failover and n-way buffering. Failover ensures that time-series data reaches PI Server even if one interface fails; n-way buffering ensures that identical time-series data reaches each PI Server in a collective.

To support failover, you install a redundant copy of an interface on a separate computer. When one interface is unavailable, the redundant interface automatically starts collecting, buffering, and sending data to PI Server. To support n-way buffering, you configure the buffering service on interface computers to queue data independently to each PI Server in a collective.

In some deployments, interfaces send outputs (that is, data from PI Server) to the data source. With proper configuration, failover considers the availability of PI Server for outputs in addition to the availability of the interface.

This section discusses:

- Interface Failover (page 19)
- N-Way Buffering (page 24)

Consider all servers in a PI collective when setting values for the pointsource and location1 parameters for interfaces.

Interface Failover

With interface failover, you configure redundant interfaces; that is, you configure interface software on two different machines to record data from a single data source. If one machine fails, the redundant machine takes over. With redundant interfaces, you minimize data loss by ensuring that there is no single point of failure.

There are three types of interface failover:

- Hot failover: Both interfaces collect data from a source, but only one interface reports that data to PI Server. If one interface fails, the redundant interface immediately begins reporting data to PI Server without any data loss. Because the data source is connected and sending data to two interfaces, this type of failover requires the most resources.

- Warm failover: The redundant interface maintains a connection with the data source but does not collect data. If the reporting interface fails, the redundant interface begins collecting and reporting data to PI Server. Minimal data loss might occur while the data-collection process starts.

- Cold failover: The redundant interface connects with the data source only after the reporting interface fails. Some data loss might occur while the connection process initiates (including tag loading) and while the data-collection process starts. Because connections occur only when needed, this type of failover requires the least resources.

If you have a PI collective and PI Server sends outputs to the interface in your deployment, you can use interface failover to ensure the availability of the PI Server that provides outputs. Each interface receives outputs from a specific PI Server or collective member. If that PI Server becomes unavailable, the interface will no longer receive outputs. However, you can configure each interface to receive outputs from a different collective member. If you are using hot or warm failover and the PI Server connected to the reporting interface fails, the redundant interface takes over: it receives outputs from its collective member and reports time-series data to the collective. Note that PI Server-induced failover occurs only if the redundant interface remains connected to the data source.

Most PI interfaces use the UniInt (Universal Interface) Failover service to manage failover. This service requires a "heartbeat" between the redundant interfaces. A heartbeat enables the interfaces to communicate their status and synchronize operations. The heartbeat mechanism determines the supported types of failover (that is, hot, warm, or cold). This service supports two heartbeat mechanisms:

- Data-Source Synchronization (Phase 1 Failover) (page 20)
- Shared-File Synchronization (Phase 2 Failover) (page 21)

For more detailed information on interface failover, see the UniInt Interface User Manual.

Data-Source Synchronization (Phase 1 Failover)

You can implement interface failover using data-source synchronization (also called phase 1 failover). With data-source synchronization, UniInt Failover writes information to the data source to communicate status and to synchronize operations between two interfaces. With this method, UniInt Failover provides hot failover: no data loss occurs when an interface fails.

To use data-source synchronization, the data source must be able to communicate with and provide data to two interfaces simultaneously. The data source must also be able to receive data from interfaces. In addition, the interface must be able to send data (that is, outputs) to data sources.

Only some data sources support data-source synchronization. OSIsoft recommends using data-source synchronization only in one of the following situations:

- Performance degradation occurs using shared-file synchronization.
- You cannot grant read and write permissions for the shared file to both interfaces.

Each type of interface has a unique procedure for configuring data-source synchronization. Consult the interface documentation for details.

Shared-File Synchronization (Phase 2 Failover)

You can implement interface failover using shared-file synchronization (also called phase 2 failover). With shared-file synchronization, UniInt Failover writes information to a shared file to communicate status and to synchronize operations between two interfaces. With this method, UniInt Failover can provide hot, warm, or cold failover. With hot failover, no data loss occurs when an interface fails. With warm or cold failover, however, some data loss might occur when an interface fails.

You must choose a location for the shared file. You can store the file on one of the interface machines or on an alternate machine. OSIsoft recommends storing the file on a file-server machine that has no other role in the data-collection process.

See How to Configure Shared-File Synchronization (page 22) for a general procedure for configuring interface failover using shared-file synchronization.

How to Configure Shared-File Synchronization

This topic describes the procedure for setting up two interfaces for failover using shared-file synchronization. For more detailed information, see the interface documentation.

Before starting this procedure, you must:

- Stop your interfaces.
- Choose a location for the shared file.
- Select a unique failover ID number for each interface.

To configure interface failover using shared-file synchronization:

1. Configure the shared file.
   a. Choose a location for the shared file. You can store the file on an interface computer or on a separate machine. OSIsoft recommends storing the file on a separate machine.
   b. Create a shared file folder and assign permissions that allow both the primary and backup interfaces to read and write files in the folder.

2. On each interface machine, open PI Interface Configuration Utility and select the interface.
   a. Click Start > All Programs > PI System > PI Interface Configuration Utility.
   b. In Interface, select the interface.

3. If you have a PI collective and PI Server sends outputs to this interface, point each interface to a different collective member.
   a. In the page tree, select General.
   b. Under PI Host Information, set SDK Member to a collective member for the interface. This property sets which PI Server in the collective sends the interface configuration data and outputs. If you set each interface to a different collective member, you enable failover when the PI Server that sends outputs becomes unavailable.
   c. Set API Hostname to the host of the selected SDK Member. The interface uses this information to connect to the PI Server that provides configuration data. The drop-down list shows the host specified in various formats. You can specify the host as an IP address, a path, or a host name. However, if you enable buffering, you must specify the buffered server names in the same format; otherwise, buffering will not work.

4. Configure the failover parameters at each interface.
   a. In the page tree under UniInt, select Failover.
   b. Select the Enable UniInt Failover check box to enable the properties on this page.
   c. Select Phase 2 to indicate shared-file synchronization.
   d. In Synchronization File Path, specify the directory and file name of the synchronization file (click Browse to select the directory and use the default file name).
   e. In UFO Type, select the failover type.
   f. In Failover ID# for this interface, enter the unique failover ID you have selected for this interface.
   g. In Failover ID# for the other interface, enter the unique failover ID you have selected for the alternate interface and specify the path to that interface (click Browse to select the interface).

5. From the interface connected to the primary server, create the digital state tags to support failover.
   a. On the UniInt Failover page, right-click a tag and choose Create UFO_State Digital Set on Server XXX, where XXX is the name of the PI collective or PI Server.
   b. Click OK to close the confirmation message.
   c. Right-click a tag and choose Create all points (UFO Phase 2). PI ICU creates the tags on PI Server.

6. Check that the user from each interface has permission to write to the shared file.
   a. In the page tree, select Service.
   b. Verify that the user name assigned in Log on as can read and write to the folder that will store the shared file.

7. Click Apply to save the configuration changes. PI ICU records these settings as UniInt command-line parameters in the interface startup file; a sketch follows.
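The UniInt command-line parameters that PI ICU writes for step 4 look roughly like the following sketch. The interface executable MyInt.exe, the server names, and the share path are hypothetical placeholders; consult the UniInt Interface User Manual for the authoritative parameter list:

rem Startup for the first interface machine; the partner machine
rem reverses the IDs (/UFO_ID=2 /UFO_OtherID=1) but is otherwise identical.
MyInt.exe /ps=SCADA /id=1 /host=MYCOLLECTIVE:5450 ^
  /UFO_ID=1 /UFO_OtherID=2 ^
  /UFO_Type=WARM ^
  /UFO_Sync=\\FILESRV\UFO\MyInt_UFO.dat

Here /UFO_Type corresponds to the UFO Type selection (HOT, WARM, or COLD), and /UFO_Sync points at the shared synchronization file that both machines must be able to read and write.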