Technical Information


Virtualization Platform
Planning and Implementation Guide

TI 30A05B10-01EN

Yokogawa Electric Corporation
, Nakacho, Musashino-shi, Tokyo, Japan

Copyright Sep (YK)
1st Edition Sep (YK)


Preface

This document provides guidelines on how to plan and implement the virtualization platform when applying it to Yokogawa IA system products.

The target readers of this document are engineers who have good knowledge about:
- Industrial instrumentation and control systems
- Information technology (computers, networks, security, etc.)
- Yokogawa IA products (CENTUM VP, ProSafe-RS, Exa-series, etc.)

In this document, these technical terms are assumed to be known and are used without detailed explanation.

Related documents
- GS 30A05B10-01EN: IA System Products Virtualization Platform
- IM 30A05B10-01EN: Virtualization Platform Read Me First
- IM 30A05B20-01EN: Virtualization Platform Setup
- IM 30A05B30-01EN: Virtualization Platform Security Guide
- (Each product IM): Read Me First, Release Information, Users Guide, etc.
- (Each product IM): Installation, Installation Guide, Installation Manual, etc.
- (Each product IM): Security Guide

Drawing Conventions
Some drawings may be partially emphasized, simplified, or omitted for convenience of description.

Trademark
CENTUM, ProSafe, PRM, Exaopc, Exapilot, AAASuite, and Vnet/IP are either registered trademarks or trademarks of Yokogawa Electric Corporation. All other company or product names appearing in this document are trademarks or registered trademarks of their respective holders. The TM and registered trademark marks are not used in this document to indicate those trademarks or registered trademarks.

All Rights Reserved Copyright 2018, Yokogawa Electric Corporation

Definitions, Abbreviations, and Acronyms

Definitions, abbreviations, and acronyms used in this document are described in the table below.

Table: Terms in this document

Virtualization software: Software that realizes virtualization. It may be called a hypervisor.

Virtualization host computer: A physical server on which the virtualization software is installed and on which the virtual machines operate.

Virtual machine: A virtualized computer that operates on the virtualization host computer. An OS and applications are installed and executed in this computer.

Virtualization host OS, Host OS: The management OS of the virtualization platform for managing the other virtual machines. In this document, this is referred to as the host OS.

Virtualization guest OS, Guest OS: An operating system that runs on a virtual machine. In this document, this is referred to as the guest OS.

Vnet/IP station: Computers (including virtual machines) compatible with the Vnet/IP protocol, such as HIS and ENG, and devices such as FCS and SCS for Vnet/IP.

Virtual Vnet/IP station: The name used when the Vnet/IP station is a virtual machine.

NIC: An abbreviation for Network Interface Card. The original meaning is a PCI card for Ethernet communication with an RJ-45 connector. Broadly speaking, it may refer to hardware for Ethernet communication in general, including on-board Ethernet ports. In this document, it means hardware for connecting to Ethernet.

Network adapter: Almost synonymous with NIC, but in this document it means a means of connecting to Ethernet that includes not only hardware but also software.

Virtual environment: The operating environment of applications configured by introducing virtualization.

Physical environment: The operating environment of applications consisting only of conventional physical PCs, without introducing virtualization.

Standard virtual machine: A virtual machine configured with the standard resource capacity to operate as a virtual Vnet/IP station.

NMS: An abbreviation for Network Management System. A system for managing and monitoring the configuration information and operating conditions of the devices and services on the network.

HMI client: In this document, this refers to a thin client.

Process control network: A network for control management that connects Vnet/IP stations, which is expressed as information bus or Ethernet on the CENTUM system.

Plant information network: An information network for connecting Vnet/IP stations and upper software package systems (solution products, etc.), which is expressed as information bus or Ethernet in the CENTUM system.

Thin client: A client computer configured with minimum functionality and performance as a user interface for virtual machines, etc.

Remote UI network: The name of the Ethernet network between virtual machines and thin clients.

Management network: The name of the Ethernet network used to monitor and manage the virtualization platform software and hardware.

LUN: An abbreviation of Logical Unit Number. A number for identifying a logical unit in the storage; the OS recognizes different disk devices in units of LUNs.

VLAN: An abbreviation for Virtual LAN. Technology that enables configuring virtual network segments independent of the physical connection, using L2 switches.

Virtualization Platform Planning and Implementation Guide
TI 30A05B10-01EN 1st Edition

CONTENTS

1. Overview of Virtualization
   What is Virtualization
      Server Virtualization
      Virtualization Software
      Virtual Network of the Virtualization Host Computer
      Cluster System of the Virtualization Host Computer
      Live Migration of Virtual Machine
      HMI Environment of the Guest OS
   Benefits of Virtualization
   Matters to Consider in Virtualization
2. Overview of Virtualization Platform
   What Is Virtualization Platform?
   Characteristics of Virtualization Platform
   Control System Configuration Using the Virtualization Platform
      HA Cluster Configuration
      Single Configuration
3. Details of the Virtualization Platform System
   Detailed View of the System Configuration
      HA Cluster Configuration
      Single Configuration
   Network
   SNTP Server
   Domain Controller
   NMS (Network Management System)
4. Functions Provided by the Virtualization Platform
   Management Software <Function of Hyper-V>
   Live Migration <Function of Hyper-V>
   Failover <Function of Hyper-V>
   NIC Teaming <Function of Hyper-V>
   Resource Control <Function of Hyper-V>
   Backup <Function of Hyper-V>
   IT Security <Function Provided by Yokogawa>
   Log Save <Function Provided by Yokogawa>
   Checkpoint <Function of Hyper-V>
   Replication <Function of Hyper-V>
5. Virtualization Platform System Configuration Selection Guide
   Target Product for Virtualization Platform
   Software to Run on the Guest OS
   Software to Run on the Host OS
   Provided Media
   Software Environment
      Virtualization Host Computer Host OS
      Domain Controller OS
      IT Security
      Others
   NMS (Network Management System) Selection Criteria
   Various Licenses
      Windows OS
      Yokogawa System Products
6. Hardware Configuration
   Virtualization Host Computer
      Server model
      About Immobilization of Network Port Allocation
      About the versatile network port
      Details of Server Specification at Single Configuration
      Details of Server Specification at HA Cluster Configuration
   Shared Storage
   L2 Switch
   Preparation for Selected Hardware
7. Resource Capacity of the
   Resource Capacity Used by the Host OS
   Resource Capacity Used by Yokogawa System Products
      Common
      CENTUM VP
      ProSafe-RS
      Exaopc
      Exapilot
      AAASuite
      PRM
8. Functional Specification
   Vnet/IP Communication Software
   Hardware Status Monitor
      Supported Interface
      Detectable Hardware Abnormality
   TCP/UDP Port
9. Thin Client
   Overview
   Positioning
   Specifications
      Thin Client Specifications
      Line-up of Thin Client
   Other Cautions
      Specification of simultaneous connection to virtual machines
10. IT Security
   Overview
   Specification
11. Vnet/IP Communication Software
   Overview
   Specification
Appendix A: Resource Capacity
   Server Resource Capacity
      Host OS
      Total Resource Capacity of Server
Appendix B: Engineering Memo
   Resource Control
      Guest OS
   Relationship between the Number of Zones and the Number of Network Cards
   idefine of ProSafe-RS


1. Overview of Virtualization

This chapter describes virtualization in general.

1.1 What is Virtualization

Virtualization refers to technology that makes a single piece of physical hardware look like multiple logical hardware units, or that makes multiple physical hardware units appear as a single logical one. Among virtualization technologies, server virtualization, storage virtualization, and network virtualization are well known. The virtualization platform refers to the platform that applies server virtualization technology to Yokogawa system products.

1.1.1 Server Virtualization

Server virtualization is a technology that uses virtualization software (a hypervisor) to divide the hardware resources of one physical server into multiple logical resources. A virtual hardware environment constructed from these logical resources is called a virtual machine, and the operating system installed on a virtual machine is called the guest OS. As computer performance has improved and virtualization technology has advanced, it is now possible to run multiple virtual machines on one physical server. This enables the user to utilize hardware resources effectively and to run different kinds of operating systems and applications while keeping them independent of one another.

Virtual machine
In a non-virtualized environment, the computer itself exists as a single machine. When virtualization is implemented, a virtual machine refers to a guest OS running on the virtualization host computer together with the group of software running in that guest OS.

Virtualization host computer
A virtualization host computer is a physical server on which the virtualization software is installed for running virtual machines. Two or more virtual machines can be operated on one virtualization host computer.

1.1.2 Virtualization Software

Virtualization technology has produced various types of virtualization software, each with its own advantages and disadvantages. The user therefore needs to select the virtualization software suitable to the purpose of virtualization. Virtualization software for server virtualization is classified into two types, the host type and the bare metal type, depending on the implementation method. Each type has the following characteristics.

Table: Comparison of virtualization software

  Characteristic                        Host type        Bare metal type
  Necessity of host OS                  Yes              No (*1)
  Usability of software                 Easy             Knowledge required
  Resource control of physical server   High overhead    Low overhead
  Consolidation count                   Small scale      Large scale
  Virtual machine performance           Low, unstable    High, stable

  *1: The host OS is not involved in adjusting the CPU and memory of the virtual machines.

Based on these characteristics, the bare metal type of virtualization software is suitable for applying server virtualization to a plant control system.

There are two implementation methods for bare metal type virtualization software: the monolithic type and the microkernel type. The difference between the two lies in what controls the virtual machines and where the device drivers run. Bare metal virtualization software implemented as the monolithic type itself runs and manages the virtual machines and runs the device drivers. Bare metal virtualization software implemented as the microkernel type only runs the virtual machines; managing the virtual machines and running the device drivers are done by a separately prepared virtual machine. Examples of bare metal type virtualization software are VMware vSphere, which is classified as the monolithic type, and Microsoft Hyper-V, which is classified as the microkernel type.

Figure: Server virtualization (bare metal type), monolithic type and microkernel type

Figure: Server virtualization (host type)

1.1.3 Virtual Network of the Virtualization Host Computer

A virtual network of a virtualization host computer is a network implemented by and in the virtualization software so that a virtual machine on the virtualization host computer can communicate with other virtual machines or with external devices outside the virtualization host computer. A virtual network consists of a virtual L2 switch (virtual switch) and virtual network adapters (virtual NICs). Because a virtual NIC behaves as a 1-port network adapter, it becomes usable by assigning one IP address and one MAC address to it: the user assigns the IP address in the guest OS, and the virtualization software assigns the MAC address. When a physical network adapter (physical NIC) is connected to the virtual switch, it can be used for communication with the external network. When a virtual machine is seen from a device on the external network, it is recognized not by the MAC address and IP address of the physical NIC but by those of the virtual NIC.

Figure: Configuration of the virtual network
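The relationship between virtual NICs, the virtual switch, and the physical uplink described above can be sketched as a small model. This is an illustrative sketch only: the class names, MAC addresses, and frame format are assumptions for the example and are not part of any hypervisor's API.

```python
class VirtualNic:
    """Behaves as a 1-port network adapter: one MAC address (assigned by
    the virtualization software) and one IP address (assigned by the user
    in the guest OS)."""
    def __init__(self, mac, ip):
        self.mac = mac
        self.ip = ip
        self.received = []


class VirtualSwitch:
    """Virtual L2 switch: delivers frames to attached virtual NICs by MAC
    address, and sends frames for unknown destinations out through the
    physical NIC (uplink) to the external network."""
    def __init__(self):
        self.ports = {}   # MAC address -> VirtualNic
        self.uplink = []  # frames forwarded to the external network

    def attach(self, nic):
        self.ports[nic.mac] = nic

    def forward(self, frame):
        dst = frame["dst_mac"]
        if dst in self.ports:
            self.ports[dst].received.append(frame)  # VM-to-VM traffic
        else:
            self.uplink.append(frame)               # out via physical NIC


# Two guest OS virtual NICs attached to the same virtual switch
vswitch = VirtualSwitch()
nic_a = VirtualNic("00:15:5d:00:00:01", "192.168.0.10")
nic_b = VirtualNic("00:15:5d:00:00:02", "192.168.0.11")
vswitch.attach(nic_a)
vswitch.attach(nic_b)

vswitch.forward({"dst_mac": "00:15:5d:00:00:02", "payload": "hello"})
vswitch.forward({"dst_mac": "aa:bb:cc:dd:ee:ff", "payload": "to outside"})
print(len(nic_b.received), len(vswitch.uplink))  # prints: 1 1
```

As in the text, an external device addressing the virtual machine sees only the MAC address of the virtual NIC, never that of the physical NIC, because the switch delivers by the virtual NIC's MAC.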

1.1.4 Cluster System of the Virtualization Host Computer

A cluster system is a system configured to behave like a single server by combining multiple individual servers using a network, an internal bus, or the like. Typical examples of cluster systems are the load balancing cluster and the HA cluster. Running applications on a cluster system provides highly available services.

The load balancing cluster balances the load on a single server by distributing the processing across multiple servers. Even if one server stops due to a failure, the other servers take over the processing so that service availability is maintained. Note that the applications running on the servers must support distributed processing.

The HA cluster enables the processing of the system to continue, when the active server stops due to a failure, by letting a standby server prepared in advance take over the data and processing of the active server. This mechanism, referred to as failover, improves service availability. Configuring an HA cluster requires cluster software; the applications running on the server, however, have no specific requirements.

The cluster system on the virtualization platform refers to an HA cluster consisting of virtualization host computers. Such an HA cluster has the following characteristics:

- Much virtualization software includes the functions of cluster software, so there is no need to prepare cluster software separately.
- The servers that make up the HA cluster periodically exchange network packets called heartbeats with one another to confirm that each is operating normally.
- To let the standby server take over data from the active server at the time of failover, a data area (shared storage) accessible from both servers is required.
- In an HA cluster using virtualization host computers, the data handed over between servers is the set of data that constitutes a virtual machine. Therefore, this set of data must be placed on the shared storage.

Figure: HA Cluster
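The heartbeat supervision and failover behavior described above can be sketched as follows. This is a minimal illustration, not actual cluster software: the class names, the 5-second timeout, and the VM names are assumptions for the example.

```python
import time


class HaClusterMonitor:
    """Minimal sketch of heartbeat supervision between HA cluster nodes:
    the peer is presumed failed once no heartbeat has arrived within the
    timeout window."""

    def __init__(self, timeout=5.0):
        self.timeout = timeout
        self.last_heartbeat = time.monotonic()

    def on_heartbeat(self):
        # Called whenever a heartbeat packet arrives from the peer node.
        self.last_heartbeat = time.monotonic()

    def peer_failed(self, now=None):
        now = time.monotonic() if now is None else now
        return (now - self.last_heartbeat) > self.timeout


def failover(vm_names, standby_host):
    """On failover, the standby node restarts each virtual machine from
    the set of data (definition and disks) kept on the shared storage."""
    return {vm: standby_host for vm in vm_names}


mon = HaClusterMonitor(timeout=5.0)
mon.on_heartbeat()
# Simulate 6 seconds passing with no further heartbeats from the peer:
placement = {}
if mon.peer_failed(now=mon.last_heartbeat + 6.0):
    placement = failover(["HIS-VM", "Exaopc-VM"], standby_host="host-2")
    print(placement)  # both VMs are restarted on the standby host
```

The sketch shows why the shared storage is mandatory: the standby node can only restart a virtual machine if the data constituting it is reachable from both nodes.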

1.1.5 Live Migration of Virtual Machine

The live migration (*1) of a virtual machine is a function to migrate a virtual machine running on one virtualization host computer to another virtualization host computer while the virtual machine keeps running. With this function, the user can change the virtualization host computer on which a virtual machine runs without stopping the guest OS or the applications running on the virtual machine. The user can therefore utilize this function for maintenance tasks that require turning off or restarting the virtualization host computer, for example, a BIOS update of the physical server, hardware replacement, or applying a patch to the host OS, and thereby reduce scheduled service stops. Live migration cannot be used for fault tolerance (FT), because it is a function that can be used only while the virtualization host computer is running normally.

*1: This technology is called vMotion in VMware.

1.1.6 HMI Environment of the Guest OS

Virtual machines running on a virtualization host computer logically operate independently of each other. For the user to operate them independently, mutually independent HMI environments are required. To operate a virtual machine, it is only necessary to be able to view the desktop of the guest OS and to convey the user's desktop operations to the virtual machine. Therefore, an HMI environment using a remote connection through the network is usually used. Because the connection goes through the network, the operational feel of the HMI environment of a virtual machine differs from that of an HMI environment directly connected to a physical computer.

Figure: HMI environment of the guest OS

The devices (HMI clients) used as the HMI environment are classified into the following three types according to the functions they implement: the thin client, the zero client, and the fat client (a conventional computer).

A thin client is a client device that has only the minimal I/O functions (display, keyboard, and mouse) and the minimal network functions for connecting to the guest OS and transferring the screen. Like the thin client, a zero client has only minimal I/O and network functions; furthermore, it has built-in hardware optimized specifically for desktop virtualization (*1). When a conventional computer is used as an HMI client, it is referred to as a fat client.

*1: Desktop virtualization runs a desktop environment on a virtual machine prepared for each user.

Table: Characteristics of HMI client devices

  Characteristic                           Thin client     Zero client     Fat client
  Cost of device                           Inexpensive     Inexpensive     Expensive
  Processing performance                   Medium          High            High
  Ensuring security                        Easy            Easy            Difficult
  Integrated device management             Easy            Not required    Difficult
  Communication protocol dependence (*1)   No              Yes             No
  Protocol processing                      By software     By hardware     By software
  Supported protocol types                 Two or more     One             Two or more
  Local storage device                     No              No              Yes
  Required installation space              Small           Small           Large
  Fault tolerance (*2)                     High            High            Low

  *1: In desktop virtualization, the virtualization software and the recommended communication protocol differ depending on the vendor.
  *2: Judged by the number of rotating parts such as fans and HDDs.

Based on these characteristics, the thin client type, which does not depend on the virtualization software vendor and for which fault tolerance and security are easy to ensure, is suitable as the HMI client of the virtualization platform. Hereafter, the HMI client is referred to as a thin client.

1.2 Benefits of Virtualization

Virtualization technology can reduce the user's total cost of ownership (TCO) as follows.

Reducing the number of physical servers
Configuring two or more virtual machines on one physical server enables the user to utilize hardware resources effectively. In addition, because each virtual machine can run an independent operating system, the user can reduce the number of physical servers. This in turn reduces the footprint and the power consumption, which lowers the total cost of ownership.

Reducing the cost of management
Reducing the number of physical servers reduces management costs such as maintenance costs. It also has the effect of reducing power consumption.

Reducing the life cycle cost
The existence of the virtualization software between the physical server hardware and each guest OS loosens the dependency between the software (guest OS and applications) and the hardware, so the flexibility of maintenance increases. Specifically, the user can lay out a flexible maintenance plan and has more maintenance options. For example, even when a physical server needs to be replaced with a new one due to deterioration or other issues, the user can migrate smoothly without updating the software. Consequently, the maintenance costs can be reduced.

Ease of backup and restore
All the data related to a virtual machine is handled as files, so the user can back it up easily. Also, because the dependency on the physical server hardware is low, the user can quickly restore it after a failure or disaster. Shortening the downtime improves productivity.

Improving the availability
Applying virtualization technologies such as failover using an HA cluster system and live migration enables the user to shorten the downtime of virtual machines and thereby improve productivity. To use these functions, no special mechanism is required in the applications running on the virtual machines.

1.3 Matters to Consider in Virtualization

When implementing virtualization, the following matters must be considered.

Initial implementation cost
Implementing virtualization requires high-performance hardware, thin client devices, the virtualization software, and other equipment. Therefore, the initial implementation cost may be higher than for an environment without virtualization.

Managing the virtual environment
Dedicated tools and software are used to manage the virtual environment, and basic knowledge is necessary to utilize them. Therefore, learning about virtualization technology is required when implementing virtualization.

Risk of simultaneous failure
When two or more computers are consolidated into one server as virtual machines, the system may be seriously affected when that server fails, as a harmful side effect of the consolidation. For example, if a group of operation and monitoring computers whose availability was increased by distributed arrangement is consolidated, then when the server stops, all the virtual machines stop and operation and monitoring cannot be performed at all. By incorporating countermeasures into the system configuration, this impact can be reduced, but it cannot be completely removed.

Performance
Implementing virtualization abstracts the hardware. Therefore, performance may be lower than in the physical environment.

2. Overview of Virtualization Platform

This chapter describes the overview of the virtualization platform.

2.1 What Is Virtualization Platform?

The virtualization platform is a platform for integrating the physical computers on which the Yokogawa system products are installed into one physical server. A bare metal type hypervisor is used on the physical server that is the destination of the integration. Because a virtual machine on the virtualization host computer cannot use the Vnet/IP interface card, which is original hardware developed by Yokogawa, the Vnet/IP communication function is realized as software.

Figure: Virtualization Platform

The thin client for operating a virtual machine on the virtualization host computer is realized as a remote connection environment through the network. The user can use an OPKB and up to four monitors, as in the conventional physical environment.

Figure: HMI configuration

2.2 Characteristics of Virtualization Platform

The characteristics of the virtualization platform are as follows.

Table: List of characteristics of virtualization platform (1/2)

Virtualization implementation method
  Description: Bare metal type.

Virtualization software
  Description: Microsoft Windows Server 2016 Hyper-V is used.
  Remarks: Hyper-V is used as the standard platform.

Physical server
  Description: The following types of physical servers are used: rack type server, or module type server (rack mountable type).
  Remarks: Choice from the Yokogawa specified models.

Vnet/IP communication software (*1) (*2)
  Description: A guest OS communicates with other Vnet/IP stations by using the Vnet/IP communication software. Because the Vnet/IP communication software can perform Vnet communication using a general-purpose Ethernet card, the Vnet/IP card is not required.
  Remarks: The communication software must be installed on the guest OS.

Vnet/IP domain count
  Description: Up to four domains for a single virtualization host computer (rack type server).
  Remarks: Vnet/IP domain count that can be consolidated into one virtualization host computer.

Thin client (*3)
  Description: The remote connection environment by a thin client supports the following: 2-monitor configuration, 4-monitor configuration, use of an OPKB (USB connection type), and sound output.
  Remarks: Choice from the Yokogawa specified models.

Consolidating into a virtualization host computer
  Description: Multiple virtual machines with the Yokogawa system products installed can be consolidated in the same virtualization host computer. The following conditions, however, must be observed: the resource control settings (*4) are applied to the virtual machines, and the network topology of the virtual network is the same as that of the physical environment.
  Remarks: The resource control settings for all virtual machines are mandatory.

Handling of other products (software other than the Yokogawa system products) (*5)
  Description: The Yokogawa system products and other products can run simultaneously on the same virtualization host computer. The following conditions, however, must be observed: the Yokogawa system products and other products are installed on different virtual machines consolidated in the same virtualization host computer, and the operation of the Yokogawa system products is guaranteed whereas the operation of other products is not guaranteed.
  Remarks: The resource control settings for the virtual machines are mandatory. Other products include WSUS, NMS, and other products not developed by Yokogawa.

High availability
  Description: Adverse effects caused by integrating two or more computers into one virtualization host computer can be reduced by using the following virtualization software functions: live migration, virtualization host computer failover, and replication.
  Remarks: These are measures against the risk of simultaneous failure. Failover requires a restart of the guest OS (this is not a fault tolerant system).

Table: List of characteristics of virtualization platform (2/2)

Integration rate
  Description: There is no upper limit. The design of this specification, however, assumes a maximum configuration of 18 virtual machines running concurrently per virtualization host computer, converted into standard virtual machines. (*6) (*7)
  Remarks: The integration rate is the number of virtual machines that can run simultaneously on one virtualization host computer. The virtual machines can be integrated if the total of the resources required by the host OS and the virtual machines does not exceed the resources of the physical server. (*8)

IT security (*1)
  Description: An IT security tool dedicated to the virtual environment is provided for the following: the host OS of the virtualization host computer, the thin client, and the domain controller.

Virus management
  Description: The host OS of the virtualization host computer is protected using Windows Defender. Windows-based thin clients are protected using the Yokogawa standard anti-virus software.

Virtual machine management tool
  Description: The user can use the tools included with the virtualization software (Microsoft Hyper-V Manager).

Backup and restore
  Description: Using the Microsoft Hyper-V Manager, the user can perform a full backup of the host OS and a full backup of a virtual machine.

Hardware failure notification
  Description: To notify hardware failures, an NMS must be prepared. The hardware status of the server hardware and the shared storage configuring the virtualization platform is notified to the NMS by the software provided by the hardware vendor.
  Remarks: For the NMS engineering method, refer to the manual of the NMS software.

Log save for host OS (*1)
  Description: A log save tool is provided for collecting host OS logs for failure analysis of the virtualization host computer. For the log save of a guest OS, the same tool as in the physical environment is used in the same way.

*1: Yokogawa original function.
*2: For details, refer to Chapter 11, Vnet/IP Communication Software.
*3: For details, refer to Chapter 9, Thin Client.
*4: The resource control settings of a virtual machine are settings to eliminate resource conflicts between virtual machines.
*5: Other products refer to software that is not allowed to coexist with the Yokogawa system products.
*6: The largest virtualization host computer refers to one with 40 physical CPU cores. For details, refer to Chapter 6.
*7: For the standard virtual machine, refer to Chapter 7, Resource Capacity of the.
*8: There is no upper limit for the integration rate setting. For stable operation of the entire system, however, the user can prepare two or more virtualization host computers and plan a distributed arrangement of the virtual machines.
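The integration-rate rule above, that the host OS plus all virtual machines must fit within the physical server's resources, amounts to a simple capacity check, sketched below. The core and memory figures are hypothetical placeholders; the actual capacities per product come from Chapter 7 and Appendix A of this guide, not from this sketch.

```python
def can_integrate(host_cores, host_ram_gb, host_os_need, vm_needs):
    """Return True if the host OS plus all planned virtual machines fit
    within the physical server's CPU cores and memory."""
    cores = host_os_need["cores"] + sum(vm["cores"] for vm in vm_needs)
    ram = host_os_need["ram_gb"] + sum(vm["ram_gb"] for vm in vm_needs)
    return cores <= host_cores and ram <= host_ram_gb


# Hypothetical sizing: 18 standard virtual machines on a 40-core server.
# The per-VM and host OS figures here are illustrative assumptions only.
host_os_need = {"cores": 4, "ram_gb": 8}
vms = [{"cores": 2, "ram_gb": 8}] * 18

print(can_integrate(40, 192, host_os_need, vms))  # prints: True
```

The same check with a smaller server, or with more virtual machines, returns False, which is the cue to either resize the server or distribute the virtual machines across two or more virtualization host computers, as footnote *8 recommends.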

2.3 Control System Configuration Using the Virtualization Platform

This section describes the control system configuration using the virtualization platform.

Figure: System configuration of the virtualization platform

Virtualization host computer
This is a server computer on which the virtualization software, the host OS, and the virtual machines run. System products such as CENTUM VP run in the virtual machines. For details on the specification of the virtualization host computer, refer to Chapter 6.

Thin client and remote UI network
A thin client is used as the monitoring function for a virtual machine. The virtual machine and the thin client are connected using RDP, and the remote UI network can be configured as dual-redundant. For details on the thin client and network redundancy, refer to Chapter 9.

Shared storage
A shared storage is external storage for storing the images of virtual machines. It is connected to the virtualization host computer via the storage network. A shared storage is required when configuring a redundant system using two or more virtualization host computers (referred to in this document as the HA cluster configuration). For details, refer to Chapter . When a virtualization host computer is used in a single configuration, the storage inside the virtualization host computer is used; therefore, the shared storage is not required.

Host OS and the management network

The management network is a network dedicated to the host OS that interconnects the host OSes on multiple virtualization host computers. From the host OS in one virtualization host computer, the user can remotely connect to the host OS in another virtualization host computer to configure the settings of the virtualization software and monitor its status. The user can also perform live migration, backup, and other operations. For details, refer to Chapter 3.

Using the management network, a host OS can also connect to computers on the plant information network through the router. This is mainly used for the time synchronization of the host OS and for connecting to the network management system (NMS). For the usage of the management network, refer to Chapter 3 and later.

Network management system (NMS)

The NMS can detect hardware failures, network trouble, and other failures of the virtualization host computers and the shared storage, and notify the user. For details, refer to Chapters and 5.3.

2.3.1 HA Cluster Configuration

The HA cluster configuration is a configuration that provides system redundancy by interconnecting multiple virtualization host computers through a network (the HA cluster network) to increase availability. Building the HA cluster configuration enables the user to do the following:

Live migration
Live migration is a function to migrate a virtual machine to another virtualization host computer without stopping the virtual machine. When the user needs to stop a virtualization host computer for maintenance, live migration enables the user to move the running virtual machines, without stopping them, to another virtualization host computer within the HA cluster configuration.

Failover
If a virtualization host computer stops due to a failure, the failover function restores operation by automatically restarting its virtual machines on another virtualization host computer within the HA cluster configuration.

Note that, in the HA cluster configuration, a shared storage and a domain controller are mandatory. For details on the HA cluster configuration, such as how to build it, refer to the corresponding chapter.

Single Configuration

A virtualization host computer can also be used in a single configuration. Note that, if the virtualization host computer aborts, all the virtual machines are terminated, so the availability is low. For more information on the hardware configuration when using a virtualization host computer in a single configuration, refer to Chapter 3.

3. Details of the Virtualization Platform System

This chapter describes the system details of the virtualization platform.

3.1 Detailed View of the System Configuration

3.1.1 HA Cluster Configuration

The figure below shows a detailed diagram of the system configuration of the HA cluster configuration in the virtualization platform.

Figure: System configuration of the HA cluster configuration
(The figure shows thin clients on the redundant remote UI network; an SNTP server, domain controller, and NMS on the plant information network, where equipment installed at Level 3 can also be used; and virtualization host computers connected to redundant Vnet/IP, the management network, and the redundant HA cluster network, and connected through redundant L2 switches and redundant storage controllers to the shared storage over the redundant storage network.)

System Configuration of the HA Cluster Configuration

By adopting the HA cluster configuration, you can shorten the downtime of the Yokogawa system products caused by the stoppage of some virtualization host computers. A virtualization host computer stops in cases such as the following:
- Stoppage due to a hardware failure of the virtualization host computer
- Stoppage due to a software update, such as a BIOS update of the virtualization host computer or application of an OS patch to the host OS

To build the HA cluster configuration, virtualization host computers, a shared storage, and a domain controller are required. The shared storage stores the images of the virtual machines, and the network between it and the virtualization host computers is configured as dual-redundant. In addition, the virtualization host computers communicate with each other over the HA cluster network, which is used for live migration and failover. For details on the networks, refer to Section 3.1.3.

On the virtualization platform, the user can use the Windows OS function to build the HA cluster configuration. For that purpose, a domain controller must be installed at a location accessible from the management network, because the host OS of each virtualization host computer needs to be in a domain environment.

Expansion Unit of the HA Cluster Configuration

In the HA cluster configuration of the virtualization platform, up to four virtualization host computers can be connected to one shared storage. The shared storage also requires L2 switches for the storage network; two L2 switches are required so that the network is configured as dual-redundant.

Operation of Each Virtualization Host Computer in the HA Cluster Configuration

The HA cluster configuration consists of two or more virtualization host computers. The following two methods are available to operate them.
Table: Methods of HA cluster configuration

Method 1: Two or more virtualization host computers operate as active servers, and one virtualization host computer is dedicated to the role of standby server. This method is recommended when using three or more virtualization host computers.

Method 2: One or more virtual machines run on every virtualization host computer, and the surplus resources of each virtualization host computer are used for failover and live migration. This method is recommended when using two virtualization host computers.

Method 1

In method 1, the HA cluster configuration is built from two or more virtualization host computers (active servers) on which one or more virtual machines are running, and one virtualization host computer (standby server) on which no virtual machine is running. If an active server goes down, its virtual machines are restarted on the standby server by the failover function. When performing maintenance on an active server, the user migrates all of its virtual machines to the standby server by using the live migration function. After the active server has recovered from maintenance, to keep one server in reserve, the user needs to return all the virtual machines to the original virtualization host computer by using the live migration function.

Method 1 is operable if the standby server has resources (CPU core count, memory size, disk capacity, and network port count) equivalent to the maximum resources required by any single active server.

Figure: Operation method 1 in the HA cluster configuration
(The figure shows failover and live migration from the active virtualization host computer to the standby virtualization host computer, which holds its resources in reserve.)

Method 2

In method 2, no dedicated standby server is prepared as a failover backup for the active servers. In this configuration, if a virtualization host computer goes down, all the virtualization host computers except the failed one act as standby servers: the virtual machines that were running on the failed computer are distributed and restarted on the other virtualization host computers by the failover function. When performing maintenance on one virtualization host computer, the user can use the live migration function to distribute its virtual machines across the other virtualization host computers. After the virtualization host computer has recovered from maintenance, to restore surplus resources on each virtualization host computer, the user needs to return all the virtual machines to their original virtualization host computer by using the live migration function.

With method 2, the burden on the server administrator is expected to increase because of administrative tasks such as calculating the resources of each virtualization host computer and deciding which virtual machine runs on which virtualization host computer.
Figure: Operation method 2 in the HA cluster configuration
(The figure shows failover and live migration between two virtualization host computers that each act as both active and standby server, keeping part of their resources in reserve.)

When operating with two virtualization host computers, either method 1 or method 2 is operable. However, method 2 is recommended when guaranteeing the dual-redundancy of an application that runs on two units, such as HIS. If method 1 is used with two virtualization host computers, two-unit operation is not maintained during failover.
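The resource planning described for methods 1 and 2 can be checked before commissioning. The following Python sketch is purely illustrative (the host names, resource figures, and the helper function are assumptions, not part of the virtualization platform): for each possible single-host failure, it verifies that the surviving hosts have enough spare CPU cores and memory to restart the failed host's virtual machines.

```python
# Hypothetical N+1 capacity check for an HA cluster plan.
# All host capacities and VM demands below are illustrative figures.

def can_absorb_failure(hosts, vms):
    """Return True if, for any single host failure, the remaining
    hosts can hold the failed host's VMs (greedy first-fit check)."""
    for failed in hosts:
        # Spare capacity on each surviving host.
        spare = {
            h: {
                "cores": hosts[h]["cores"] - sum(vms[v]["cores"] for v in hosts[h]["vms"]),
                "mem_gb": hosts[h]["mem_gb"] - sum(vms[v]["mem_gb"] for v in hosts[h]["vms"]),
            }
            for h in hosts if h != failed
        }
        # Try to place each displaced VM (largest first) on some survivor.
        for v in sorted(hosts[failed]["vms"], key=lambda v: -vms[v]["cores"]):
            target = next((h for h, s in spare.items()
                           if s["cores"] >= vms[v]["cores"]
                           and s["mem_gb"] >= vms[v]["mem_gb"]), None)
            if target is None:
                return False
            spare[target]["cores"] -= vms[v]["cores"]
            spare[target]["mem_gb"] -= vms[v]["mem_gb"]
    return True

vms = {"HIS1": {"cores": 4, "mem_gb": 8},
       "HIS2": {"cores": 4, "mem_gb": 8},
       "ENG":  {"cores": 8, "mem_gb": 16}}

# Method 1: two active servers plus one dedicated standby (no VMs).
method1 = {"active1": {"cores": 16, "mem_gb": 32, "vms": ["HIS1", "ENG"]},
           "active2": {"cores": 16, "mem_gb": 32, "vms": ["HIS2"]},
           "standby": {"cores": 16, "mem_gb": 32, "vms": []}}
print(can_absorb_failure(method1, vms))  # True
```

The same check models method 2 by simply omitting the dedicated standby entry: every host then holds VMs, and the check confirms that the surplus on the remaining hosts covers any single failure.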

Behavior upon a one-path failure of the redundant storage network path for the shared storage

For the storage network of the virtualization platform, two paths are provided between the virtualization host computer and the shared storage to form a redundant path. The virtualization platform uses this redundant path as an active path and a standby path. If the active path stops functioning due to a failure, read/write access from the virtualization host computer to the shared storage stops temporarily until the path is switched to the standby side. As a rough guide, the following table shows how long access stops when the selected hardware is used.

Table: Approximate access stop time upon a one-path failure of the redundant path
- Failure locations (1), (3): approx. 45 seconds (link down on the active path)
- Failure locations (2), (4): 1 second or less (link down on the standby path)
- Failure location (5): 10 to 30 seconds (failure of the controller on the active path)
- Failure location (6): 1 second or less (failure of the controller on the standby path)
The locations (1) to (6) refer to the figure below.

Figure: Image of the redundant storage network path for the shared storage
(The active path runs from the virtualization host computer through the L2 switch for storage 1 (failure points (1) and (3)) to controller 1 (5) of the shared storage; the standby path runs through the L2 switch for storage 2 ((2) and (4)) to controller 2 (6).)

If a Yokogawa system product was running while read/write access from the virtualization host computer to the shared storage was stopped temporarily, examples of the effects are as follows:
- Updating of trend data on HIS stops temporarily, and the trend data during that time may be lost.
- On an HIS where CAMS is enabled, the alarms that occurred while access was stopped are not displayed immediately, and are displayed collectively after recovery.

3.1.2 Single Configuration

The figure below shows a detailed diagram of the system configuration of the single configuration in the virtualization platform.

Figure: System configuration of the single configuration
(The figure shows thin clients on the redundant remote UI network; an SNTP server and NMS on the plant information network, where equipment installed at Level 3 can also be used; and one virtualization host computer connected to redundant Vnet/IP and the management network.)

System Configuration of the Single Configuration

Unlike the HA cluster configuration, the single configuration can be built with one virtualization host computer. The virtual machines are stored not on a shared storage but on the local storage within the virtualization host computer. The failover function is not available in the single configuration.

3.1.3 Network

In addition to the networks used by the guest OSes, the virtualization platform requires several networks, including the network for managing the virtualization host computer and the networks required by the host OS, such as those needed for the HA cluster configuration. By dividing the network segments by usage, the networks communicate while minimizing their influence on each other. The table below shows the networks required for the virtualization platform.

Table: Networks required for the virtualization platform
(For each network: user / description / network fault handling / required in the single configuration / required in the HA cluster configuration / maximum number of networks per virtualization host computer)

- Plant information network (guest OS): Same as the plant information network in the conventional physical environment. Fault handling: No. Single: Yes. HA cluster: Yes. Max: 4.
- Vnet/IP (guest OS): Same as Vnet/IP in the conventional physical environment. Fault handling: Yes. Single: Yes. HA cluster: Yes. Max: 4.
- Remote UI network (guest OS): A plant operator uses it when remotely operating the guest OS from a thin client. Fault handling: Yes (*1). Single: Yes. HA cluster: Yes. Max: 4.
- Subsystem communication network (*3) (guest OS): Same as the subsystem communication network in the conventional physical environment. Fault handling: No. Single: Yes. HA cluster: Yes. Max: 4.
- Management network (*4) (host OS): A server administrator uses it when remotely managing the host OS of a virtualization host computer. Fault handling: No. Single: Yes. HA cluster: Yes. Max: 1.
- HA cluster network (*5) (host OS): A host OS uses it to communicate with the host OSes of other virtualization host computers for cluster control. Fault handling: No (*2). Single: No. HA cluster: Yes. Max: 1.
- Storage network (host OS): A host OS uses it to communicate with the shared storage. Fault handling: Yes. Single: No. HA cluster: Yes. Max: 1.

Yes: Required, No: Not required
*1: Even if a failure occurs on the priority route, operation can be resumed by manually switching to the other route.
*2: Among the communications performed on the HA cluster network (live migration and cluster control), the cluster control communication is also performed on the management network.
*3: For details on the subsystem communication, refer to System Integration OPC Client Package (SIOS) and Plant Resource Manager (PRM).
*4: Replication is done on this network. Live migration in the single configuration is also done on this network.
*5: Live migration in the HA cluster configuration is done on this network.
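The "required" columns of the table above can be encoded as data and used to verify a planned network layout. The sketch below is illustrative only (the network identifiers and the checker are assumptions introduced here, not part of the product):

```python
# The required/not-required columns of the network table, encoded as data.
# Note: the subsystem communication network applies only when subsystem
# communication is actually used.
REQUIRED = {
    # network: (required in single config, required in HA cluster config)
    "plant_information": (True, True),
    "vnet_ip": (True, True),
    "remote_ui": (True, True),
    "subsystem": (True, True),
    "management": (True, True),
    "ha_cluster": (False, True),
    "storage": (False, True),
}

def missing_networks(planned, ha_cluster):
    """Return the networks the plan still needs for the chosen configuration."""
    idx = 1 if ha_cluster else 0
    return sorted(n for n, req in REQUIRED.items() if req[idx] and n not in planned)

single_plan = {"plant_information", "vnet_ip", "remote_ui", "subsystem", "management"}
print(missing_networks(single_plan, ha_cluster=False))  # []
print(missing_networks(single_plan, ha_cluster=True))   # ['ha_cluster', 'storage']
```

The second call shows that the same plan, reused for an HA cluster configuration, would still be missing the HA cluster network and the storage network.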

Network Fault Handling

When a network link failure occurs between the virtualization host computer and the external network, corrective actions are implemented for the networks whose link failure directly leads to a malfunction of the Yokogawa system products operating on the virtualization host computer. The networks for which corrective actions are implemented, and the reason each network was selected, are shown in the table below.

Table: Reasons for selecting a network as a target of link failure fault handling
- Vnet/IP: If the Vnet/IP network is disconnected, communication with the controllers becomes impossible, so plant monitoring cannot be performed.
- Storage network: If the storage network is disconnected, the virtual hard disks of the virtual machines cannot be accessed, so the virtual machines stop.
- Remote UI network: A remote UI network disconnection causes a blackout state, so the operator cannot perform plant monitoring.

Notes on the Communication Path between Virtual Machines and Physical NICs

This section describes notes on engineering the communication path between the virtual machines and the physical NICs. As shown in the figure below, there are virtual switches (which function as L2 switches) and virtual NICs between the virtual machines and the physical NICs. The user must engineer the virtual machines and the virtual switches so that the guest OSes can connect to the external networks. If these are not engineered properly, not only does communication from the guest OSes fail, but communication also fails after a migration by live migration or failover from another virtualization host computer. Therefore, the user must understand and design them carefully before performing the engineering.
Figure: Connection configuration of the standard virtual L2 switch
(Within the virtualization host computer, each guest OS has an IP address and a virtual NIC; the virtual NICs connect to named virtual switches in the virtualization software, which connect to the physical NICs in the hardware, and from there to the physical switches on the external network. The host OS also has its own IP address.)

Network used by the virtual machine

The engineering for connecting the network connection port of a guest OS to a physical NIC must be performed in the following order. Note that the user needs to use Hyper-V Manager (the standard software provided by Microsoft to create virtual machines and virtual switches) to create the virtual switches, virtual machines, and virtual NICs.

(1) Create a virtual switch, and specify the physical NIC with which the created virtual switch communicates.
(2) Create a virtual machine, create the virtual NIC that the virtual machine uses, and specify the virtual switch with which the created virtual NIC communicates.
(3) Configure the OS network settings on the guest OS.

The user must name each virtual switch, and the names must be unified across all the virtualization host computers in the HA cluster. Unifying the names means that the virtual switches with the same role (for example, the switch used by Vnet/IP of domain 1) have the same name on all virtualization host computers in the HA cluster. If the names are not unified, live migration and failover do not work properly. The details are described below.

Creating a virtual switch, and establishing the communication path between the virtual switch and the physical NIC

Before creating a virtual machine, the user must create a virtual switch (by using Hyper-V Manager). When creating a virtual switch, the user must name the virtual switch and specify the physical NIC to which it connects. Note that the user must create a virtual switch with the same name on each virtualization host computer that is used as a live migration or failover destination. (Refer to the figure below.)
Figure: Notes on creating virtual switches when building the HA cluster configuration
(The figure shows two virtualization host computers; live migration and failover require a virtual switch with the same name, connected through the physical NICs and physical switches to the same physical network, on both computers.)

Live migration fails if no virtual switch with the same name exists on the destination virtualization host computer at the time of the live migration. If no virtual switch with the same name exists at the time of a failover, the communication path cannot be established after the guest OS starts, so communication becomes unavailable. When building the HA cluster configuration, be sure to perform a live migration test and confirm that the virtual switches were constructed correctly.

In addition, virtual switches with the same name must be connected to the same physical network. If a virtual switch is connected to a different physical network, the communication path is established to that other physical network after a live migration or failover, and the virtual machine cannot communicate with the network to which it should originally be connected. Therefore, the user must engineer the active and standby servers in the HA cluster configuration to connect to the same physical networks with the same virtual switch configuration.
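The two rules above (same switch names on every host, and same-named switches wired to the same physical network) lend themselves to a simple pre-flight check. The following sketch is an assumption for illustration; the host and switch names and the data layout are invented, and this is not a Hyper-V interface:

```python
# Hypothetical consistency check for the virtual switch layout of an
# HA cluster. hosts maps host name -> {virtual switch name -> physical
# network identifier}. All data here is illustrative.

def switch_mismatches(hosts):
    """Return human-readable problems with the virtual switch layout."""
    problems = []
    all_names = set().union(*hosts.values())  # union of switch names on all hosts
    for name in sorted(all_names):
        owners = {h: nets.get(name) for h, nets in hosts.items()}
        missing = [h for h, net in owners.items() if net is None]
        if missing:
            problems.append(f"switch '{name}' missing on: {', '.join(sorted(missing))}")
            continue
        if len(set(owners.values())) > 1:
            problems.append(f"switch '{name}' is wired to different physical networks")
    return problems

hosts = {
    "host1": {"vSW-VnetIP-Dom1": "vnet1", "vSW-RemoteUI": "ui"},
    "host2": {"vSW-VnetIP-Dom1": "vnet1", "vSW-RemoteUI": "ui"},
}
print(switch_mismatches(hosts))  # []
```

An empty result means that, in this respect, a live migration test between the two hosts should not fail for switch-naming reasons.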

Creating a virtual machine and the virtual NIC used by the virtual machine

Next, the user can create a virtual machine. The virtual NIC used inside the virtual machine can also be created at this time. When creating a virtual NIC, the user must specify, by name, the virtual switch to which the created virtual NIC connects.

Configuring the OS network settings on the guest OS

Then, the user can log on to the guest OS and set the IP address, subnet mask, and other settings for the connection port that connects to a network. This enables the guest OS to communicate with the devices on the Ethernet.

Network used by the host OS

For the networks used by the host OS, the user sets the IP address, subnet mask, and other settings for each connection port in the host OS. This enables the host OS to communicate with the devices on the Ethernet. These networks can be configured in the same way as when designing and engineering networks for an ordinary physical computer.

Virtual L2 Switch as the L2 Switch for the Vnet/IP Domain and Its Stage Count

In Vnet/IP, a domain that is connected only through L2 switches, without going through devices such as an L3 switch or a Vnet router, is called a Vnet/IP domain. On the virtualization platform, a domain that is connected using the virtual L2 switch as this L2 switch is also called a Vnet/IP domain. There is an upper limit on the number of L2 switches (stage count) that can exist on the routes between any Vnet/IP stations. When calculating this stage count, the virtual L2 switch should not be included.

Virtualization Host Computer, the Vnet/IP Domain Count, and Zones

A zone is an area isolated by network security using access control. It is applied when you want to divide the engineering computers, operator computers, and so on into zones and limit the range that each computer can access. Because the purpose is to isolate computers for network security, you must be able to set access restrictions on all the networks used by a computer. The virtual switch of the virtualization host computer has no access control function. Therefore, when zones are set for the virtual machines on a virtualization host computer and virtual machines located in different zones need to communicate, they must communicate via an external router or a similar device outside the virtualization host computer that can perform access control.

The following restrictions apply so that zones can be used on the virtualization host computer:
- The maximum number of zones that can be configured on one virtualization host computer is four.
- Each virtual machine must be located in one of the zones.
- Virtual machines located in different zones must not communicate directly through a virtual switch in the same virtualization host computer.
- The above limitations also apply to virtual Vnet/IP stations. Place each virtual Vnet/IP station in one of the zones, and place virtual Vnet/IP stations of different Vnet/IP domains in different zones.

The virtual switch of the virtualization host computer does not have a routing function. Therefore, for Vnet/IP communication between virtual machines in different Vnet/IP domains on the same virtualization host computer, make them communicate via routers external to the virtualization host computer. Likewise, for networks other than Vnet/IP, such as the plant information network and the remote UI network, if the Vnet/IP domains of the virtual Vnet/IP stations are different (that is, the zones are different), make them communicate via network switches external to the virtualization host computer.

Figure: Example zone configurations for the virtualization host computer
(The figure compares a physical environment, in which domains 1 and 2 are connected through a router, with virtualized configurations. In the conforming examples, each zone has its own virtual L2 switches and the zones are connected through a physical L3 router. In the violating example, a virtual L2 switch is shared between zones, which must not be done.)
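The zone design rules above can be expressed as a small validation routine. The sketch below is illustrative only (the zone names, data layout, and checker are assumptions introduced here): it flags more than four zones, a virtual switch shared between zones, and a zone that mixes more than one Vnet/IP domain.

```python
# Hypothetical checker for the zone design rules listed above.
# zones maps zone name -> {"switches": set of virtual switch names,
# "vnet_domains": set of Vnet/IP domain numbers}. Data is illustrative.

def zone_violations(zones):
    problems = []
    if len(zones) > 4:
        problems.append("more than four zones on one virtualization host computer")
    names = sorted(zones)
    for i, a in enumerate(names):
        # Different Vnet/IP domains must be placed in different zones.
        if len(zones[a]["vnet_domains"]) > 1:
            problems.append(f"zone {a} mixes multiple Vnet/IP domains")
        for b in names[i + 1:]:
            # Virtual switches must not be shared between zones.
            if zones[a]["switches"] & zones[b]["switches"]:
                problems.append(f"zones {a} and {b} share a virtual switch")
    return problems

zones = {
    "zone1": {"switches": {"vSW-Vnet-Dom1", "vSW-PIN-1"}, "vnet_domains": {1}},
    "zone2": {"switches": {"vSW-Vnet-Dom2", "vSW-PIN-2"}, "vnet_domains": {2}},
}
print(zone_violations(zones))  # []
```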

Zone Combinations Allowed in One HA Cluster Configuration

The zones of the standby virtualization host computer in the HA cluster configuration must be configured to include all the zones of the active virtualization host computers. This is because the standby virtualization host computer must be able to act as an alternate server if an active virtualization host computer goes down. The maximum number of zones per virtualization host computer on the virtualization platform is four, so up to four zones can be configured in one HA cluster.

Figure: Zones in the HA cluster configuration
(The figure shows three active virtualization host computers running HIS, ENG, SENG, and SIOS virtual machines distributed across zones 1 to 4, and one standby virtualization host computer whose configuration covers all four zones.)
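The "standby covers all active zones" rule is a simple superset condition, sketched below with illustrative data (the zone names and the helper are assumptions, not part of the product):

```python
# Hypothetical check that the standby host's zones cover every zone used
# on the active hosts, and stay within the four-zone limit per host.

def standby_covers_actives(active_zones, standby_zones):
    """active_zones: one set of zone names per active host."""
    needed = set().union(*active_zones)
    return needed <= set(standby_zones) and len(standby_zones) <= 4

actives = [{"zone1"}, {"zone2"}, {"zone3", "zone4"}]
print(standby_covers_actives(actives, {"zone1", "zone2", "zone3", "zone4"}))  # True
```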

3.1.4 SNTP Server

Provide an SNTP server, because the virtualization platform requires time synchronization across the entire platform by means of an SNTP server. If time synchronization is incomplete and a failure occurs in the virtualization platform, it will be difficult to investigate the cause by correlating the various logs. Also, in the HA cluster configuration, incomplete time synchronization may cause live migration or failover operations to fail.

The SNTP server must be installed at a location accessible from the management network. When the SNTP server is installed on the plant information network, the host OS of the virtualization host computer connects from the management network to the plant information network through the L3 router and synchronizes with the SNTP server.

Be sure to prepare the SNTP server at the top of the time synchronization hierarchy as a physical device (including a physical computer). The user must take care not to configure it as a virtual machine (including a domain controller operated in the host OS or in a virtual machine).

The following combinations of components are subject to time synchronization:
- Between a host OS and a domain controller
- Between a host OS and another host OS
- Between a host OS and a guest OS
- Between a guest OS and another guest OS

All components except the guest OSes that are Vnet/IP stations must synchronize their time to the SNTP server. A guest OS that is a Vnet/IP station must synchronize to the Vnet/IP network time by using the function of the Vnet/IP communication software. The user must synchronize the time of the components on the plant information network to the Vnet/IP network time in the same way as in the physical environment. The user can engineer the time synchronization routes by referring to the following figures.

Note that, for details on engineering the time synchronization of a guest OS, refer to the manual of each system product. In the following figures, "Utilize the mechanism of the conventional physical environment" means performing the engineering by the same method as before to synchronize the SNTP server time and the Vnet/IP time.

Figure: Time synchronization in a single configuration, when the guest OS is in a workgroup (non-domain environment)
(The figure shows the host OSes and the non-domain guest OSes synchronizing with the SNTP server through the router and management network, while the guest OSes that are Vnet/IP stations use the mechanism of the conventional physical environment over Vnet/IP.)

Figure: Time synchronization in the HA cluster configuration, when the guest OS is in the domain environment
(The figure shows the host OSes and the domain guest OSes synchronizing via the domain controller, which synchronizes with the SNTP server; the guest OSes that are Vnet/IP stations use the mechanism of the conventional physical environment over Vnet/IP.)

Figure: Time synchronization in the HA cluster configuration, when the guest OS is in a workgroup (non-domain environment)
(The figure shows the host OSes synchronizing via the domain controller and the non-domain guest OSes synchronizing with the SNTP server; the guest OSes that are Vnet/IP stations use the mechanism of the conventional physical environment over Vnet/IP.)
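Accurate synchronization matters here because logs from the host OSes, guest OSes, and shared storage must be correlated after a failure. As a reminder of what an SNTP client actually computes, the sketch below applies the standard offset and round-trip delay formulas from the SNTP specification (RFC 4330) to four timestamps; the numeric values are illustrative.

```python
# Offset and round-trip delay as computed by an SNTP client (RFC 4330):
#   t1 = request sent (client clock), t2 = request received (server clock),
#   t3 = reply sent (server clock),   t4 = reply received (client clock).

def sntp_offset_delay(t1, t2, t3, t4):
    offset = ((t2 - t1) + (t3 - t4)) / 2.0   # how far the client clock lags the server
    delay = (t4 - t1) - (t3 - t2)            # network round-trip time
    return offset, delay

# Illustrative case: client clock 0.5 s slow, symmetric 0.1 s network paths.
offset, delay = sntp_offset_delay(0.0, 0.6, 0.7, 0.3)
print(round(offset, 3), round(delay, 3))  # 0.5 0.2
```

The symmetric-path assumption built into the offset formula is one reason the platform separates network segments: congestion on a shared link would make the two legs asymmetric and bias the computed offset.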

3.1.5 Domain Controller

On the virtualization platform, the user builds the HA cluster configuration by using a Windows Server function (the failover clustering function). One of the requirements for this function is that the host OS must be in a domain environment. Therefore, a domain controller is essential in the HA cluster configuration. To join the host OSes to a domain, the domain controller must be installed at a location accessible through the management network. The user can use a domain controller installed either on the management network or, through the router, on the plant information network used by the guest OSes.

CAUTION
Do not operate the domain controller for the host OSes, prepared to build the HA cluster configuration, in a virtual machine on that very HA cluster. To avoid unnecessary trouble, the user should prepare and operate the domain controller for the host OSes in a virtual machine outside that HA cluster or on a physical server.

3.1.6 NMS (Network Management System)

On the virtualization platform, the NMS is used to monitor failures such as hardware failures of the virtualization host computers and the shared storage and network disconnections, and to acquire trends of the host OS performance data. In addition, the NMS can notify the user when a failure of the virtualization platform is detected. The user needs to engineer all devices to be monitored by the NMS on the virtualization platform so that they are accessible from the management network. Therefore, the NMS must be installed on a network that can access the management network.

37 3. Details of the Virtualization Platform System Functions Provided by the Virtualization Platform This section describes the functions provided by the virtualization platform. Yokogawa offers the virtualization platform whose functions derived from Hyper-V are customized for Yokogawa. We also offer the functions unique to Yokogawa Management Software <Function of Hyper-V> The user can utilize the management software (Hyper-V Manager) that is the standard software for the host OS to configure the Hyper-V settings such as creating virtual switches and specifying the location of virtual machines, and to operate the virtual machines, for example, creating virtual machines and changing the CPU core count or memory capacity of the virtual machines. The Hyper-V Manager can be installed from the server manager of the host OS. Hyper-V Manager enables the user not only to manage the local host OS but also to configure the Hyper-V settings for other virtualization host computers and to remotely control the virtual machines. The user can use the Failover Cluster Manager of host OS to build the HA cluster configuration. The Failover Cluster Manager can be installed from the server manager of the host OS Live Migration <Function of Hyper-V> Live migration is a function to migrate a virtual machine to another virtualization host computer without stopping the running virtual machine, which can be used between virtualization host computers of the single configuration or within an HA cluster configuration. Using this function enables the user to perform the application of security patch and the hardware replacement for the host OS that require stop and restart of the server without turning off the virtual machine. The user must perform live migration for virtual machines one by one manually because it requires large loads on both software and hardware of the virtualization platform. 
When performing live migration in the single configuration, limit the network transmission band for live migration to prevent an excessive load on the disk. Live migration can be performed from Hyper-V Manager in the single configuration and from the Failover Cluster Manager in the HA cluster configuration.

Notes on live migration

Live migration results in an error if any of the following applies. Note that even if the live migration fails, the virtual machine does not stop but continues to operate on the existing virtualization host computer.

- The memory of the destination virtualization host computer is insufficient.
- The physical CPUs of the two virtualization host computers that perform the live migration are incompatible. (*1)
- Either the management network or the HA cluster network is disconnected.
- The virtual switch required by the virtual machine targeted for live migration does not exist on the destination virtualization host computer.

*1: "The physical CPUs are incompatible" means that the CPU instruction set used by the virtual machine differs between the virtualization host computers.
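As a planning aid, the four error conditions above can be checked mechanically before attempting a migration. The sketch below uses a hypothetical dict-based description of hosts and virtual machines; it is illustrative only and is not a Hyper-V API.

```python
def can_live_migrate(vm, src_host, dst_host, mgmt_net_up, ha_net_up):
    """Check the live-migration error conditions listed above.

    Returns (ok, reasons). On failure the VM keeps running on the
    source host, so a False result only means the migration would error.
    """
    reasons = []
    # 1. Destination memory must be sufficient for the VM.
    if vm["memory_gb"] > dst_host["free_memory_gb"]:
        reasons.append("insufficient memory on destination host")
    # 2. The CPU instruction set used by the VM must match on both hosts.
    if src_host["cpu_instruction_set"] != dst_host["cpu_instruction_set"]:
        reasons.append("physical CPUs are incompatible")
    # 3. Both the management network and the HA cluster network must be up.
    if not (mgmt_net_up and ha_net_up):
        reasons.append("management or HA cluster network is disconnected")
    # 4. Every virtual switch the VM uses must exist on the destination.
    missing = set(vm["virtual_switches"]) - set(dst_host["virtual_switches"])
    if missing:
        reasons.append("missing virtual switch on destination: "
                       + ", ".join(sorted(missing)))
    return (not reasons, reasons)


vm = {"memory_gb": 8, "virtual_switches": {"VnetIP", "PIN"}}
src = {"cpu_instruction_set": "x86-64-v3"}
dst = {"free_memory_gb": 32, "cpu_instruction_set": "x86-64-v3",
       "virtual_switches": {"VnetIP", "PIN", "RemoteUI"}}
ok, reasons = can_live_migrate(vm, src, dst, True, True)
print(ok, reasons)  # -> True []
```

The switch names ("VnetIP", "PIN") and instruction-set labels are placeholders; real checks would come from the Hyper-V inventory.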

Prohibiting automatic live migration

The "Protected network" setting is turned on by default. Therefore, live migration occurs automatically when virtual machine resources (CPU/memory) are overcommitted on some of the virtualization host computers in the HA cluster configuration, or when the network specified as "Protected network" for a virtual machine in the HA cluster configuration is disconnected.

On the virtualization platform, live migration may be performed only manually. Therefore, the user must change this default setting before use. However, automatic live migration cannot be inhibited in every case. For example, it cannot be prevented when a virtualization host computer in the HA cluster configuration is shut down while its virtual machines are running.

3.2.3 Failover <Function of Hyper-V>

Failover is used to reduce system downtime when a virtualization host computer stops due to a failure.

What Is Failover?

When a virtualization host computer in the HA cluster configuration stops due to a failure or other reasons, the failover function restores operation by automatically restarting its virtual machines on another virtualization host computer within the HA cluster configuration. In addition, not only when a virtualization host computer stops but also when a guest OS hangs, the failover function can reset and restart the virtual machine. Failover is available only in the HA cluster configuration.

When the failover function restarts a virtual machine, the guest OS starts as it would after an unexpected shutdown. The virtualization platform uses Windows Server failover clustering.

Conditions for Failover

Failover occurs when one of the following events occurs in the HA cluster configuration.
- The virtualization host computer stops due to a failure
- The management network and the HA cluster network are disconnected simultaneously
- The guest OS hangs

Switching Time in Failover

The failover function performs recovery by restarting the guest OS. Therefore, the switching time in failover is, at a minimum, the time to restart the guest OS plus the time to start the applications. Here, "applications" refers to the services of HIS, PRM, and so on.
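The trigger conditions and the minimum switching time above can be modeled as a small planning sketch. The event names and sample durations are illustrative assumptions, not measured Yokogawa figures.

```python
# Events that trigger failover in the HA cluster configuration, per the list above.
FAILOVER_EVENTS = {
    "host stopped by failure",
    "management and HA cluster networks both disconnected",
    "guest OS hang",
}

def triggers_failover(event):
    """True only for the three events listed above; losing a single
    network alone does not cause a failover."""
    return event in FAILOVER_EVENTS

def min_switching_time_s(guest_os_restart_s, app_start_s):
    """Failover recovers by restarting the guest OS, so the switching time
    is at least OS restart time plus application start time."""
    return guest_os_restart_s + app_start_s

print(triggers_failover("management and HA cluster networks both disconnected"))
print(min_switching_time_s(180, 120))  # placeholder 3 min restart + 2 min app start -> 300
```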

3.2.4 NIC Teaming <Function of Hyper-V>

NIC teaming is a function that uses two or more network adapters to balance the network load and to improve the availability of the network adapters through a redundant configuration. The virtualization platform uses NIC teaming when the remote UI network is used in a dual-redundant configuration.

Vnet/IP is a network whose bus is configured as dual-redundant, and its dedicated protocol enables high-quality, real-time communication. Therefore, the user does not need to (and should not) apply NIC teaming to it. The storage network does not use NIC teaming either, because it is made dual-redundant by using Microsoft Multipath I/O (MPIO).

NIC teaming switches the redundant network path only when a network adapter of the virtualization host computer itself links down. It does not switch the path when an error occurs on the network between the thin clients and the virtualization host computer. Therefore, when configuring the remote UI network with a redundant path, design the network so that it automatically recovers from errors on the intermediate network path.

3.2.5 Resource Control <Function of Hyper-V>

Resource control is a function that regulates resources, for example by setting the priority and the upper limit of the resource usage of virtual machines. When two or more virtual machines run on a virtualization host computer and some of them consume a large amount of resources, they may affect the operation of the other virtual machines. Configuring the resource control settings prevents this. For resource control, the user can use the Hyper-V resource control function and Storage Quality of Service (QoS).

3.2.6 Backup <Function of Hyper-V>

The backup function enables the user to acquire a full backup of a host OS or a virtual machine manually.
When an inconsistency occurs in the settings of the host OS or guest OS or in the system, restoring the backup image enables the user to quickly return to the state before the inconsistency occurred. Also, when replacing a server or a shared storage, restoring the backup image of a virtual machine quickly restores the state before the replacement. Note that the backup image of a virtual machine can also be used to migrate the virtual machine to another virtualization host computer.

3.2.7 IT Security <Function Provided by Yokogawa>

Yokogawa provides the IT security tool for the host OS and Windows-based thin clients. For details on the settings, refer to IM 30A05B30-01EN "Virtualization Platform Security Guide."

3.2.8 Log Save <Function Provided by Yokogawa>

Yokogawa provides the log save tool for the host OS. For the list of information to be acquired, refer to IM 30A05B20-01EN "Virtualization Platform Setup."

3.2.9 Checkpoint <Function of Hyper-V>

A checkpoint (snapshot) is a function that saves the state of a virtual machine at a certain point in time. By creating checkpoints before applying patches, before installing applications, and while building the environment, the user can quickly return to the original state even after a mistake.

However, running plant operation on a virtual machine that still has checkpoints is prohibited, because the performance of the virtual machine may deteriorate if it continues long-term operation in that state. This function should be used only temporarily for maintenance purposes, and all checkpoints must be deleted before the plant goes into operation. Note that on the virtualization platform, checkpoints must be created, applied, or deleted while the virtual machine is stopped (shut down).

3.2.10 Replication <Function of Hyper-V>

The purpose of replication is to reduce downtime when the main storage of a single configuration or an HA cluster configuration fails. The storage is a local storage in the single configuration or a shared storage in the HA cluster configuration.

Replication on the virtualization platform is a function that periodically creates replicas (duplicates) of the virtual machines on one virtualization host computer (the primary server) on another virtualization host computer (the replica server) in the virtualization platform environment. It is implemented by using Hyper-V Replica. If the primary server stops due to an error, the user can restore the processes that were running on its virtual machines by using the replicas on the replica server (failover of replication). However, data on the virtual machines is rolled back to the point when the replicas were created. In a failover with replication, a cold start (restart of the virtual machines) from the replica image takes place.
As with restoration from a backup, operations and data updates performed after the roll-back point are not reflected in the replica, and there may be inconsistencies with other devices (virtual machines, physical computers, etc.) that did not experience a failover. Care must therefore be taken in operation.

For the replicated virtual machines to operate equally on the primary server and the replica server, the network configuration and the resource capacity that can be secured for the virtual machines must be identical on the two servers.

In addition, replication increases the CPU load and disk load on both the virtualization host computer where the replicated virtual machines run (the primary server) and the virtualization host computer where the replicas are created (the replica server). Assume that this load is equal to the load of the virtual machine being replicated. This means that for every replicated virtual machine, an extra virtual machine of the same capacity (the resources for replication) effectively runs on both the primary server and the replica server. Therefore, when estimating the number of virtual machines that can be consolidated on one primary server/replica server, the resources for replication must also be taken into account.

Figure Replication between the primary server and the replica server (F030201E.ai)

For example, when a virtualization host computer running 18 virtual machines, each with the same resource capacity as the standard virtual machine, is used as the primary server, no more virtual machines can run on that computer once 9 of them are specified for replication: the resource capacity of the remaining 9 virtual machines is consumed as the resources for replication, so those remaining 9 virtual machines must be stopped. Likewise, on the replica server, the same amount of resource capacity as is secured for replication on the primary server cannot be used to run virtual machines.

Note that the primary server and the replica server must be specified by a Fully Qualified Domain Name (FQDN), according to the Hyper-V specification. Therefore, the management network is used for replication on the virtualization platform. When using replication, pay attention to the management network bandwidth, because the amount of data written on the primary server is also written to the replica server through this network.

The virtual machine image used to restore a virtual machine to its state at a certain point in time is called a recovery point. A replica consists of the Latest recovery point, which restores a virtual machine to its latest state, and Additional recovery points, which are generated every hour. Assume that the disk space required for one recovery point is equal to the space required for the virtual machine being replicated. The number of Additional recovery points can be changed; decide it from the free disk space of the replica server. Additional recovery points are used to restore virtual machines to states earlier than the Latest recovery point.
Additional recovery points should also be retained in case restoring a virtual machine from the Latest recovery point fails. Decide for each job the number of Additional recovery points to retain, based on the disk space of the replica server and on how the virtual machine needs to be restored.

Activating a replica on the replica server in the event of a primary server failure is also called a failover in Hyper-V Manager and the Failover Cluster Manager. However, it is not the failover described in the Failover section; failover to a replica must be performed manually by using Hyper-V Manager or the Failover Cluster Manager.

In the single configuration, a virtualization host computer can be either a primary server or a replica server. In the HA cluster configuration, the whole HA cluster is regarded as one server and can be a primary server or a replica server. After a failover with replication, to have the recovered primary server act again as the virtualization host computer that runs the virtual machines, the virtual machines must be stopped.
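The sizing rules above reduce to simple arithmetic: each replicated virtual machine reserves one extra virtual machine's worth of resources on both servers, and each recovery point needs as much disk space as the virtual machine itself. The following sketch applies those assumptions; the concrete numbers are examples, not Yokogawa specifications.

```python
def max_running_vms(host_capacity_vms, replicated_vms):
    """VMs that can keep running on the primary server once replication
    reserves one VM's worth of resources per replicated VM."""
    return host_capacity_vms - replicated_vms

def replica_disk_needed_gb(vm_disk_gb, additional_recovery_points):
    """Disk space on the replica server for one VM: the Latest recovery
    point plus each hourly Additional recovery point."""
    return vm_disk_gb * (1 + additional_recovery_points)

# The worked example from the text: a primary server sized for 18 standard
# VMs with 9 of them replicated leaves room for only 9 running VMs.
print(max_running_vms(18, 9))        # -> 9
print(replica_disk_needed_gb(200, 3))  # 200 GB VM, 3 additional points -> 800
```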

3.3 Virtualization Platform System Configuration Selection Guide

There are several patterns of system configurations using the virtualization platform. This section gives an overview of what each pattern makes possible, what concerns it raises, and what components it requires.

When designing a system configuration, start from a virtualization host computer in the single configuration (standalone single configuration) and consider how the installation configuration should be changed to add the required virtualization features.

Table 3-4 shows the pros and cons of each pattern of system configuration using the virtualization platform. Consider the installation configuration, paying attention especially to the cons.

Table 3-4 Patterns of system configuration using the virtualization platform and their pros and cons (1/2)

1. Single configuration: Standalone single configuration
   Failover: No / Live migration: No / Replication: No
   Pros: -
   Cons: If the server fails, all applications (virtual machines) stop. The data during the failure will be missing.

2. Single configuration: Single configuration + physical PC
   Failover: No / Live migration: No / Replication: No
   Pros: Some of the applications can continue to run even when the server fails.
   Cons: Some applications stop when the server fails.

3. Single configuration: Single configuration + single configuration (dual servers)
   Failover: No / Live migration: Yes / Replication: Yes
   Pros: All applications can continue to run even when the server fails.
   Cons: A DC is required to run a live migration.

4. HA cluster configuration: Single HA cluster configuration
   Failover: Yes / Live migration: Yes / Replication: No
   Pros: Failover can take place.
   Cons: Shared storage and a DC are required. If the shared storage fails, all applications stop. In the event of a shared storage network error, operation is disabled for about 50 seconds, and the data during that time may be lost. The failure cannot be noticed immediately.
5. HA cluster configuration: HA cluster configuration + physical PC
   Failover: Yes (*1) / Live migration: Yes (*1) / Replication: No
   Pros: Some of the applications can continue to run even when the shared storage fails. When the HA cluster is not functional due to a network error, the PC can be notified of the failure (via NMS).
   Cons: Some applications stop when the HA cluster fails.

Yes: Available  No: Not Available

Table 3-4 Patterns of system configuration using the virtualization platform and their pros and cons (2/2)

6. HA cluster configuration: HA cluster configuration + single configuration
   Failover: Yes (*1) / Live migration: Yes (*1) / Replication: Yes
   Pros: A plurality of applications (though not all) can continue to run even when the shared storage fails. When the HA cluster is not functional due to a network error, notification of the failure is possible (via NMS).
   Cons: Some applications stop when the HA cluster fails.

7. HA cluster configuration: HA cluster configuration + HA cluster configuration (dual clusters)
   Failover: Yes (*2) / Live migration: Yes (*2) / Replication: Yes
   Pros: All applications can continue to run even when the shared storage fails. Downtime can be made zero. When the HA cluster is not functional due to a network error, notification of the failure is possible (via NMS).
   Cons: It is costly.

Yes: Available  No: Not Available
*1: Failover and live migration can be performed only within the HA cluster configuration.
*2: Failover and live migration can be performed only within each HA cluster configuration.

The table below summarizes the pros and cons of each virtualization technique. Consider whether to use each technique, paying attention especially to the cons.

Table Pros and cons of virtualization techniques

Shared storage
   Pros: Failover can be implemented. Can be a measure to reduce the disk load during live migration.
   Cons: If a link-down occurs on the active-side path of the storage network, communication stops for a certain period of time, during which data read/write access may be disabled. Since the virtualization host computer and the shared storage are connected via a network, the connection path may be vulnerable compared with a local disk.

Failover
   Pros: Can be a measure to reduce the downtime due to a sudden death of the virtualization host computer.
   Cons: Since a failover takes time in the order of minutes, the service stops during that time. Restarting of the virtual machines is mandatory.

Live migration
   Pros: Can be a measure to reduce the downtime due to a planned stoppage of the virtualization host computer.
   Cons: When run, the load on the HA cluster network becomes high. Without a shared storage, the disk load becomes high on both the sending and receiving virtualization host computers. Virtual machines are not completely free from being stopped.

Replication
   Pros: Can be a measure to reduce the downtime due to a failure of the virtualization host computer. Can be a measure to reduce the downtime due to a failure of the shared storage.
   Cons: Since data is synchronized periodically (roughly every 5 minutes), the data between the occurrence of the event and the previous synchronization will be lost. When run, the disk load imposed by replication is added to the disk load imposed by the virtual machines. Failover to a replica virtual machine needs to be done manually.

The table below shows the components required to implement each installation configuration.
Consider the installation configuration, referring to this table.

Table Components required for each installation configuration

Installation configuration                                          NMS   SNTP server   Domain controller   Shared storage
Standalone single configuration                                     Yes   Yes           No                  No
Single configuration + physical PC                                  Yes   Yes           No                  No
Single configuration + single configuration (dual servers)          Yes   Yes           Yes (*1)            No
Single HA cluster configuration                                     Yes   Yes           Yes                 Yes
HA cluster configuration + physical PC                              Yes   Yes           Yes                 Yes
HA cluster configuration + single configuration                     Yes   Yes           Yes                 Yes
HA cluster configuration + HA cluster configuration (dual clusters) Yes   Yes           Yes                 Yes

Yes: Required  No: Not required
*1: Required for live migration.
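Transcribing Table 3-4 and the components table into a small lookup lets a planner script a first-pass selection of the installation configuration. The sketch below is illustrative only; the tables in this document remain authoritative (footnoted restrictions such as "within the HA cluster only" are not modeled).

```python
# Capabilities and required components per installation configuration,
# transcribed from Table 3-4 and the components table above.
PATTERNS = {
    "standalone single":        {"failover": False, "live_migration": False, "replication": False,
                                 "needs": {"NMS", "SNTP server"}},
    "single + physical PC":     {"failover": False, "live_migration": False, "replication": False,
                                 "needs": {"NMS", "SNTP server"}},
    "dual single servers":      {"failover": False, "live_migration": True,  "replication": True,
                                 "needs": {"NMS", "SNTP server", "domain controller"}},
    "single HA cluster":        {"failover": True,  "live_migration": True,  "replication": False,
                                 "needs": {"NMS", "SNTP server", "domain controller", "shared storage"}},
    "HA cluster + physical PC": {"failover": True,  "live_migration": True,  "replication": False,
                                 "needs": {"NMS", "SNTP server", "domain controller", "shared storage"}},
    "HA cluster + single":      {"failover": True,  "live_migration": True,  "replication": True,
                                 "needs": {"NMS", "SNTP server", "domain controller", "shared storage"}},
    "dual HA clusters":         {"failover": True,  "live_migration": True,  "replication": True,
                                 "needs": {"NMS", "SNTP server", "domain controller", "shared storage"}},
}

def candidates(need_failover=False, need_replication=False):
    """Patterns that provide all requested features, in table order."""
    return [name for name, p in PATTERNS.items()
            if (p["failover"] or not need_failover)
            and (p["replication"] or not need_replication)]

print(candidates(need_failover=True, need_replication=True))
# -> ['HA cluster + single', 'dual HA clusters']
```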

4. Target Products for Virtualization Platform

This chapter describes the system products that operate on the virtualization platform and how they are provided.

4.1 Software to Run on the Virtual Machine

This section describes the products that support operation on the virtual machines of the virtualization platform.

Yokogawa IA System Products

Table Yokogawa IA System Products for Virtualization
- CENTUM VP (R or later): HIS, ENG, etc. Except APCS, GSGW, UGS, UGS2.
- ProSafe-RS (R or later): SENG, idefine. Except Vnet/IP-Upstream.
- Exaopc (R or later): OPC Interface Package. Except Exaopc-RD.
- Exapilot (R or later): Operation Efficiency Improvement Package.
- AAASuite (R or later): Advanced Alarm Administrator.
- PRM (R or later): Plant Resource Manager. Except RS-232C, NI-FBUS, COM Port connection.

Common

Table Common Software for Virtualization
- Yokogawa Standard Anti-virus Software: formerly called AV11000.
- IT Security (R or later)
- Vnet/IP Interface Package (R or later): described in this document as the Vnet/IP communication software.

Others

The following software, provided by vendors other than Yokogawa, operates on the virtual machine.
- File Server
- Domain Controller

Details of Yokogawa IA System Products for Virtualization

Details of the Yokogawa IA system products for the virtualization platform are shown in the tables below.

Table CENTUM VP (ENG)
Model      Package name                                                        Virtualization
VP6E5000   Engineering Server Function                                         Yes
VP6E5100   Standard Engineering Function                                       Yes
VP6E5210   Module-based Engineering Package                                    Yes
VP6E5215   Tuning Parameter Management Package (for Module-based Engineering)  Yes
VP6E5216   Bulk Editing Package (for Module-based Engineering)                 Yes
VP6E5250   Change Management Package                                           Yes
VP6E5260   Dependency Analysis Package                                         Yes
VP6E5110   Access Control Package                                              Yes
VP6E5150   Graphic Builder                                                     Yes
VP6E5165   Batch Builder (VP Batch)                                            Yes
VP6E5166   Recipe Management Package (VP Batch)                                Yes
VP6E5170   FDA: 21 CFR Part 11 Package                                         Yes
VP6E5420   Test Function                                                       Yes
VP6E5425   Enhanced Test Function Package                                      Yes
VP6E5426   FCS Simulator Package                                               Yes
VP6E5427   HIS Simulator Package                                               Yes
VP6E5450   Multiple Projects Connection Builder                                Yes
VP6E5490   Self-documentation Package                                          Yes
VP6E5800   Turbine I/O Module Logic Builder Package                            Yes

Table CENTUM VP (HIS)
Model      Package name                                                        Virtualization   Remarks
VP6H1100   Standard Operation and Monitoring Functions                         Yes
VP6H1120   Console HIS Support Package for Enclosed Display Style              No               Hardware is console type
VP6H1130   Console HIS Support Package for Open Display Style                  No               Hardware is console type
VP6H1140   Eight-loop Simultaneous Operation Package (for AIP831)              Yes
VP6H2411   Exaopc OPC Interface Package (for HIS)                              Yes
VP6H2412   CENTUM Data Access Library                                          Yes
VP6H4000   Million Tag Handling Package                                        Yes
VP6H4100   Configured Information Reference Package                            Yes
VP6H4150   Output to External Recorder Package                                 No               RS-232C connection with FA-M3
VP6H4190   Line Printer Support Package                                        No               The printer connects by USB
VP6H4200   Historical Message Integration Package (meeting FDA Regulations)    Yes
VP6H4410   Control Drawing Status Display Package                              Yes
VP6H4420   Logic Chart Status Display Package                                  Yes
VP6H4450   Multiple Projects Connection Function Package                       Yes
VP6H4600   Multiple-monitor Support Package                                    Yes
VP6H4700   Advanced Alarm Filter Package                                       Yes
VP6H6510   Long-Term Data Archive Package                                      Yes
VP6H6530   Report Package                                                      Yes
VP6H6660   Process Management Package (VP Batch)                               Yes
VP6H6710   FCS Data Setting/Acquisition Package (PICOT)                        Yes
VP6H1150   Server for Remote Operation and Monitoring Function                 Yes
Table CENTUM VP (FCS)
Model                             Package name                                                                       Virtualization
VP6F1700, VP6F1705                Basic Control Functions (AFV30o/AFV40o), Control Function for FCS Simulator (for AFV30o/AFV40o)   Yes
VP6F1800, VP6F1805                Basic Control Functions (A2FV50o), Control Function for FCS Simulator (for A2FV50o)               Yes
VP6F1900, VP6F1905                Basic Control Functions (A2FV70o), Control Function for FCS Simulator (for A2FV70o)               Yes
VP6F8620                          Off-site Block Package                                                             Yes
VP6F3132                          Valve Pattern Monitor Package                                                      Yes
VP6F3210                          PID with Output Loss Compensation Package (for Field Wireless)                     Yes
VP6F1200, VP6E5500, VP6ESETA      APCS Control Function, User Custom Block Development Package, APCS Set             No
VP6F3100                          Project I/O License                                                                Yes

Table CENTUM VP (Others)
Model                Package name                                                 Virtualization   Remarks
VP6P6900             SOE Server Package                                           Yes
VP6P6910             SOE Server Configurator Package                              Yes
VP6P6920             SOE Viewer Package                                           Yes
VP6P6930             SEM OPC Interface Package                                    Yes
VP6E5030             C Language Development Environment Package for FCS           No
VP6E5500             User Custom Block Development Package                        No
VP6E9001             Exatif DCS Interface for Training Simulator                  Yes              By Omega Simulation Co., Ltd.
VP6F1250             GSGW Generic Subsystem Gateway Package                       No
VP6B2100             System Integration OPC Client Package                        Yes
VP6B1500             UGS Unified Gateway Station Standard Function                No
VP6B1501             Dual-redundant Package (for UGS)                             No
VP6B1600             Unified Gateway Station (UGS2) Standard Function             No
VP6B1601             Dual-redundant Package (for UGS2)                            No
VP6B1550, VP6B1650   OPC Communication Package (for UGS/UGS2)                     No
VP6B1553, VP6B1653   Modbus Communication Package (for UGS/UGS2)                  No
VP6B1591, VP6B1691   EtherNet/IP Communication Package (for UGS/UGS2)             No
VP6B1570, VP6B1670   IEC IED Communication Package (for UGS/UGS2)                 No

Table ProSafe-RS
Model      Package name                                                Virtualization
RS4E5000   Engineering Server Function                                 Yes
RS4E5100   Safety System Engineering and Maintenance Package           Yes
RS4E5170   Access Control and Operation History Management Package     Yes
RS4E5210   I/O List Engineering Package                                Yes
RS4E5250   Change Management Package                                   Yes
RS4E5600   CENTUM VP Integration Package                               Yes
RS4E5700   FAST/TOOLS Integration Package                              No
RS4E5810   idefine Interface Package                                   Yes
RS4H2100   SOE Viewer Package                                          Yes
RS4H2200   SOE OPC Interface Package                                   Yes

Note: Do not install the SENG of ProSafe-RS into the same virtual machine as the HIS-TSE (Server for Remote Operation and Monitoring Function) of CENTUM VP.

Table Exaopc
Model        Package name                                                                                              Virtualization
NTPF100-S1   Exaopc OPC Interface Package for CENTUM VP, CENTUM VP Small, CENTUM CS 3000, CENTUM CS 3000 Small (DA, A&E, HDA Server Functions)   Yes
NTPF100-S3   Exaopc OPC Interface Package for CENTUM CS (DA, A&E, HDA Server Functions)                                No
NTPF100-S6   Exaopc OPC Interface Package for CENTUM VP CAMS for HIS (DA, A&E, HDA Server Functions)                   Yes
NTPF100-SB   Exaopc OPC Interface Package for VP Batch, CENTUM CS Batch 3000 (DA, A&E, HDA, Batch Server Functions; Exaopc/Batch)                Yes
NTPF100-SX   Exaopc OPC Interface Package, OPC Server Redundancy Function (Exaopc-RD)                                  No

Table PRM
Model                Package name                              Virtualization   Remarks
PM4S7100             PRM Device License                        Yes
PM4S7700, PM4S7701   Plant Resource Manager Server             Yes
PM4S7702, PM4S7710   Plant Resource Manager Client             Yes
PM4S7711             Documenting Calibrator Interface          No               COM Port connection
PM4S7720             Field Communication Server                Yes              With Vnet/IP
PM4S7730             Interface for CMMS                        Yes
PM4S7740             PRM Advanced Diagnostic Server            Yes
PM4S7770             GE Energy System 1 Communication Package  Yes
PM4S7780             PST Scheduler Package                     Yes

4.2 Software to Run on the Host OS

The following software runs on the host OS.

Anti-virus Software
Specific anti-virus software is specified for the host OS. Refer to Chapter 5 for details. Anti-virus software is also specified for the thin client, separately from the host OS. Refer to Chapter 9 for details.

IT Security
IT security for the host OS is provided. Refer to Chapter 5 for details. IT security for the thin client is provided separately from the host OS. Refer to Chapter 10 for details.

Software Specified by the Vendor
For the selected hardware of the virtualization platform, specified software is provided by the vendors of the virtualization host computer, the shared storage, and the network switches when building the environment.

4.3 Provided Media

To operate system products on the virtualization platform, two types of media are required: the conventional product media and the software media for the virtualization platform.

System product media
When installing each system product in a virtual machine, install it from the conventional product media. The Vnet/IP communication software is included in the media of each product. Refer to each product's manual for the installation procedure of the Vnet/IP communication software.

Software media for virtualization platform
This is the media dedicated to the virtualization platform and includes the software for the host OS. Printed copies of the IMs for the virtualization platform are packaged with this media.

5. Software Environment

This chapter describes the software environment of the virtualization platform.

5.1 Virtualization Host Computer

This section describes the software environment of the virtualization host computer.

Host OS

OS
The following OS is supported.
- Windows Server 2016 Datacenter Edition, Desktop Experience (Japanese/English)

Windows Services
The following Windows Server roles and features are added for use with the host OS of the virtualization host computer.

Table Windows Services
Name                                      HA cluster configuration   Single configuration
Hyper-V                                   Available                  Available
Hyper-V Management Tools                  Available                  Available
Windows Server Backup                     Available                  Available
Failover Clustering                       Available                  Not required
Failover Cluster Management Tool          Available                  Not required
Failover module for Windows PowerShell    Available                  Not required
Multipath I/O                             Available                  Not required

Anti-virus Software
The following anti-virus software is supported.
- Windows Defender
For details, refer to IM 30A05B30-01EN "Virtualization Platform Security Guide."

IT Security
The IT security tool for the host OS is provided. For details, refer to IM 30A05B30-01EN "Virtualization Platform Security Guide."

Backup Software

The following software supports backing up and restoring the host OS and the virtual machines from the host OS. Refer to Chapter 10 for the execution procedure.

Backup type: Manual full backup (offline)
- Host OS: Windows Server Backup (*1) (*2)
- Virtual machine: Hyper-V Import/Export (*1)

*1: Standard feature of Windows Server
*2: The Windows Server installation media is required for restoration

Notes on backup
Offline backup is recommended for a full backup, because an online backup may not be captured correctly.

Guest OS

OS
The following OS is supported.
- Windows Server 2016 Standard Edition (Japanese/English)

Others
The same software as in the physical environment can be used.

5.2 Domain Controller

This section describes the software environment of the domain controller required for the HA cluster configuration. Two types of domain controllers can be used: one dedicated to the virtualization host computers, and one installed for the domain environment of the Yokogawa system products.

OS
The supported OS types are as follows.

- When the domain controller is common to the Yokogawa system products, that is, a domain controller located on the plant information network that manages the guest OSes: the same as in the physical environment.
- When the domain controller is dedicated to the virtualization host computers, that is, a domain controller connected to the management network that deals with the host OSes: Windows Server 2016 Standard Edition.

IT Security
IT security is provided both for domain controllers common to the Yokogawa system products and for those dedicated to the virtualization host computers. For details, refer to IM 30A05B30-01EN "Virtualization Platform Security Guide."

Others
The same software as in the physical environment can be used.

5.3 NMS (Network Management System)

This section explains the NMS used for detecting hardware failures and monitoring performance trends on the virtualization platform.

Selection Criteria
Prepare an NMS that meets the following selection criteria.

- It can acquire performance trends of the host OS by using WMI (Windows Management Instrumentation).
- It can monitor the hardware state of the virtualization host computer and the shared storage by SNMP v3 polling.

WhatsUp Gold
If no particular NMS is specified, WhatsUp Gold is recommended.

5.4 Various Licenses

This section describes the licenses required for the virtualization platform.

5.4.1 Windows OS

This section describes the licenses for using Windows OS on the virtualization platform. Licenses must be allocated in two places: the virtualization host computer and the thin clients. The licenses allocated to a thin client differ according to the type of the guest OS it connects to. OS license activation is necessary for the host OS and for each guest OS. Figure 5-1 shows where each license is allocated and its type.

Figure 5-1 License allocation required for using Windows OS (F050401E.ai)

Server License
This license is required to run the Windows Server OS on a computer. It must be allocated to the host OS.

Server Client Access License (Server CAL)
This license is required for clients connecting to Windows Server; that is, for a thin client terminal accessing the Windows Server OS running on a guest OS.
Note: This license is unnecessary when one Windows Server uses the functions of another Windows Server.

RDS Client Access License (Remote Desktop Services CAL)
This license is required for clients connecting to Windows Server via RDP; that is, for a thin client terminal that accesses the Windows Server OS (WS2016) running on a guest OS via Remote Desktop.

Notes on purchasing the license of the host OS
Windows Server 2016 used as the host OS has two editions (Standard and Datacenter). The two editions differ in the number of virtual machines (number of OSEs) running Windows Server OS that are allowed to run concurrently on the virtualization host computer, and in the supported OS functions. Purchase the license considering these differences.

Number of OSEs (*1) (*2):
- Standard: 2
- Datacenter: Unlimited

*1: Number of virtual machines running Windows Server OS allowed to run concurrently on the virtualization host computer
*2: Virtual machines running Windows Desktop OS (such as Windows 10) or Linux are not counted in the number of OSEs.

The virtualization platform supports Datacenter Edition only.

About the license of the guest OS
No additional license is required for the Windows guest OS of the virtualization platform.

Yokogawa System Products
For Yokogawa system product licenses, purchase the number of licenses to use according to the license requirements of each product, and install them. Since these are not licenses tied to the virtualization host computer like the Windows OS license, it is not necessary, even in the HA cluster configuration, to prepare them on both the migration source and the migration destination of live migration and failover.

6. Hardware Configuration
This chapter describes the hardware configuration of the virtualization platform.

6.1 Virtualization Host Computer
This section describes the requirements of the physical server to be used as a virtualization host computer for the virtualization platform.

Server model
The following models are used as physical servers for virtualization host computers.
Rack type: Dell PowerEdge R740
Modular type: Dell PowerEdge FX2s, Dell FC640

The following shows the reasons for choosing the above machines as the Yokogawa specified models.
- Host OS: Microsoft Windows Server Hyper-V (*1)
- Long-term maintenance: A device type for which long-term maintenance support can be obtained.
*1: For the server OS, the list of physical servers certified by Microsoft is disclosed in the Windows Server Catalog.

Although the above server models are specified, the memory capacity, disk capacity, etc. installed in the server can be changed according to the number of virtual machines running on the virtualization host computer and the applications running on the virtual machines. Refer to Chapter 7 for estimating the resource capacity used by the virtualization host computer.

CAUTION
For the R740, use the specified device driver version or later of the RAID card (PERC H740P).

About Immobilization of Network Port Allocation
Immobilize the mounting position of each physical Ethernet card mounted on the physical server, and the position of the network ports of each physical Ethernet card, for each application. The position of the network ports that can be used in each zone of the virtualization host computer is also fixed. Immobilization is aimed at reducing work errors when setting up a virtualization host computer and at facilitating local service work. Therefore, it is prohibited to use a network port for any purpose other than its assigned use.
For example, in the case of a zone with only Level 3 product virtual machines, the Vnet/IP port is free, but do not use it for another purpose or from another zone.

6.1.3 About the versatile network port
For each server configuration, there is a versatile network port. Use versatile network ports for applications other than Vnet/IP. The assumed usages are as follows:
- Change the network bandwidth by exchanging with a fixed-use port.
- Add a port for the subsystem communication network.
- Use as a backup-only network of the host OS.

Details of Server Specification in Single Configuration
A rack type server that can be used as a virtualization host computer in a single configuration is available. The server is provided in two types, a 1 CPU type and a 2 CPU type. The following shows the server hardware specifications.

R740 (1 CPU type)
- Body: Dell PowerEdge R740 based (Yokogawa specified model)
- CPU: Intel Xeon Gold, 20 cores (total 20 cores)
- Memory: 64 GB
- Hard disk: 600 GB x 2, hot-plug (for host OS, RAID1: effective volume 558 GB); 1.2 TB x 6, hot-plug (for virtual machines, RAID10: effective volume 3.2 TB)
- RAID: PERC H740P internal RAID (*1), RAID1/10
- On-board Ethernet: 10 Gb DA/SFP+ 4 ports
- Ethernet card: 1 Gb 8 ports, installed into a PCIe slot
- Optical drive: DVD+/-RW
- Power supply unit: Hot-plug power supplies with full redundancy, 1100 W
Reference: Number of standard virtual machines that can be operated: 9 VMs
*1: The specified device driver version or later should be used.

R740 (2 CPU type)
- Body: Dell PowerEdge R740 based (Yokogawa specified model)
- CPU: Intel Xeon Gold, 20 cores; 2nd CPU is the same spec (total 40 cores)
- Memory: 128 GB
- Hard disk: 600 GB x 2, hot-plug (for host OS, RAID1: effective volume 558 GB); 1.2 TB x 10, hot-plug (for virtual machines, RAID10: effective volume 5.4 TB)
- RAID: PERC H740P internal RAID (*1), RAID1/10
- On-board Ethernet: 10 Gb DA/SFP+ 8 ports
- Ethernet card: 1 Gb 8 ports, up to 5 cards by configuration, installed into PCIe slots
- Optical drive: DVD+/-RW
- Power supply unit: Hot-plug power supplies with full redundancy, 1100 W
Reference: Number of standard virtual machines that can be operated: 18 VMs
*1: The specified device driver version or later should be used.

Details of Server Specification in HA Cluster Configuration
A rack type server or a modular type server that can be used as a virtualization host computer in the HA cluster configuration is available. In this configuration, only the 2 CPU type is provided.

R740 (2 CPU type)
- Body: Dell PowerEdge R740 based (Yokogawa specified model)
- CPU: Intel Xeon Gold, 20 cores; 2nd CPU is the same spec (total 40 cores)
- Memory: 128 GB
- Hard disk: 600 GB x 2, hot-plug (for host OS, RAID1: effective volume 558 GB)
- RAID: PERC H740P internal RAID (*1), RAID1
- On-board Ethernet: 10 Gb DA/SFP+ 4 ports
- Ethernet card: 1 Gb 8 ports, up to 4 cards by configuration, installed into PCIe slots; 10 Gb DA/SFP+ 4 ports, 1 card, installed into a PCIe slot
- Optical drive: DVD+/-RW
- Power supply unit: Hot-plug power supplies with full redundancy, 1100 W
Reference: Number of standard virtual machines that can be operated: 18 VMs
*1: The specified device driver version or later should be used.

FX2s (FC640)
- Body: Dell PowerEdge FC640 based (Yokogawa specified model)
- CPU: Intel Xeon Gold, 20 cores; 2nd CPU is the same spec (total 40 cores)
- Memory: 128 GB
- Hard disk: 600 GB x 2, hot-plug (for host OS, RAID1: effective volume 558 GB)
- RAID: PERC H730P internal RAID, RAID1
- On-board Ethernet: 10 Gb 4 ports
- Ethernet card: 1 Gb 8 ports
- Optical drive: None (*1)
*1: To use an optical drive, use a USB DVD drive.

FX2s (chassis)
- Body: Dell PowerEdge FX2s chassis (Yokogawa specified model)
- I/O module: 8-port 10 GbE pass-through module
- Power supply unit: Hot-plug power supplies with full redundancy, 2400 W; plug type: 200 VAC, C20

Supplement for FX2s (FC640)
The Dell PowerEdge FX2s is called a modular server, and is classified as a blade type server. By mounting compute sleds, where CPU/memory are mounted at high density, and modules called storage sleds, where SSDs/HDDs are mounted at high density, into the chassis, it can be used as a 2U rack-mount server. The Dell PowerEdge FC640 is a compute sled, and up to four can be mounted into the FX2s chassis. In the virtualization platform, an FX2s mounting one to four FC640s is lined up as a virtualization host computer for the HA cluster configuration. The figure below shows the mounting image of the compute sleds (FC640) as seen from the front of the FX2s.

[Figure: Relationship between the FX2s chassis and the compute sleds (FC640). The FX2s chassis holds Compute Sled 1 through Compute Sled 4 (1st to 4th FC640).] (F060101E.ai)

6.2 Shared Storage
In the HA cluster configuration, shared storage is used. The storage configuration can be changed according to the capacity, the read/write speed, and the number of virtual machines. For details of the capacity and the write speed, refer to Chapter 7.

Dell SCv3020
- Body: Dell EMC SCv3020 based (Yokogawa specified model)
- OS: Storage Center OS
- Storage controller: Dual controller
- Front-end port: 10 Gbps iSCSI
- Management port: 1 Gbps
- Hard disk: 1.2 TB 10K RPM SAS, 2.5 inch, hot-plug (for virtual machines; RAID10 effective volume 4.8 TB, 10.2 TB, or 15.1 TB depending on the configuration)
- Rack size: 3U
- Power supply unit: Hot-plug power supplies with full redundancy, 1485 W

Attached list: Hard disk
Select the configuration according to the required disk space.
- Configuration 1: 1.2 TB 10K RPM SAS, 2.5 inch, hot-plug; for RAID10: effective capacity 4.8 TB (Group 1)
- Configuration 2: 1.2 TB 10K RPM SAS, 2.5 inch, hot-plug; for RAID10: effective capacity 10.2 TB (Groups 1/2)
- Configuration 3: 1.2 TB 10K RPM SAS, 2.5 inch, hot-plug; for RAID10: effective capacity 15.1 TB (Groups 1/2/3)

[Figure: Positions of Groups 1/2/3 in the enclosure.] (F060102E.ai)

TIP
Part of the mounted HDDs is always used as spare disks.

6.3 L2 Switch
L2 switches are specified for networks of the following uses.

For the storage network
Dell S4048T-ON
- Body: Dell S4048T-ON (Yokogawa specified model)
- Number of ports: 48 fixed 10GBase-T ports supporting 100 M/1 G/10 G speeds; 6 fixed 40 Gigabit Ethernet QSFP+ ports; 1 RJ45 console/management port with RS-232 signaling
- Performance: Forwarding capacity: 1080 Mpps; MAC addresses: 160 K
- VLAN function: Number of VLANs: 4000
- Management function: SNMP v1, v2, v3 support
- Hardware redundancy: Hot-swappable redundant power; hot-swappable redundant fans

For Vnet/IP
Use the same recommended switches for Vnet/IP as in the physical environment.

For the remote UI network / plant information network / management network
There are no specified models for these networks. The L2 switch used for the plant information network in the physical environment can be used.

Preparation for Selected Hardware
For the virtualization host computer, shared storage, and L2 switch, the following model names are set for the hardware configurations described in this section. These are the basic configurations of the virtualization platform. Hardware with (Option) configurations is arranged by adding parts to the basic configuration.

Virtualization host computer (*1) (*2)
- DELL PowerEdge R740XL (*3)
  - 1 CPU single configuration: YG4VR04-A1S1600E0
  - 2 CPU single configuration: YG4VR04-B1D1600E0
  Remarks: Host OS: Windows Server 2016 Datacenter Edition; with power cord (C13/C14); no jumper cord
- DELL PowerEdge FX2s
  - FX2s chassis: YG5VR06-M1N0000X0
  - 2 CPU HA cluster configuration (Dell PowerEdge FC640): YG5VR06-C1D1600E0
  Remarks: Host OS: Windows Server 2016 Datacenter Edition; no power cord; with jumper cord (C19/C20)

*1: The OEM OS of the physical server is licensed for the minimum number of cores as the host OS in each configuration.
*2: The following accessories are not included in the basic configuration. Make arrangements as necessary.
- Keyboard
- Mouse
- Display
- Server CAL / Remote Desktop CAL
- DVD drive for external connection (USB)
- Ethernet transceiver for SFP+ to RJ45 conversion
- Ethernet connection cable for SFP+
- Ethernet connection cable for RJ45
- PDU (power supply tap for rack)
*3: For the R740XL, use the specified device driver version or later of the RAID card (PERC H740P).

Shared storage (*1)
- DELL SCv3020
  - 1.2 TB 10K RPM SAS x 10: VR6ST
  - 1.2 TB 10K RPM SAS x 20: VR6ST
  - 1.2 TB 10K RPM SAS x 30: VR6ST
  Remarks: No power cord; with jumper cord (C13/C14)
*1: The following accessories are not included in the basic configuration. Make arrangements as necessary.
- PDU (power supply tap for rack)
- Ethernet connection cable for SFP+
- Ethernet connection cable for RJ45

L2 switch (*1)
- DELL S4048T-ON: 40 Gb 6 ports (QSFP+), 10 Gb 48 ports (RJ45): VR6SW
  Remarks: Airflow (IO to PSU) (*2); with power cord (C13/C14); no jumper cord

*1: The following accessories are not included in the basic configuration. Make arrangements as necessary.
- 40 Gbps QSFP+ to 10 Gbps SFP+ x 4 breakout cable. Two breakout cables per cabinet of virtualization host computers, and two per cabinet of shared storage, are required.
- Ethernet connection cable for RJ45
*2: Arrange the airflow direction according to the airflow of the rack that will accommodate the L2 switch.

Thin client
- Dell Wyse: displays, DisplayPort x 2; ThinOS 8.4
- Dell Wyse: displays, DVI x 1, DisplayPort x 1; Windows 10 IoT Enterprise 2015 LTSB
- Dell Wyse 7020 Quad Display: 4 displays, DVI x 1, DisplayPort x 3; Windows 10 IoT Enterprise 2015 LTSB

7. Resource Capacity of the Virtual Machine
The resource capacity required for the virtualization host computer (number of CPU cores of the physical server, memory size, etc.) is obtained by calculating the resource capacity required for the operation of the host OS and of each virtual machine, and integrating all of them. The resource capacity required for the host OS and the virtual machines depends on the conditions of the host OS and the virtual machines you want to operate. Therefore, to estimate the required resource capacity, it is necessary to examine in advance the parameters indicating the application scale of the Yokogawa system products (number of simultaneous displays, display update cycle, number of tags, number of data acquisitions per second, etc.) and aspects such as whether the virtualization host computer is to be operated in a single configuration or in an HA cluster configuration. Estimate the resource capacity required to run the host OS and the virtual machines under the conditions examined in advance. For the required resource capacity of each virtual machine, refer to the operation specifications of each product.

This section describes common matters and notes on resource capacity, and the resource capacity necessary for the host OS. The resource capacity of the standard virtual machine shown below is an approximate resource capacity used for estimating the number of virtual machines that can be consolidated on the virtualization host computer. Based on this resource capacity, the configurations of the specified servers in Chapter 6 are determined. The following table shows the resource capacity of the standard virtual machine.
Hardware item of virtual machine / Resource value:
- Number of CPU cores: 2
- Memory size: 4 GB
- Hard disk size: 80 GB
- Disk throughput: 16 MB/sec at maximum
- Network throughput: 1 Gbps at maximum

For each product, the resource capacity shown with the parameter conditions is represented by the following three resource indices:
- Number of CPU cores (pcs.)
- Memory size (GB)
- Disk size (GB)

In addition to the above, the following resource indices may be added in some cases:
- Disk throughput (MB/s)
- Disk IOPS (IO count/s)
- Network throughput (Mbps)

Disk throughput: amount of data read from/written to disk per unit time
Disk IOPS: number of read/write commands per unit time
Network throughput: amount of data communicated over the network (transmission/reception) per unit time
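The resource indices and the integration step ("calculate the resource capacity for each virtual machine and integrate all of them") can be sketched in Python. This is an illustrative data model, not from the TI document; only the standard virtual machine values come from the table above.

```python
from dataclasses import dataclass

# Hedged sketch: a holder for the resource indices listed above.
@dataclass
class VmResources:
    cpu_cores: int         # Number of CPU cores (pcs.)
    memory_gb: float       # Memory size (GB)
    disk_gb: float         # Disk size (GB)
    disk_mbps: float = 0   # Disk throughput (MB/s), optional index
    net_mbps: float = 0    # Network throughput (Mbps), optional index

# Standard virtual machine, values from the table above.
STANDARD_VM = VmResources(cpu_cores=2, memory_gb=4, disk_gb=80,
                          disk_mbps=16, net_mbps=1000)

def total(vms):
    """Integrate the resource requirements over a set of virtual machines."""
    return VmResources(
        cpu_cores=sum(v.cpu_cores for v in vms),
        memory_gb=sum(v.memory_gb for v in vms),
        disk_gb=sum(v.disk_gb for v in vms),
        disk_mbps=sum(v.disk_mbps for v in vms),
        net_mbps=sum(v.net_mbps for v in vms),
    )
```

For instance, nine standard virtual machines (the reference count for the 1 CPU single-configuration server) integrate to 18 cores, 36 GB of memory, and 144 MB/sec of disk throughput.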

About the specified servers
For the specified servers and shared storage, the hardware configurations are determined by assuming the specifications of the standard virtual machine and the number of virtual machines operating simultaneously. When using a specified server, you cannot change its hardware capability. Therefore, if there are virtual machines whose resources are larger than the resource capacity of the standard virtual machine, the number of virtual machines each specified server can run will be less than assumed. In other words, reduce the number of virtual machines that run simultaneously, and allocate the freed resource capacity to other virtual machines. You can add as much resource capacity in excess of the standard virtual machine as corresponds to the reduced number of standard virtual machines, but you cannot exceed the capability of the specified server. For example, the disk throughput of the specified server can be calculated as 288 MB/sec. If you want to run virtual machines with higher performance (larger total disk throughput) than this, you cannot run the system using the specified server and shared storage in terms of performance.

On each specified server, share the following resource capacity among the virtual machines as the upper limit.

Hardware item of virtual machine: 1 CPU single configuration / 2 CPU single configuration / 2 CPU HA cluster configuration
- Number of CPU cores:
- Memory size (*1): 54 GB / 118 GB / 118 GB
- Hard disk size (*1): 3.2 TB / 5.4 TB / 4.8 TB (*2) (*3), 10.2 TB (*2) (*4), 15.1 TB (*2) (*5)
- Disk throughput: 144 MB/sec / 288 MB/sec / 288 MB/sec (*6)

*1: This does not include the amount necessary for the hypervisor to manage the virtual machines. Refer to Appendix A for details.
*2: Disk capacity of the shared storage, which is shared by all virtualization host computers that connect to the shared storage.
*3: Configuration 1 of SCv3020
*4: Configuration 2 of SCv3020
*5: Configuration 3 of SCv3020
*6: Ensure that the total throughput of the virtualization host computers in the HA cluster configuration does not exceed 625 MB/sec per shared storage.

Notes on the number of virtual machine cores
If you want to change the number of cores to two or more after creating a 1-core virtual machine and installing the guest OS, reinstall the guest OS after changing the number of cores of the virtual machine. Likewise, if you want to change a virtual machine with two or more cores to one core, reinstall the guest OS.

About the "1 virtual machine, 1 Yokogawa product" installation recommendation
There are two ways to implement two or more products on a single virtualization host computer:
- Install only one type of Yokogawa product on each virtual machine, and run the products on separate virtual machines.
- Place multiple Yokogawa products on one virtual machine, and run them on the same virtual machine.
In the virtualization platform, unlike the physical environment, you can maximize the lifecycle of the DCS system stations and benefit from improved product maintainability without increasing the footprint or placing products together. Therefore, we recommend the former installation method, that is, "1 virtual machine, 1 Yokogawa product".
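The rule that the virtual machines sharing a specified server must stay within its upper limits can be expressed as a simple fit check. This is a minimal sketch, assuming the 2 CPU single-configuration limits (118 GB of memory, 288 MB/sec of disk throughput) and modeling only those two indices; the function name is illustrative.

```python
# Hedged sketch: check whether a planned set of virtual machines fits within
# the resource upper limits shared on a specified server.
def fits_specified_server(vm_list, memory_limit_gb=118, disk_limit_mbps=288):
    """vm_list: iterable of (memory_gb, disk_throughput_mbps) tuples."""
    total_mem = sum(m for m, _ in vm_list)
    total_disk = sum(d for _, d in vm_list)
    return total_mem <= memory_limit_gb and total_disk <= disk_limit_mbps
```

With these limits, 18 standard virtual machines (4 GB, 16 MB/sec each) fit exactly; a 19th exceeds the disk throughput limit, illustrating why a larger-than-standard virtual machine must be traded against the count of others.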

Maximizing the DCS system station lifecycle and product maintainability
The combinations of product versions that can be placed together are specified for each product. Therefore, applying a patch or version upgrade to one product alone may not be sufficient. In addition, the maintenance procedure may become complicated due to the dependency relationships of products installed together. While "1 virtual machine, 1 Yokogawa product" installation is recommended, if you want to install plural products in the same virtual machine, refer to the basic policies for estimating resources below.

Basic policies for estimating resources of a virtual machine where multiple Yokogawa products are installed
Estimate the resources required for a virtual machine where multiple Yokogawa products are installed as follows.

Number of cores
Since the number of cores depends on whether the Yokogawa products installed on the machine are operated alternately or run concurrently, estimate the number of cores considering the combination of the Yokogawa products.
Example: HIS/ENG. The ENG function is not used while the machine is run as an HIS. Therefore, the number of cores should be the number of cores required for the HIS or the ENG function, whichever is larger.

Memory size
Estimate as follows:
Memory size = the largest among the memory sizes required for the Yokogawa products installed on the machine + half the memory size of each Yokogawa product that may operate concurrently
Examples of cases with the possibility of concurrent operation are background processing such as trend data collection and CAMS running on HIS, and Yokogawa products operating in collaboration.

Disk throughput
Among the Yokogawa products installed on the machine, take the disk throughput of the product requiring the highest disk throughput as the disk throughput of the virtual machine.
Hard disk space
Take the total of the disk space required by all the Yokogawa products installed on the machine as the disk space of the virtual machine.

Notes on summation of disk throughputs when a virtual machine has multiple virtual hard disks
Calculate the disk throughput per virtual machine by adding up the throughput of all the virtual hard disks that belong to the virtual machine. Since the upper limit of disk throughput is set for each virtual hard disk, add up the upper limit values set for the individual virtual hard disks, and take the total as the disk throughput of the virtual machine. When the disk throughput resources of the virtualization host computer are shared by the virtual machines, the calculation is done using the disk throughput after summation. For example, if one virtual machine has two virtual hard disks and their upper limits of throughput are 16 MB/sec and 32 MB/sec, then the disk throughput of this virtual machine is 48 MB/sec.
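The estimation policies above (max of cores, max plus half for memory, max of disk throughput, sum of disk space, and per-disk throughput summation) can be sketched as follows. This is an illustrative implementation; the product figures are supplied by the caller, and nothing here is a Yokogawa-published value.

```python
# Hedged sketch of the basic estimation policies for a multi-product VM.
def estimate_vm_resources(products, concurrent):
    """products: dicts with 'cores', 'memory_gb', 'disk_mbps', 'disk_gb'.
    concurrent: the products that may operate at the same time as the main
    one; each contributes half of its memory size (policy above)."""
    return {
        # cores and disk throughput: largest requirement among the products
        "cores": max(p["cores"] for p in products),
        "disk_mbps": max(p["disk_mbps"] for p in products),
        # memory: largest requirement plus half of each concurrent product
        "memory_gb": max(p["memory_gb"] for p in products)
                     + sum(p["memory_gb"] / 2 for p in concurrent),
        # disk space: total over all installed products
        "disk_gb": sum(p["disk_gb"] for p in products),
    }

def vm_disk_throughput(vhd_limits_mbps):
    """Sum the per-virtual-hard-disk throughput upper limits of one VM."""
    return sum(vhd_limits_mbps)
```

The worked example from the text (two virtual hard disks limited to 16 MB/sec and 32 MB/sec) gives a virtual machine disk throughput of 48 MB/sec.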

Resource Capacity Used by the Host OS

Single configuration:
- CPU cores: 2
- Memory size: 10 GB
- Disk size: 500 GB

HA cluster configuration:
- CPU cores: 4
- Memory size: 10 GB
- Disk size: 500 GB
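Combining the host OS capacity above with the installed memory of the specified servers gives the memory budget for virtual machines. A quick check, assuming only the 10 GB host OS share is subtracted (the hypervisor management overhead covered in Appendix A is ignored here):

```python
# Hedged sketch: memory left for virtual machines on a specified server is
# the installed memory minus the host OS share listed above.
HOST_OS_MEMORY_GB = 10  # both single and HA cluster configurations

def vm_memory_budget(installed_gb: int) -> int:
    return installed_gb - HOST_OS_MEMORY_GB
```

For the 64 GB (1 CPU) and 128 GB (2 CPU) servers, this yields 54 GB and 118 GB, consistent with the virtual machine memory upper limits given earlier in this chapter.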

Resource Capacity Used by Yokogawa System Products

Common

License manager
The license manager runs on a virtual machine with the specifications below.
Table VM requirements: License manager (CPU cores / memory / disk throughput / disk volume: Recommendation)
Remarks: When the license manager is installed in the same virtual machine as CENTUM VP / ProSafe-RS / PRM, follow each product's operating environment.

File server
The file server where the VP project and AD project are placed runs on a virtual machine with the specifications below.
Table VM requirements: File server (Recommendation)
Remarks: The specification is the same whether using system builders only, AD Suite only, or both system builders and AD Suite.

CENTUM VP

HIS (VP6H1100)
Table VM requirements: HIS (Recommendation: 2 (*1) (*2))
*1: When using the Multiple-monitor Support Package, 3 cores are required.
*2: When using the Long-term Data Archive Package, extend the size in accordance with the storage period.

The above resources assume that 1 graphic view is displayed per monitor. Therefore, when increasing graphic views, more cores and memory may be required. For example, regarding the number of CPU cores, use the following as a guide:
- HIS without Multiple-monitor Support Package, 1 graphic view: 2 cores
- HIS without Multiple-monitor Support Package, 2 to 5 graphic views: 3 cores
- HIS with Multiple-monitor Support Package, 1 to 4 graphic views: 3 cores
- HIS with Multiple-monitor Support Package, 5 to 12 graphic views: 4 cores
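The CPU core guide for HIS above can be captured as a small lookup function. The function name and structure are illustrative; only the core counts come from the bullet list, and view counts outside the listed ranges are not covered by the guide.

```python
# Hedged sketch of the HIS CPU core guide above.
def his_cpu_cores(graphic_views: int, multi_monitor: bool) -> int:
    if multi_monitor:
        # 1 to 4 graphic views: 3 cores; 5 to 12 graphic views: 4 cores
        return 3 if graphic_views <= 4 else 4
    # 1 graphic view: 2 cores; 2 to 5 graphic views: 3 cores
    return 2 if graphic_views <= 1 else 3
```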

HIS-TSE (VP6H1150)
Table VM requirements: HIS-TSE
- HIS-TSE 4 (*1) (*3)
- HIS-TSE 8 (*2) (*3)
*1: The number of clients that can be connected simultaneously is 4 or less.
*2: The number of clients that can be connected simultaneously is 8 or less.
*3: When using the Long-term Data Archive Package, extend the size in accordance with the storage period.

CAMS for HIS
Table VM requirements: CAMS for HIS (Recommendation)
- CPU (cores): 4 (+2)
- Memory (GB): 5 (+1)
- Disk throughput (MB/sec): 24 (+8)
- Disk volume (GB): 90 (+50)
Note: The values in parentheses are additions to the recommended settings of HIS.

ENG

ENG (VP6E5100)
Table VM requirements: ENG (Recommendation)

AD Suite
Table VM requirements: AD Suite (Recommendation)
Remarks: The above specification is applicable when either AD Organizer or AD Server is installed separately on the VM.

FCS simulator
The FCS simulator runs on a virtual machine with the specifications below.
Table VM requirements: FCS simulator
- Simulator x 1
- Simulator x 8
- High load for OTS (*1) (Simulator x 1)
- High load for OTS (*1) (Simulator x 8)
*1: The high load assumes the following: simulator at 10X speed, the marshaling function, and others.

SIOS
Table VM requirements: SIOS engineering function (Recommendation)
Remarks: When used with HIS in the same virtual machine, add 2 CPU cores to the recommendation of HIS.

Table VM requirements: SIOS (Recommendation)

ProSafe-RS

SENG
The following resources are required for SENG to run on a virtual machine. When co-resident with CENTUM VP software, allocate the maximum value of each required resource, not the total of each function's resources; however, the hard disk volumes should be added up. When placing the database of the Access Control Package or Access Administrator Package, an additional disk volume of 60 GB or more is required.

RS4E5000 Engineering Server Function
The RS4E5000 Engineering Server Function runs on a virtual machine with the specifications below.
Table VM requirements: Engineering Server Function (Recommendation)

RS4E5100 Safety System Engineering and Maintenance Function
The RS4E5100 Safety System Engineering and Maintenance Function runs on a virtual machine with the specifications below.
Table VM requirements: Safety System Engineering and Maintenance Function (Recommendation)
Remarks: When installing RS4E5100 in the same virtual machine together with RS4E5000, follow the operating environment of RS4E5000. However, the required quantity should be added to the hard disk volume.

RS4E2100 SOE Viewer Package / RS4E2200 SOE OPC Interface Package
The RS4E2100 SOE Viewer Package and RS4E2200 SOE OPC Interface Package run on a virtual machine with the specifications below.
Table VM requirements: SOE Viewer Package / SOE OPC Interface Package (Recommendation)
Remarks: When installing these packages in the same virtual machine together with RS4E5000 or RS4E5100, follow each operating environment. The hard disk volume also follows each operating environment.

SCS simulator
The operating environment of the SCS simulator is the same as that of the FCS simulator. For the resource specifications of the FCS simulator, refer to the CENTUM VP section above. When the SCS simulator runs in the same virtual machine together with RS4E5100, follow the larger of the operating environments of the SCS simulator and RS4E5100.

idefine
The following resources are required to run idefine.
Table VM requirements: idefine (Recommendation)
Remarks: The above resources include the resources required for the SQL Server used by idefine. Co-residence with CENTUM packages is prohibited. When co-resident with SENG, allocate the maximum value of each requested resource to the virtual machine, not the sum of the resources required by each function. However, since the hard disk figure is a resource value required to operate idefine, add up the required values.

SOE
The following resources are required to run SOE.
Table VM requirements: SOE (Recommendation) (*1)
*1: Extend it according to the database size required for operation.

Exaopc
The following resources are required to run the Exaopc (NTPF100) OPC server.
Table VM requirements: Exaopc (CAMS for HIS / Historical data storage / CPU (cores) / Memory (GB) / Disk throughput (MB/sec) / Disk volume (GB))
- No / No
- Yes / No
- No / Yes: / 48 (*1), 40 (*3)
- Yes / Yes: / 64 (*2), 90 (*3)
*1: The number of records is 5000 or less: 32, or less: 48.
*2: The number of records is 5000 or less: 48, or less: 64.
*3: Acquisition of 4000 items/second is assumed.
During steady operation, the CPU usage rate is low. However, the CPU usage rate becomes high when downloading project data from the CENTUM system or when acquiring historical data/messages. Therefore, the number of cores is specified as 2.

7.2.5 Exapilot
Exapilot adopts the number of procedures concurrently executable as an index of the scale of the application. The number of procedures concurrently executable can be increased by installing options. The relationship between the scale of the application and the number of procedures concurrently executable used in this section is shown below.

Table: Relationship between the scale of the application and the number of procedures concurrently executable
- Small: 1 (Standard edition only)
- Medium: 4 (Professional edition only)
- Large: 10 (Professional edition + 3 additional options of procedures concurrently executable)

Exapilot only

Exapilot server
When the Exapilot server is used on a virtual machine, allocate the resources shown in the table below in accordance with the application scale.
Table VM requirements: Exapilot server (Small / Medium / Large)

Exapilot client
When the Exapilot client is used on a virtual machine, allocate the resources shown in the table below in accordance with the application scale.
Table VM requirements: Exapilot client (Small/Medium, Large)
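The scale table above can be captured as a small lookup; the dictionary and function names are illustrative, and the values are taken directly from the table.

```python
# Hedged sketch: application scale to number of procedures concurrently
# executable, per the Exapilot scale table above.
EXAPILOT_SCALE = {
    "Small": 1,   # Standard edition only
    "Medium": 4,  # Professional edition only
    "Large": 10,  # Professional edition + 3 additional options
}

def exapilot_procedures(scale: str) -> int:
    return EXAPILOT_SCALE[scale]
```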

Exapilot co-resident with Yokogawa system products

Exapilot server
When the Exapilot server is used together with Yokogawa system products in a virtual machine, allocate the total of the resources of the Yokogawa system products and the resources shown in the table below in accordance with the application scale.
Table VM requirements: Exapilot server co-resident with Yokogawa system products (Small / Medium / Large)

Exapilot client
When the Exapilot client is used together with Yokogawa system products in a virtual machine, allocate the total of the resources of the Yokogawa system products and the resources shown in the table below in accordance with the application scale.
Table VM requirements: Exapilot client co-resident with Yokogawa system products (Small/Medium, Large)

7.2.6 AAASuite

This section shows the resource capacity required for AAASuite and its platform Exapilot to operate.

Master PC
When the Master PC of AAASuite is used on a virtual machine, allocate the resources shown in the table below in accordance with the application scale.

Table VM requirements: Master PC of AAASuite
Scale | CPU (Cores) | Memory (GB) | Disk IOPS (MB/Sec.) | Disk Volume (GB) | Remarks
Small (*1) |
Large (*2) |

*1: The number of procedures concurrently executable: 4. Basic functions only.
*2: The number of procedures concurrently executable: 6. Basic functions + options.

Recovery PC
When the Recovery PC of AAASuite is used on a virtual machine, allocate the resources shown in the table below regardless of the application scale.

Table VM requirements: Recovery PC of AAASuite
Scale | CPU (Cores) | Memory (GB) | Disk IOPS (MB/Sec.) | Disk Volume (GB) | Remarks

Client PC
When the Client PC of AAASuite is used on a virtual machine, allocate the resources shown in the table below regardless of the application scale.

Table VM requirements: Client PC of AAASuite
Scale | CPU (Cores) | Memory (GB) | Disk IOPS (MB/Sec.) | Disk Volume (GB) | Remarks

PRM

Resource Capacity Specifications per PRM package
The resource capacity specifications for virtual machines described below are per PRM package only and are based on supported configurations (e.g. number of devices, number of supported FCS/SCS, etc.).

PRM Server (PM4S7700, PM4S7701, PM4S7702)
The following table shows the PRM Server resource capacity specifications for the virtual machine.

Table VM requirements for PRM Server
Number of field devices | CPU (Cores) | Memory (GB) | Disk IOPS (MB/Sec.) | Disk Volume (GB) (*1) | Remarks
300 or less |
1000 or less |
3000 or less |
6000 or less |

*1: It is recommended to use the Database Maintenance Tool regularly to ensure sufficient hard disk availability. Refer to the table below for the required hard disk size for one year of operation based on the number of field devices supported.

Table PRM Server Device Database Capacity Specifications for One Year of Operations
Number of field devices | Device Database Capacity
300 or less | 600 MB
1000 or less | 2 GB
3000 or less | 6 GB
6000 or less | 15 GB

PRM Client (PM4S7710)
The following table shows the PRM Client resource capacity specifications for the virtual machine.

Table VM requirements for PRM Client
Number of field devices | CPU (Cores) | Memory (GB) | Disk IOPS (MB/Sec.) | Disk Volume (GB) | Remarks

Field Communications Server (PM4S7720)
The following table shows the Field Communications Server resource capacity specifications for the virtual machine.

Table VM requirements for Field Communications Server
Number of field devices | CPU (Cores) | Memory (GB) | Disk IOPS (MB/Sec.) | Disk Volume (GB) | Remarks
 | 4 | Refer to the table below | | |

Table Memory requirements for Field Communications Server
Connection | Memory (*1) (*2)
Connecting with CENTUM VP or ProSafe-RS | FCS/SCS 1-16 units: ( x number of FCS/SCS) MB or more recommended; FCS/SCS 17 units or more: ( x (number of FCS/SCS - 16)) MB or more recommended
Connecting with STARDOM | FCN/FCJ 1-16 units: ( x number of FCN/FCJ) MB or more recommended; FCN/FCJ 17 units or more: ( x (number of FCN/FCJ - 16)) MB or more recommended
Connecting via NI-FBUS (simplified system for Foundation fieldbus) | 256 MB or more required, 512 MB or more recommended
Connecting with simplified system for HART device or HART multiplexer | 256 MB or more required, 512 MB or more recommended
Connecting via CommDTM/GatewayDTM | 256 MB or more required; {30 + (CommDTM/GatewayDTM main memory) x (No. of nodes)} MB or more recommended

*1: The specified hardware requirements do not include the requirements for third-party CommDTM/GatewayDTM. Refer to the respective DTM documentation.
*2: The total memory requirement should be the sum of the memory requirements for each required function and connected system.

PRM Advanced Diagnosis Server (PM4S7740)
The following table shows the PRM Advanced Diagnosis Server resource capacity for the virtual machine.

Table VM requirements for Advanced Diagnosis Server
Number of field devices | CPU (Cores) | Memory (GB) | Disk IOPS (MB/Sec.) | Disk Volume (GB) (*1) | Remarks
300 or less |
1000 or less |
3000 or less |
6000 or less |

*1: It is recommended to use the Database Maintenance Tool regularly to ensure sufficient hard disk availability. Refer to the table below for the required hard disk size for one year of operation.
Below is the database capacity of the Device Diagnosis Data Historian hard disk requirement for one year of operation, based on the following assumptions:
- Ten numeric device parameter values per field device are acquired every 24 hours
- The result of one device diagnosis per field device is stored every 10 minutes

Table Device Diagnosis Data Historian Device Database Capacity Specifications for One Year of Operations
Number of field devices | Device Database Capacity
300 or less | 3 GB
1000 or less | 10 GB
3000 or less | 30 GB
6000 or less | 50 GB
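The capacity table above can be encoded as a simple tier lookup for planning estimates. A minimal sketch, assuming a tier-based lookup is acceptable for sizing; the function name and the linear scaling by years are illustrative, not part of the specification:

```python
# Sizing sketch: one-year Device Diagnosis Data Historian disk requirement,
# taken from the capacity table above. Tier boundaries and sizes are the
# table's values; the `years` scaling is an illustrative assumption.
_HISTORIAN_TIERS_GB = [(300, 3), (1000, 10), (3000, 30), (6000, 50)]

def historian_capacity_gb(num_devices, years=1):
    """Disk capacity (GB) to reserve for the given number of field devices."""
    if num_devices <= 0:
        raise ValueError("number of field devices must be positive")
    for limit, gb in _HISTORIAN_TIERS_GB:
        if num_devices <= limit:
            return gb * years  # table values are per year of operation
    raise ValueError("more than 6000 field devices is outside this table")
```

For example, 2500 devices fall in the "3000 or less" tier, so one year of operation requires 30 GB.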

PST Scheduler Server (PM4S7780)
The following table shows the PST Scheduler Server resource capacity specifications for the virtual machine.

Table VM requirements for PST Scheduler Server
Number of field devices | CPU (Cores) | Memory (GB) | Disk IOPS (MB/Sec.) | Disk Volume (GB) | Remarks
300 or less |
1000 or less |
3000 or less |
6000 or less |

Resource Capacity Specifications for combination of PRM packages
The resource capacity specifications described below are based on the combination of PRM packages to be installed and activated in one virtual machine.

Table VM requirements for Combination of PRM Packages
Number of field devices | CPU (Cores) (*1) | Memory (GB) (*2) | Disk IOPS (MB/Sec.) | Disk Volume (GB) (*3) | Remarks
300 or less |
1000 or less |
3000 or less |
6000 or less | | 32 | | |

*1: The maximum CPU capability requirement among the PRM packages to be installed and activated in the virtual machine, or higher.
*2: The total sum of the memory size requirements for the PRM packages to be installed and activated in the virtual machine, or the operating system requirement (whichever is higher), or more.
*3: The total sum of the hard disk requirements for the PRM packages to be installed and activated in the virtual machine, or higher. Refer to the General Specifications (GS) of the corresponding PRM packages for the hard disk requirement information.

Below are some restrictions for installing PRM packages in one virtual machine:
- It is recommended to set up a dedicated virtual machine for the Field Communications Server:
  - When connecting to more than 24 stations.
  - When supporting more than 3000 devices.
- It is recommended to set up a dedicated virtual machine for the PRM Advanced Diagnosis Server when more than 300 diagnosis modules are running simultaneously for 10 PRM Advanced Diagnostic Applications (PAAs).
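The combination rule in footnotes *1 to *3 can be sketched as follows. This is a hedged illustration: the dict layout, function name, and any sample figures are assumptions, not values from the PRM General Specifications.

```python
# Hedged sketch of footnotes *1 to *3: when several PRM packages share one
# virtual machine, CPU is the maximum requirement among the packages, memory
# is the sum of the package requirements (or the OS requirement, whichever is
# higher), and disk is the sum of the package requirements.
def combine_vm_requirements(packages, os_memory_gb=0):
    """packages: list of dicts with cpu_cores, memory_gb, and disk_gb keys."""
    packages = list(packages)
    cpu = max(p["cpu_cores"] for p in packages)           # *1: maximum
    memory = max(sum(p["memory_gb"] for p in packages),   # *2: total sum ...
                 os_memory_gb)                            # ... or OS requirement
    disk = sum(p["disk_gb"] for p in packages)            # *3: total sum
    return {"cpu_cores": cpu, "memory_gb": memory, "disk_gb": disk}
```

Per-package figures must still be taken from the General Specifications of each PRM package.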

8. Functional Specification

8.1 Vnet/IP Communication Software
The Vnet/IP communication software is necessary for the guest OS on the virtualization host computer to perform Vnet/IP communication without using the Vnet/IP card. The product name given to this software is Vnet/IP Interface Package. Refer to Chapter 11 Vnet/IP Communication Software for details.

8.2 Hardware Status Monitor
In the virtualization platform, the NMS monitors the hardware status. By monitoring periodically, the NMS can detect hardware abnormalities and network link disconnection of the virtualization host computer and the shared storage, and can collect data such as the size of free disk space from the host OS. If a hardware administrator or a plant operator wants to know the hardware status of the virtualization platform, check the hardware status with the NMS. To monitor alarms on the HIS, it is necessary to send messages to the HIS from the NMS. The program for sending messages to the HIS is provided as a Tokuchu (custom-order) program.

Figure F080201E.ai: The NMS collects data from the host OS via WMI, and monitors the network switch for the storage network (via its management console), the storage controller of the shared storage, and the remote management controller of the physical server (CPU, memory, RAID, disk, PSU, fan, temperature, etc.) via SNMP. The OPC connection from the NMS is out of the scope of this document.

Remote management controller
The remote management controller is built into each server, and its purpose is to check the BIOS settings and hardware status of the server via the network. For Dell servers, iDRAC (Integrated Dell Remote Access Controller) is a dedicated controller for remote monitoring. The NMS acquires the hardware status of the physical server via the remote management controller.

Storage controller
The storage controller receives I/O requests from the server and efficiently reads and writes the disks in the shared storage. The hardware status of the shared storage can be confirmed via the storage controller. The NMS acquires the hardware status of the shared storage via the storage controller.

Management console (network switch for storage network)
The management console is the management interface used for setting and checking the status of the network switch. The NMS obtains the status of the network switch via the management console.

8.2.1 Supported Interface
This section describes the interfaces that can be used for patrol monitoring of each device of the virtualization platform from the NMS.

WMI
WMI can be used to collect performance data from the host OS. When this interface is used, create a user account for WMI in the host OS and make it belong to the following account groups.
- HVS_WMI_MONITOR
- Performance Log Users
- Performance Monitor Users

Also, set exception permissions on the host OS firewall.

Activation of rules
Item | Name | Settings
Inbound Rules | Windows Management Instrumentation (WMI-In) | Enabled
Inbound Rules | Windows Management Instrumentation (DCOM-In) | Enabled

The HVS_WMI_MONITOR group should be created when setting up the OS environment of the host OS. For the account group settings, refer to IM 30A05B30-01EN Virtualization Platform Security Guide.

SNMP
SNMP can be used to monitor the hardware status of the virtualization host computer and the shared storage. When this interface is used, apply SNMP v3.

8.2.2 Detectable Hardware Abnormality
The following shows the hardware abnormalities that can be detected by the virtualization platform.

Virtualization Host Computer
Monitor the following as the hardware status of the virtualization host computer.

Hardware | Detection item | Remarks
CPU | CPU status |
MEMORY | Memory status |
HDD | HDD failure |
RAID controller | Battery voltage status of RAID card |
NIC | Network port link down |
TEMP | Temperature abnormality inside the enclosure |
FAN | Stop of FAN |
RTC | Battery voltage status |
PSU | Stop of power supply unit |

Shared Storage
Monitor the following as the hardware status of the shared storage.

Hardware | Detection item | Remarks
Storage controller | Stop of controller, low battery voltage of RAID card |
HDD | HDD failure |
TEMP | Temperature abnormality inside the enclosure |
FAN | Stop of FAN |
PSU | Stop of power supply unit |

Network Switch for Storage
Monitor the following as the hardware status of the network switch for storage.

Hardware | Detection item | Remarks
Switch port | Network port link down |
Fan | Stop of FAN |
PSU | Stop of power supply unit |

8.3 TCP/UDP Port
For the virtualization platform, the management network and the plant information network must be connected for the following reasons:
- Sharing of the SNTP server between the guest OS and the host OS
- Sharing of the domain controller between the guest OS and the host OS
- Sharing of the NMS between the guest OS and the host OS

The management network and the plant information network are connected via the router; set an access control list (ACL) to secure the network. Configure the ACL between the management network and the plant information network so that the TCP/UDP ports used for the above purposes are not blocked.
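The ACL requirement above can be sketched as a simple allow-list check on the router. The services follow the three sharing reasons listed (SNTP, domain controller, NMS); the concrete port numbers below are the standard well-known ports for those services and are illustrative assumptions, not the product's definitive list, so confirm the actual ports against each product's documentation.

```python
# Hedged sketch of an ACL between the management network and the plant
# information network. Ports shown are standard well-known numbers for the
# shared services named in this section; verify against product documentation.
ALLOWED = {
    ("udp", 123),                 # SNTP/NTP time synchronization
    ("udp", 53), ("tcp", 53),     # DNS (domain controller)
    ("tcp", 88), ("udp", 88),     # Kerberos authentication
    ("tcp", 389), ("tcp", 636),   # LDAP / LDAPS (domain controller)
    ("udp", 161), ("udp", 162),   # SNMP polling and traps (NMS)
}

def acl_permits(protocol, port):
    """True if the router should forward this traffic between the networks."""
    return (protocol.lower(), port) in ALLOWED
```

Everything not explicitly listed is blocked, which matches the intent of securing the boundary while keeping the shared services reachable.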

9. Thin Client
This chapter describes the features of the thin client in the virtualization platform, the devices required for the system structure, and so on.

9.1 Overview

Positioning
The following figure shows the position of the thin client in the virtualization platform.

Figure System structure of virtualization platform and scope of this document (F090101E.ai): Thin clients with monitors in the operator room (the scope of this chapter) connect over the remote UI network to virtual machines (HIS with the Vnet/IP Interface Package on the virtualization guest OS) running on the virtualization software and server hardware in the server room. The figure also shows the plant information network (Ethernet) with a physical HIS, domain controller, NMS, and SNTP server; the management network with a router, NMS, domain controller, and KVM server console; the storage network and HA-cluster network with shared storage; and Vnet/IP connecting to controllers and field equipment (FCS). Equipment installed at Level 3 can also be used.

This section mainly describes the thin client which is installed in the operator room. Refer to Chapter 2 for details on the overall configuration and settings of this platform.

Functional Overview
You can remotely connect from the thin client to a virtual machine on the virtualization host computer, and display and operate the applications on the virtual machine through the network. You can connect one thin client to a specific virtual machine (one-to-one connection), or connect one thin client to multiple virtual machines simultaneously and toggle between the displays to operate them (one-to-many connection). However, there are conditions to be met when connecting to multiple virtual machines simultaneously. For more information, refer to Connecting Thin Client to multiple virtual machines simultaneously later in this chapter.

Figure Connecting Thin Client to a specific virtual machine (F090102E.ai): each thin client on the client side connects over the remote UI network to one dedicated virtual machine on the virtualization host computer on the server side.

Figure Connecting Thin Client to multiple virtual machines (F090103E.ai): one thin client connects simultaneously over the remote UI network to multiple virtual machines on the virtualization host computer on the server side.

9.2 Specifications

9.2.1 Thin Client Specifications
The following table describes the specifications of the thin client used in this platform.

Table Thin Client Specifications
Item | Descriptions | Note
OS | Windows 10 IoT Enterprise; Wyse ThinOS 8.4 or later | Dell Wyse 7020/7020 Quad Display (Windows); Dell Wyse 3040 (ThinOS)
Storage size | Windows 10: 64 GByte or above; Thin OS: not specified |
Display output | Maximum of 4 screens | The maximum number of screens is as specified in the hardware specification document of the thin client.
Method of connection to server | Ethernet (can be redundant, Windows version only); Microsoft Remote Desktop Protocol (RDP) is used as the remote connection protocol. | Use the thin client network to guarantee the communication bandwidth of RDP.
USB devices that can be used from a virtual machine by connecting to a thin client | Operation keyboard, speakers, USB storage (IT security settings need to be changed.) | When USB devices other than USB storage are used, the Remote Desktop Session Host role service needs to be installed in the virtual machines. Use 2 USB ports.
Operation keyboard | The following types of keyboards can be connected: AIP830, AIP831 |
Sound output | Can be sent out from the following devices: operation keyboard, USB speakers (alternative method due to restrictions) | Refer to the restrictions mentioned below for details.
Access control to virtual machines | You can set connect/disconnect for the following items: IP address of the client, user name of the remote connection | Use the OS firewall feature of the virtual machines.
Remote UI network diagnosis | Detects defects through dedicated diagnosis software and displays a notification message on the screen. Defects are detected 3 seconds after the occurrence of the defect. | Only the Windows OS version can be detected. Yokogawa provides the diagnosis software. For details, refer to Network Diagnosis Software later in this section.
IT Security | Strengthen security with IT Security Tool. | Application of the policy through the tool is performed only for the Windows OS version.

The operation keyboard and USB speaker can be used with the Windows version thin client only.

HIS

High availability feature
If a defect occurs in the remote UI network, remote desktop communication ceases and you cannot operate or monitor from the thin client. As a counter-measure, the availability of the network is increased by the following methods.

1. Making the network path redundant
Make the network path between the thin client and the virtualization host computer redundant so that, when there is a defect in the network on one side, you can switch to the network on the other side and connect to the remote desktop. In general, as the thin client has only one network interface, make the network interface redundant by using a USB Ethernet adapter. Making the network redundant and switching the network on the virtualization host computer side are carried out through NIC teaming. Refer to the Virtualization Platform External Specification for Server document for details on NIC teaming. IP addresses in the same subnet are allotted to the 2 network interfaces of the thin client. In general, communication is carried out from the IP address on one side, and if there is any defect, communication is carried out from the IP address on the other side. At that time, the user needs to close the remote desktop connection screen that appeared before the defect occurred and connect remotely to the virtual machine once more.

Figure Making the network path redundant (F090201E.ai): the thin client reaches the HIS virtual machines on the virtualization host computer via two paths (Path1 and Path2) through separate L2 switches.

2. Installing multiple Thin Clients
Multiple thin clients are installed and connected to different L2 switches. When there is a network error, the remote desktop connection of the thin client on one side ceases, but since there is no effect on the thin client on the other side, operation and monitoring can continue there.

Figure Installing multiple Thin Clients (F090202E.ai): multiple thin clients (each with a speaker and operation keyboard) connect through different L2 switches to multiple HIS virtual machines on the virtualization host computer.

Further, methods 1 and 2 mentioned above can be combined.

Figure Installing multiple Thin Clients and redundant usage of path (F090203E.ai): multiple thin clients, each with redundant network paths, connect through different L2 switches to multiple HIS virtual machines on the virtualization host computer.

Network Diagnosis Software
If there is any abnormality in the path between the remote UI network and the virtual machines, the remote desktop screen freezes. Sometimes it might be difficult to determine whether this freezing is due to an abnormality in the network path or just because there is no change in the screen image. To overcome this, network diagnosis software is provided to notify the user when the screen freeze is due to a network error. This software can run only on thin clients whose operating system is Windows; it cannot be installed on the Thin OS version of the thin client. Run the OS system diagnosis utility to determine if there is any abnormality on the network path which is causing the screen freeze. For details about the System Diagnosis Utilities, refer to Section 6.1.

< Features >
- Structured with a service that monitors RDP communication and a program that notifies the user about network abnormalities. The service monitors RDP communication and starts the notification program when an abnormality is detected.
- The notification program notifies the user about the network issue through a dialog box as shown in the figure below. If RDP communication is interrupted for 3 seconds, it is judged to be a network issue.
- If connected to multiple virtual machines, this program notifies when there is a defect in any of the communications.
- A dialog box also appears when a user closes the remote desktop window or when the remote connection cannot be continued due to an error of the virtual machine.

Figure Notification dialog box when network defect is detected (F090204E.ai)
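The 3-second judgment rule above can be sketched as a simple watchdog. This is an illustrative model of the diagnosis software's behavior, not the software itself; the function names are hypothetical.

```python
# Hedged sketch of the diagnosis rule: if RDP traffic is interrupted for
# 3 seconds, judge it a network issue and notify. With one-to-many
# connections, any interrupted session triggers the notification.
NETWORK_ISSUE_THRESHOLD_S = 3.0  # per the specification above

def is_network_issue(last_rdp_packet_time, now):
    """True when silence on an RDP session reaches the threshold."""
    return (now - last_rdp_packet_time) >= NETWORK_ISSUE_THRESHOLD_S

def any_connection_failed(last_packet_times, now):
    """True if ANY of the simultaneously connected sessions is interrupted."""
    return any(is_network_issue(t, now) for t in last_packet_times)
```

A screen that simply shows no change keeps producing RDP traffic, so the watchdog stays quiet; only genuine interruption of the communication crosses the threshold.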

Connecting Thin Client to multiple virtual machines simultaneously (One-to-many connection)
You can connect one thin client to multiple virtual machines remotely, switch between the displays of these machines, and operate them. USB devices connected to the thin client are used by one dedicated virtual machine; they cannot be used from multiple virtual machines simultaneously. The available USB devices are described in the Thin Client Specifications table under USB devices that can be used from a virtual machine by connecting to a thin client; however, USB storage is not included there. Only USB storage can be used from multiple virtual machines. For example, when virtual machine 1 uses an operation keyboard and a USB speaker, even if virtual machine 2 is in operation, only virtual machine 1 can operate the operation keyboard and enable audio output from the USB speaker. The USB devices should be used carefully so as not to lead to operational errors. You can determine which virtual machine uses a USB device by configuring the remote connection settings. You must specify the combination of virtual machine and USB device at the configuration of the remote connection. If you want to use a USB device currently in use in a different virtual machine, you must disconnect the USB device from the virtual machine that is currently using it and then connect it to the other virtual machine.

CAUTION
Simultaneous connection to multiple virtual machines consumes a large amount of memory resources of the thin client. Lacking resources may cut remote connections. Confirm the memory usage rate of the thin client and reduce the number of simultaneous connections. The number of simultaneous connections and the memory usage rate for operation will be decided at the discretion of the JOB.

Monitor Specifications
This is based on the operation environment of the target application. Use the monitor within the hardware specifications of the target thin client. For details about the hardware specifications, refer to Section 9.2.2.

Cautions when connecting monitor
Connect the DVI terminal of the monitor and the DVI terminal or DisplayPort terminal of the thin client through a DVI-D cable. If you are using the DisplayPort terminal of the thin client, use a DisplayPort-DVI converting adapter to connect to the monitor. When using DisplayPort to connect, if the monitor power is turned off, the thin client cannot recognize the monitor. The screen position of the monitor also changes and does not return to its correct position even when the monitor is switched on again.

Restrictions
There could be a memory leak in the RDP client due to existing defects in Windows. This defect occurs when audio is replayed in virtual machines. The frequency of this defect differs depending upon the type of audio and the settings of the remote desktop. Hence, use the following methods to run applications requiring audio playback.
- If using an operation keyboard: Set the [Buzzer replacement] setting of HIS to [Operation keyboard] to enable audio output from the operation keyboard.
- If not using an operation keyboard: Connect a USB speaker to the thin client to enable audio output from the USB speaker.
- If audio output is not required: When connecting to the remote desktop, set [Remote audio playback] to [Do not play].

If this defect occurs, close the window of the Remote Desktop Client and configure the remote settings again.

TIP
This defect occurs when the operating system of the thin client is Windows 10. A correction in Windows 10 LTSB is planned. A correction in Thin OS has been made in system version 8.4_112.

9.2.2 Line-up of Thin Client
The following thin clients can be used as the standard thin client in the virtualization platform. Select one of them depending upon the project requirements.
- Dell Wyse 3040
- Dell Wyse 7020
- Dell Wyse 7020 Quad Display

The characteristics of each type are explained in the following table.

Table Characteristics of thin client
Item | 3040 | 7020 | 7020 Quad Display
OS | Thin OS 8.4 | Windows 10 IoT Enterprise 2015 LTSB | Windows 10 IoT Enterprise 2015 LTSB
Maximum display output | 2 screens (DisplayPort x2; convert to DVI and connect) | 2 screens (DVI x1, DisplayPort x1) | 4 screens (DVI x1, DisplayPort x3)
Network redundancy | No | Yes (with USB Ethernet adapter) | Yes (with USB Ethernet adapter)
Security | Yes | Partly Yes | Partly Yes
No. of USB ports | 4 | 6 | 6
Local user settings | No | Possible | Possible
Windows domain environment | No | Possible | Possible
Installation of network diagnosis feature | No | Yes | Yes
Restrictions (RDP is disconnected due to memory leakage in RDP client) | No | Yes | Yes
Firmware management using USB memory | Yes | Yes | Yes
Firmware update and OS configuration using FTP server | Yes | No | No
Firmware update and OS configuration using management server | No | Yes | Yes

Refer to the links below for the individual specifications of each thin client.
- Dell Wyse 3040
- Dell Wyse 7020
- Dell Wyse 7020 Quad Display

TIP
Since the 3040 runs a dedicated OS for remote communication, its security risk is small. Since the 7020/7020QD are based on Windows OS, their security risk is the same as that of a general PC. IT security is configured in this platform to handle these security risks.

Other Cautions

HIS
When connecting an operation keyboard or USB speakers to the thin client, add the Remote Desktop Session Host role service in the virtualization guest OS. For the procedure, refer to the IM of the system products (e.g. IM 33J01C10-01EN, etc.). When connecting an operation keyboard to the thin client, install a driver for the operation keyboard into the virtual machine. At this time, you need to connect the operation keyboard to the virtualization host computer and install the driver for the operation keyboard into the virtualization host OS. Do not enable the auto logon of HIS (because HIS starts automatically when the power is turned on). When a remote connection is established after HIS has started, the screen might not be displayed properly depending on the number of displays and the display resolution. Otherwise, after the power of the thin client is turned on, it might connect to the virtual machine automatically and HIS might start automatically.

SENG
Separate settings are required to use the idefine dongle. Refer to Section 13.3 idefine of ProSafe-RS for details.

Common applications (Security)
As per the security policies, the following features cannot be used with the standard settings. If you want to use these features, change the security settings by following the steps mentioned in this manual.
- Data copying between USB storage connected to the thin client and a virtual machine. Refer to Section 4.4 for data copy.
- Auto logon when connecting remotely (a feature that logs on to the virtual machine that is already specified, by automatically opening the remote desktop when logged on to the thin client).

When taking out data on a virtual machine, use external storage connected to the virtualization host computer server rather than storage connected to the thin client. In this case, you must temporarily cancel the security settings for the virtual machine and the virtualization host computer.
Others
When the CPU load of a virtual machine reaches 100%, the remote desktop might get disconnected. If the usage rate of virtual memory increases and free memory space is exhausted, the remote desktop might get disconnected.

Specification of simultaneous connection to virtual machines
By default, two sessions can be connected to a virtual machine simultaneously. If you build connections exceeding the maximum number of sessions or connections, whether to allow logon depends on the settings of the virtual machines and the connection users. The following table shows the behaviors when the default is set and when the settings are changed. If each product gives instructions about the settings, follow them to configure the settings.
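The admission behavior described above can be sketched as a simplified check. This is an illustrative model only (the function names are hypothetical, and confirmation-dialog and session-takeover nuances are omitted); the actual decisions are made by Windows Remote Desktop Services according to the settings in the table.

```python
# Illustrative model of session admission on a virtual machine. Without the
# RD Session Host role, Windows allows 2 simultaneous sessions by default;
# with the role installed, the Local Group Policy connection limit applies.
def max_sessions(rdsh_installed, configured_limit=None):
    if not rdsh_installed:
        return 2                  # OS default without the RDSH role service
    # With the role, the "restrict connections" policy value N applies
    # (the table notes the policy default is 1 session).
    return configured_limit if configured_limit is not None else 1

def admit(user, active_users, limit, one_session_per_user=True):
    """Decide the outcome of a new remote connection by `user` (simplified)."""
    if one_session_per_user and user in active_users:
        return "take over existing session"  # previous display is interrupted
    if len(active_users) >= limit:
        return "rejected"                    # previous connections continue
    return "new session"
```

For the precise behavior in each configuration, including when the confirmation dialog appears, refer to the table.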

Table Windows settings, number of simultaneous connections, and whether to allow logon

Columns: RD session host installation (*1) | Restriction on maximum number of connections (*2) | Restriction on sessions per user (*3) | Number of simultaneously connectable sessions | Number of sessions per user | Logon operation when connecting with a user name different from that of the current connection | Logon operation when connecting with the same user name as that of the current connection | Remarks

Row 1: No | N/A | Enabled / Non-configuration (*4) | 2 sessions | 1 | The confirmation dialog box appears in the client of the current connection. After the OK button is clicked or a certain time elapses, the display of the previous connection is interrupted and the display of the next connection appears. The session of the previous connection continues, but the display is not shown. | The display of the previous connection is interrupted and the display of the subsequent connection appears. The confirmation dialog box does not appear in the client of the current connection. | OS default settings

Row 2: No | N/A | Disabled | 2 sessions | 2 | Same as above | The session of the previous connection continues and the display appears during the subsequent connection. |

Row 3: Yes | Enabled (N sessions; 1 session by default) (Disabled / Non-configuration is not supported) | Enabled / Non-configuration | N sessions (*5) (the setting count of *2) | 1 | (When the session of the subsequent user continues) The subsequent user can connect. The previous connection also continues; in this case, more than N sessions can be connected. (When the session of the subsequent user does not continue) The subsequent connection is rejected and the previous connection continues. | The display of the previous connection is interrupted and the display of the subsequent connection appears. The confirmation dialog box does not appear in the client of the current connection. |

Row 4: Yes | Enabled (N sessions; 1 session by default) (Disabled / Non-configuration is not supported) | Disabled (*4) | N sessions | N | The subsequent connection is rejected and the previous connection continues. | The subsequent connection is rejected and the previous connection continues. |

*1: Install the Remote Desktop Session Host role service.
*2: Set the Local Group Policy to restrict connections.
*3: Set the Local Group Policy to restrict Remote Desktop Services users to one Remote Desktop Services session.
*4: The initial value of the OS is Non-configuration. This behavior is decided by a registry value; the default registry value is Enabled.
*5: The Group Policy settings of the Domain Controller take priority in the Windows domain environment.

10. IT Security
This chapter describes the general overview and specifications of IT security for the virtualization platform. For more details, refer to IM 30A05B30-01EN Virtualization Platform Security Guide.

10.1 Overview
Security settings for Yokogawa IA system products that run on virtual machines are performed by the IT security tool corresponding to each product. That is the same as the security measures on real machines. However, in systems using the virtualization platform, there are components that require unique security measures. IT security focused on virtualization is explained below.

10.2 Specification

Target Components
The following IT security settings are applied to the components of the virtualization platform.

Table Object of IT security settings
Target components | IT security provided by virtualization platform | IT security provided by Yokogawa system products | Remarks
Host OS | Yes | No |
Guest OS | No | Yes |
Domain controller | Yes (*1) | Yes (*1) |
Thin client (Windows 10) | Yes | No |
Thin client (Thin OS) | No | No | Settings are not available because of the non-Windows OS.

*1: You can use the domain controller for Yokogawa system products instead of using the virtual management domain controller that is dedicated to the virtualization platform. In such a case, you must use the IT security settings for the domain controller of the product. You do not have to use the IT security settings for the virtual management domain controller.

IT Security Tool
IT Security Tool is not installed into the target components. You must start IT Security Tool from the installation medium of the virtualization platform. You must install the distribution packages that are required to execute IT Security Tool beforehand. You must connect a USB optical drive for a thin client without an optical drive. (*1) The log file or files to maintain security settings are generated in the target components.
*1: You can use a USB optical drive even if you select "Applying the StorageDevicePolicies function" or "Disabling USB storage device" in the IT security settings.

Relation with IT security settings in the guest OS

You can freely combine the IT security settings for the virtualization platform with the IT security settings in the guest OS (IT security version, security model, and user management method).

IT security version

Only IT security version 2.0 is available for the virtualization platform.

Security model

Only one type of security model is provided for the virtualization platform.

User management methods

The IT security settings for the virtualization platform are not classified according to user management methods (standalone, combination, and domain management).

11. Vnet/IP Communication Software

This chapter describes the general overview and specifications of Vnet/IP communication for the virtualization platform. For more details, refer to IM 30A05B20-01EN Virtualization Platform Setup.

11.1 Overview

Figure  Components of the Vnet/IP system configuration (F110101E.ai: console type HIS, virtualization host computers running virtualized HIS/ENG, PRM Field Communication Server, and Exaopc, Vnet/IP domains connected via L3 switches and a boundary router, a V net domain via a V net router, and generic Ethernet devices; figure omitted)

Virtualized Vnet/IP stations running in the virtual environment are supported as components of the Vnet/IP system configuration. A virtualized Vnet/IP station performs Vnet/IP communication using a general-purpose NIC instead of the dedicated communication card VI701/VI702 (hereinafter referred to as VI70x) that performed Vnet/IP communication in the past, and runs system products such as HIS, ENG, PRM, and Exaopc in the virtual environment. Dedicated software is required to implement Vnet/IP communication using a general-purpose NIC. In this chapter, this dedicated software is referred to as Vnet/IP communication software. Vnet/IP communication software is a group of software programs for Vnet/IP communication that operate within a virtual machine (VM) created on the virtualization platform. The Vnet/IP communication software is included in the installation media of the following products:
- CENTUM VP R or later
- ProSafe-RS R or later
- Exaopc R or later
- PRM R or later

11.2 Specification

The summary of the specification of the Vnet/IP communication software for the virtualization platform is as follows.

(1) Connection to existing Vnet/IP network
Any domain can be connected. However, stations equipped with Vnet/IP firmware in the domain to which the virtualized Vnet/IP stations are connected must be updated to Vnet/IP firmware Rev. 28 or higher, and WAC router firmware to Rev. 9 or higher. The virtualized Vnet/IP station and the redundancy platform for computer (UGS2) must be connected to separate domains.

(2) Communication range
The communication range is the same as the Vnet/IP communication range of a conventional real machine station. Communication with the V net via the bus converter is possible.

(3) Communication function
The following communication is not supported.
(A) Sending and receiving of link transmission (scan transmission) at the virtualized Vnet/IP station. Link transmission between controllers is still possible as usual. When accessing the global switch, GET communication should be used. (*1)
(B) Vnet/IP open communication
(C) Wide area mode for ProSafe-RS R2.02 or later
(D) Narrowband mode for ProSafe-RS R3.02 or later
(E) Co-residence with HIS and SOE server for CENTUM VP
*1: Inter-virtual domain link transmission is also included.

(4) Network specification
The communication performance in terms of product specifications is the same as the performance allowed for each product with VI702.

(5) Restrictions on the operational environment
A network in the range of /16 cannot coexist with the virtualized Vnet/IP station. Because software other than that acknowledged by Yokogawa may conflict with the Vnet/IP function and prevent it from performing properly, co-residence with such software is prohibited.

(6) Vnet/IP setting
Domains and stations are set up by using the Vnet/IP interface management tool.

12. Appendix A: Resource Capacity

12.1 Server Resource Capacity

In order to estimate the total resource capacity of the server, you must estimate the individual resource capacity of each part that runs on the server and total them. This section describes the estimation of the individual resource capacity of each part of the virtualization host computer.

Figure  Resource capacity of the virtualization host computer (F120101E.ai: the manager (host OS), the clustering function (host OS), and the virtual machines on the physical server each contribute an individual resource capacity to the total resource capacity; figure omitted)

Host OS

This subsection describes the resource requirements of the host OS. These requirements do not include the resources of the guest OS. The resource requirement assumes only the following role in the host OS:
- Virtual machine control (hardware control, virtual machine management)

Single Configuration

The resources of the host OS in a single configuration are as follows.

Table  Resources of single configuration
- CPU
  Requirements: Intel Xeon E3/E5-V4 family or later; CPU speed not less than 2.4 GHz; 2 or more physical cores
  Remarks: The CPU family must be Broadwell or later (PREFETCHW instruction support). The required OS speed is 1.4 GHz or higher, but it must match the guest OS.
- Memory size
  Requirements: Capacity 10 GB or more, with ECC
  Remarks: 4 GB + host reservation size
- Hard disk capacity
  Requirements: 50 GB + (memory size) x 2.0 or more (*1); connection type SAS; 10 krpm or more; RAID-1
  Remarks: Core dump (= memory size), page file (= 1.0 x memory size), OS area, and temporary area (50 GB)
- Network
  Requirements: Number of ports: 1 (1 Gbps x 1)
  Remarks: Breakdown of ports: management network

*1: The memory size is the size of the memory area that the virtual machines do not use within the mounted memory of the virtualization host computer.

HA Cluster Configuration

In the case of an HA cluster configuration, the host OS resources are as follows.

Table  Resources of HA cluster configuration
- CPU
  Requirements: Intel Xeon E3/E5-V4 family or later; CPU speed not less than 2.4 GHz; 4 or more physical cores
  Remarks: Two cores are added compared with the single configuration because disk protocol processing for shared storage and failover operation are added.
- Memory size
  Requirements: Capacity 10 GB or more, with ECC
  Remarks: 4 GB + host reservation size
- Hard disk capacity
  Requirements: 50 GB + (memory size) x 2.0 or more (*1); connection type SAS; 10 krpm or more; RAID-1
  Remarks: Core dump (= memory size), page file (= 1.0 x memory size), OS area, and temporary area (50 GB)
- Network
  Requirements: Number of ports: 5 (1 Gbps x 2 ports, 10 Gbps x 2 ports, 4 Gbps x 1 port)
  Remarks: Breakdown of ports: management network (1 Gbps), HA cluster network (1 Gbps), live migration (4 Gbps), storage network (10 Gbps x 2)

*1: The memory size is the size of the memory area that the virtual machines do not use within the mounted memory of the virtualization host computer.

Resource Capacity of Virtual Machine

This section explains the precautions when estimating the resource capacity of the physical server from the virtual hardware resource capacity of the virtual machine. The resource capacity available to the guest OS on the virtual machine and the resource capacity actually required by the virtualization software to manage and control the virtual machine differ slightly, because the virtualization software adds overhead resource capacity to manage and control the virtual machine. In particular, memory size and hard disk capacity must have a margin. The virtualization software vendor does not disclose how much surplus is required as overhead resource capacity. For the virtualization platform, the resource capacity of the selected product server has been designed based on calculations like those shown in the following table.
Table  Resource capacity of the virtual machine
(each entry lists: request value when creating a virtual machine / (reference) request value when estimating the physical server)
- Virtual machine generation: 2nd generation
- Number of processor cores (*1): 2 / count as the number of physical cores
- Memory size: 4 GB / 4.4 GB (*2)
- Hard disk capacity: 80 GB / 96 GB (*3) (*4)
- Number of network cards: 4 (*5) / 4 (*6)

*1: The speed of the physical server shall be 2.4 GHz or more.
*2: The overhead due to virtual machine control is calculated as 10 percent of the request value when creating the virtual machine.
*3: The overhead due to virtual machine control is calculated as 20 percent of the request value when creating the virtual machine.
*4: When taking checkpoints, add the request value when creating the virtual machine multiplied by the number of generations.
*5: Plant information network / remote UI network / Vnet/IP (BUS 1/2): four lines in total.
*6: Shared with other virtual machines. Calculate the necessary network bandwidth and determine the actual number.

To calculate the overhead in the table above, we assume that the virtualization software simply runs (controls) the virtual machine.
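As a quick sanity check, the overhead factors from the table notes can be applied in a short sketch. This is a hypothetical helper, not a product tool; the 10%/20% factors come from notes *2 and *3, and the checkpoint term from note *4.

```python
def physical_request(vm_memory_gb, vm_disk_gb, checkpoint_generations=0):
    """Estimate physical-server request values from the request values
    used when creating a virtual machine (table notes *2, *3, *4)."""
    memory_gb = vm_memory_gb * 1.10   # +10% virtual machine control overhead (*2)
    disk_gb = vm_disk_gb * 1.20       # +20% virtual machine control overhead (*3)
    disk_gb += vm_disk_gb * checkpoint_generations  # checkpoint generations (*4)
    return memory_gb, disk_gb

# The standard virtual machine from the table: 4 GB -> 4.4 GB, 80 GB -> 96 GB
memory_gb, disk_gb = physical_request(4, 80)
```

For the standard virtual machine this reproduces the reference values in the table (4.4 GB of memory and 96 GB of disk).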

Total Resource Capacity of Server

We determine the hardware specifications of the physical server of the virtualization host computer by summing up the request values, when estimating the physical server, of the host OS and the guest OSes. However, the following matters are not considered in this section:
- Backup of the server/virtual machines
- Virtual machine snapshots

CPU

Single Configuration

The number of cores is determined so that the total number of physical CPU cores of the physical server that is to run as a virtualization host computer satisfies the following condition:

(Total number of physical CPU cores in the physical server) ≥ (number of cores of the host OS in single configuration) + Σ (number of virtual cores of each virtual machine)

Figure  The number of CPU cores in the single configuration (F120102E.ai: the virtual processors of the manager (host OS) and the virtual machines map onto the physical processors of the physical server; figure omitted)

HA Cluster Configuration

N:1 standby configuration (number of virtualization host computers is N+1)

This is the case when configuring a cluster with one standby virtualization host computer for N active virtualization host computers.

Figure  HA cluster configuration (N:1 standby configuration) (F120103E.ai; figure omitted)

About the active virtualization host computers

For each virtualization host computer, determine the number of cores so that the total number of physical CPU cores satisfies the following condition:

(Total number of physical CPU cores in the physical server) ≥ (number of cores of the host OS in HA cluster configuration) + Σ (number of virtual cores of each virtual machine)

About the standby virtualization host computer

For the standby virtualization host computer, select the server CPU so that it has the same total number of physical CPU cores as the active virtualization host computer with the largest total number of physical CPU cores.

Figure  Standby virtualization host computer (F120104E.ai: the total resource capacity of the standby virtualization host computer is equal to the largest total resource capacity among the active virtualization host computers; figure omitted)

Active/standby shared configuration (the number of virtualization host computers is M, where M ≥ 2)

This is the case when, instead of arranging a dedicated standby virtualization host computer, the cluster is configured on virtualization host computers that serve both active and standby roles.

Figure  HA cluster configuration (active/standby configuration) (F120105E.ai; figure omitted)

About the active/standby shared virtualization host computers

Use the following procedure to find the number of cores of the physical server:

(1) Obtain the number of active virtual machines Nn on each virtualization host computer, and take the maximum value as Nmax. An active virtual machine is a virtual machine that you normally run on each virtualization host computer. (Figure F120106E.ai; figure omitted)

(2) Calculate an integer value K of 1 or more that satisfies the following formula:
(K − 1) × (M − 1) < Nmax ≤ K × (M − 1)
M: number of servers
Nmax: maximum number of active virtual machines on each virtualization host computer

(3) For each virtualization host computer, find the total value Cn of the numbers of cores of the K virtual machines with the largest numbers of virtual cores. Let Cmax be the maximum value among the Cn values of the virtualization host computers. (Figure F120107E.ai, case K = 2; figure omitted)

(4) For each virtualization host computer, determine the number of cores so that the total number of physical CPU cores satisfies the following condition:
(Total number of physical CPU cores in the physical server) ≥ (number of cores of the host OS in HA cluster configuration) + Σ (number of virtual cores of each active virtual machine) + Cmax
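The four steps above can be sketched as a short calculation. This is a hypothetical illustration; `vm_cores_per_host` lists, for each virtualization host computer, the virtual-core counts of the active virtual machines it normally runs.

```python
import math

def required_physical_cores(host_os_cores, vm_cores_per_host):
    """Apply steps (1)-(4): return the minimum physical core count
    for each active/standby shared virtualization host computer."""
    m = len(vm_cores_per_host)                           # number of servers M
    nmax = max(len(vms) for vms in vm_cores_per_host)    # step (1): Nmax
    k = math.ceil(nmax / (m - 1))                        # step (2): smallest K with Nmax <= K*(M-1)
    # step (3): Cn = total cores of the K largest VMs on each host; Cmax = max Cn
    cmax = max(sum(sorted(vms, reverse=True)[:k]) for vms in vm_cores_per_host)
    # step (4): host OS cores + active VM cores + Cmax, per host
    return [host_os_cores + sum(vms) + cmax for vms in vm_cores_per_host]
```

For example, with M = 2 hosts, a 4-core host OS, and active virtual machines of [2, 2, 2] and [2, 2] cores, Nmax = 3, K = 3, and Cmax = 6, giving 16 and 14 physical cores respectively.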

Memory

Single configuration

The total amount of physical memory of the physical server that runs as the virtualization host computer is determined so as to satisfy the following condition:

(Physical memory capacity of the physical server) ≥ (memory capacity of the host OS in single configuration) + Σ (memory capacity of each virtual machine) + Σ (overhead of virtual machine management by the host OS)

Figure  (F120108E.ai: the implemented physical memory covers the host OS, the memory resources of each virtual machine, and the overhead of virtual machine management; figure omitted)

The overhead for virtual machine management is the amount of memory that the host OS requires for each virtual machine in order to configure and manage it, such as the video memory of the virtual machine and the memory address conversion table.

HA cluster configuration

N:1 standby configuration (number of virtualization host computers is N+1)

This is the case when configuring a cluster with one standby virtualization host computer for N active virtualization host computers. For an illustration, see the CPU section.

About the active virtualization host computers

For each virtualization host computer, determine the memory capacity so that it satisfies the following condition:

(Physical memory capacity of the physical server) ≥ (memory capacity of the host OS in HA cluster configuration) + Σ (memory capacity of each guest OS) + Σ (overhead of virtual machine management by the host OS)

About the standby virtualization host computer

For the standby virtualization host computer, select the server memory so that it has the same memory capacity as the active virtualization host computer with the largest memory capacity.

Active/standby shared configuration (the number of virtualization host computers is M, where M ≥ 2)

This is the case when, instead of arranging a dedicated standby virtualization host computer, the cluster is configured on virtualization host computers that serve both active and standby roles. For an illustration, see the CPU section.

About the active/standby shared virtualization host computers

For the procedure of finding the memory capacity of the physical server, refer to the CPU section, reading it as follows:
- Read "number of cores" as "memory capacity".
- Read the last calculation formula as follows:
(Physical memory capacity of the physical server) ≥ (memory capacity of the host OS in HA cluster configuration) + Σ (memory capacity of each guest OS) + Σ (overhead of virtual machine management by the host OS) + Cmax

Storage

For the storage of the virtualization host computer, prepare physically separate disks for the host OS and for the virtual machines. This section describes the storage for virtual machines.

Figure  (F120109E.ai: the virtual hard disk images of the virtual machines are placed on a physical hard disk separate from the system disk of the host OS; figure omitted)

Determine the capacity, the total IOPS (input/output operations per second) (*1), and the throughput (MB/s) (*2) required for the storage so as to satisfy the following formulas.

(Total capacity of the storage for virtual machines) ≥ Σ (virtual hard disk capacity of each virtual machine) + Σ (virtual machine management overhead of the host OS)

When performing backup, calculate by doubling the virtual hard disk capacity.

(Total allowable IOPS of the storage for virtual machines) × (70%) ≥ Σ (upper limit of the IOPS of each virtual machine's storage access)

(Throughput of the storage for virtual machines) × (70%) ≥ Σ (upper limit of the throughput of each virtual machine's storage access)

*1: Number of read/write instructions per second
*2: Total value of the reading speed and writing speed
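The two 70-percent conditions above can be checked with a small sketch. The helper name and argument layout are hypothetical; the values are the per-virtual-machine upper limits described above.

```python
def storage_within_limits(storage_iops, storage_throughput_mbs,
                          vm_iops_limits, vm_throughput_limits_mbs):
    """Check that the sums of the per-VM upper limits stay within
    70% of the storage device's total IOPS and throughput (MB/s)."""
    return (sum(vm_iops_limits) <= 0.70 * storage_iops and
            sum(vm_throughput_limits_mbs) <= 0.70 * storage_throughput_mbs)

# Two VMs against a device rated 10,000 IOPS and 400 MB/s
ok = storage_within_limits(10000, 400, [2000, 3000], [100, 150])
```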

Network

The numbers of network ports and the network bandwidth shown in this section are designed based on the following policy:
- Estimate the number of virtual machines integrated per server as 18, and allow simultaneous operation of up to 18 virtual machines.
- The connections where a network failure directly stops plant operation (Vnet/IP, remote UI network, storage network) are made redundant.
- Unless Yokogawa system products or the virtualization software require otherwise, the network bandwidth shall be 1 Gbps.
- The network is divided into segments or physical ports by role.

Single configuration

The numbers of network ports and the bandwidth requirements for the single configuration are as follows.

- Vnet/IP
  Requirements: 1 Gbps Ethernet x 2 ports
  Remarks: Two systems are required for the duplexed configuration.
- Plant information network
  Requirements: 1 Gbps Ethernet x 1 port
  Remarks: When more bandwidth is required, an integral multiple of this number is required.
- Management network
  Requirements: 1 Gbps Ethernet x 1 port (used only for management purposes), or 5 Gbps or more Ethernet x 1 port (management / live migration / replication)
  Remarks: Used for server management purposes.
- Remote UI network
  Requirements: 1 Gbps Ethernet x 2 ports
  Remarks: Used for communication between thin clients and guest OSes. Two systems are required for the dual-redundant configuration.

HA cluster configuration

The numbers of network ports and the bandwidth requirements for the HA cluster configuration are as follows.

- Vnet/IP
  Requirements: 1 Gbps Ethernet x 2 ports
  Remarks: Two systems are required for the duplexed configuration.
- Plant information network
  Requirements: 1 Gbps Ethernet x 1 port
  Remarks: When more bandwidth is required, an integral multiple of this number is required.
- Management network
  Requirements: 1 Gbps Ethernet x 1 port (used only for management purposes), or 5 Gbps or more Ethernet x 1 port (management / replication)
  Remarks: Used for server management purposes.
- Remote UI network
  Requirements: 1 Gbps Ethernet x 2 ports
  Remarks: Used for communication between thin clients and guest OSes. Two systems are required for the dual-redundant configuration.
- Storage network
  Requirements: 10 Gbps Ethernet x 2 ports
  Remarks: Used for communication between the virtualization host computer and the shared storage. Two systems are required for the dual-redundant configuration.
- HA cluster network
  Requirements: 5 Gbps or more Ethernet x 1 port
  Remarks: Used for communication between the servers that constitute the cluster. Also used for live migration.

13. Appendix B: Engineering Memo

13.1 Resource Control

On the virtualization host computer, resources are shared between virtual machines. Therefore, in order for each virtual machine to operate properly without influencing the others, you must control resources by setting resource usage priorities, resource limits, and so on. Set up resource control for all virtual machines on the virtualization platform.

Guest OS

The state in which the total of the resources allocated to the virtual machines is greater than the physical resources of the virtualization host computer is called the resource over-committed state. The over-committed state can be accommodated by the resource control settings, but it is not recommended because the behavior is not guaranteed.

CPU

With Microsoft Hyper-V, reserved values can be specified for the CPU resources of each virtual machine. The reserved CPU resource amount can be used exclusively by that virtual machine without interference from other virtual machines. When the following condition is satisfied, the over-committed state does not occur, so CPU resource control is unnecessary:

(Total number of physical cores of the physical server) ≥ Σ (number of processors of each virtual machine) + (number of processors of the host OS)

Memory

In Microsoft Hyper-V, the memory of a virtual machine can either be assigned statically as a fixed value, or dynamic memory can be activated to permit dynamic change of the memory amount. When dynamic memory is activated, the hypervisor may reclaim some memory depending on the memory usage of the virtual machine. As a result, reallocation may take time when the virtual machine needs the memory again. Therefore, activating dynamic memory is prohibited. When the following condition is satisfied, the over-committed state does not occur, so memory resource control is unnecessary:
(Physical memory amount of the physical server) ≥ Σ (memory amount of each virtual machine) + (memory amount of the host OS)
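The CPU and memory over-commit conditions have the same shape and can be expressed as one check. This is a hypothetical sketch, not a product tool.

```python
def is_overcommitted(physical_total, host_os_share, vm_shares):
    """True if the resource (core count or memory amount) is over-committed,
    i.e. the physical total is less than the host OS share plus the
    sum of the amounts allocated to the virtual machines."""
    return physical_total < host_os_share + sum(vm_shares)

# CPU: 16 physical cores, 2 host OS processors, four 2-processor VMs
cpu_over = is_overcommitted(16, 2, [2, 2, 2, 2])    # 16 >= 10: not over-committed
# Memory: 64 GB physical, 10 GB host OS, fourteen 4.4 GB VMs
mem_over = is_overcommitted(64, 10, [4.4] * 14)     # 64 < 71.6: over-committed
```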

Hard Disk (Storage)

In server virtualization, storage devices are shared by multiple virtual machines. Therefore, you must ensure that the total IOPS (the number of I/O accesses per unit time) and the total data transfer rate of the virtual machines do not exceed the throughput of the storage device and the data transfer rate of the intermediate route. With Microsoft Hyper-V, you can set the upper limit of IOPS per virtual machine with the hard drive Quality of Service settings. Set the upper limit of IOPS for each virtual machine and adjust all the virtual machines so that the total does not exceed 70 percent of the total processing capability of the storage device. Similarly, the upper limit of the data transfer rate of a virtual machine can be taken as (8 KB) × (IOPS). Adjust all virtual machines so that they do not exceed the data transfer rate of the intermediate path between the virtual machines and the storage device. The method of setting resource control on the hard disk differs between the single configuration and the HA cluster configuration of the virtualization host computer. For the setting method, refer to Section 11. When using the selected server according to this setting, adjust the total data transfer rate of all virtual machines to 288 MB/s or less per selected server.

NIC

In server virtualization, a NIC (including an onboard Ethernet port) is shared by multiple virtual machines. Therefore, you must ensure that the total network bandwidth used by the virtual machines does not exceed the network bandwidth of the NIC. With Microsoft Hyper-V, you can set the upper limit of network bandwidth for each virtual machine using network adapter bandwidth management. Set the upper limit of network bandwidth for each virtual machine and adjust all the virtual machines so that the total does not exceed the network bandwidth of the NIC. This setting is configured for each virtual machine using Hyper-V Manager.
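The (8 KB) × (IOPS) relation and the 288 MB/s per-server total can be sketched as follows. The helper names are hypothetical; the 288 MB/s default is the figure given above for the selected server.

```python
def vm_transfer_rate_mbs(iops_limit):
    """Upper limit of a virtual machine's data transfer rate,
    taken as (8 KB) x (IOPS), converted to MB/s."""
    return iops_limit * 8 / 1024

def within_server_budget(vm_iops_limits, budget_mbs=288.0):
    """Check that the total transfer rate of all virtual machines on
    one selected server stays at or below the per-server budget."""
    return sum(vm_transfer_rate_mbs(i) for i in vm_iops_limits) <= budget_mbs
```

For example, a virtual machine limited to 12,800 IOPS corresponds to 100 MB/s, so two such machines fit within the 288 MB/s budget but four do not.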

Relationship between the Number of Zones and the Number of Network Cards

When using a rackmount type server of the specified model (R740XL), you can install up to four individual zones in the network. However, the number that can actually be installed depends on the number of network cards installed in the virtualization host computer. Prepare the virtualization host computer taking this into consideration. The relationship between the number of network cards and the number of installable zones is shown in the following table.

Table  Relationship between the number of network cards and the number of installable zones
(columns: NIC mounted number (*1) / Number of installable zones / Remarks)

*1: Number of 1 Gbps 4-port network cards

In the case of the modular type server (FC640), up to one zone is applicable.

13.3 idefine of ProSafe-RS

The idefine license operating in a virtualization environment is authenticated through the Dongle Gateway, which is a Windows service.

Dongle Gateway

Dongle Gateway can be installed by using the installer provided by Trinity Integrated Systems Ltd. The user can obtain the installer for Dongle Gateway from the website of Trinity Integrated Systems Ltd. and execute it on the computer where Dongle Gateway is to be installed. For the specification of Dongle Gateway, refer to the included User Guide.

Placement

The user should install Dongle Gateway on a Windows-based thin client to which IT security is applied, or on a computer where the SENG software is installed. The order of applying the IT security settings and installing Dongle Gateway does not matter. The user must set up the USB dongle so that Dongle Gateway can recognize it. There are three types of placement, as follows.

(1) Installing on the thin client device

The user should install Dongle Gateway and insert the USB dongle on the thin client device.

Figure  Installing Dongle Gateway on the thin client (F130401E.ai: the thin client holds the USB dongle and Dongle Gateway and connects over RDP via the remote UI network to the HIS and SENG/idefine virtual machines running on Windows Server 2016 Hyper-V; figure omitted)

(2) Installing on the physical SENG

If a physical SENG computer exists, the user should install Dongle Gateway and insert the USB dongle on that computer.

Figure  Installing Dongle Gateway on the physical SENG (the thin client connects over RDP via the remote UI network to the HIS and SENG/idefine virtual machines, while the physical SENG (real machine) on the plant information network holds Dongle Gateway and the USB dongle; figure omitted)

(3) Installing on the virtual SENG

If no physical SENG computer exists, the user should install Dongle Gateway on the virtual SENG and insert the USB dongle into the USB device server (myutn-50a) so that the USB dongle can be recognized through the USB device server.

Figure  Installing Dongle Gateway on the virtual SENG (the USB dongle is inserted into the USB device server on the plant information network, and Dongle Gateway runs on the virtual SENG machine; figure omitted)

Note: Multiple idefine instances can connect to one Dongle Gateway.

License authentication procedure

1. Starting Dongle Gateway
Start the computer on which Dongle Gateway is installed. The Dongle Gateway Windows service starts automatically at computer startup. The user can also stop or start the service by using the Dongle Gateway Configurator that is included with Dongle Gateway.

2. Connecting to Dongle Gateway from idefine
Specify the IP address of Dongle Gateway to connect to on idefine.

Figure  Specifying the IP address of Dongle Gateway on idefine (figure omitted)


More information

Introduction to Virtualization. From NDG In partnership with VMware IT Academy

Introduction to Virtualization. From NDG In partnership with VMware IT Academy Introduction to Virtualization From NDG In partnership with VMware IT Academy www.vmware.com/go/academy Why learn virtualization? Modern computing is more efficient due to virtualization Virtualization

More information

General Specifications

General Specifications General Specifications GS 33J20C20-01EN Model VP6B1600 Unified Gateway Station (UGS2) Standard Function Model VP6B1601 Dual-redundant Package (for UGS2) OVERVIEW Unified Gateway Station (UGS2) is a Vnet/IP

More information

Disaster Recovery Solution Achieved by EXPRESSCLUSTER

Disaster Recovery Solution Achieved by EXPRESSCLUSTER Disaster Recovery Solution Achieved by EXPRESSCLUSTER November, 2015 NEC Corporation, Cloud Platform Division, EXPRESSCLUSTER Group Index 1. Clustering system and disaster recovery 2. Disaster recovery

More information

Virtualization And High Availability. Howard Chow Microsoft MVP

Virtualization And High Availability. Howard Chow Microsoft MVP Virtualization And High Availability Howard Chow Microsoft MVP Session Objectives And Agenda Virtualization and High Availability Types of high availability enabled by virtualization Enabling a highly

More information

VMware vsphere with ESX 4.1 and vcenter 4.1

VMware vsphere with ESX 4.1 and vcenter 4.1 QWERTYUIOP{ Overview VMware vsphere with ESX 4.1 and vcenter 4.1 This powerful 5-day class is an intense introduction to virtualization using VMware s vsphere 4.1 including VMware ESX 4.1 and vcenter.

More information

High Availability for Virtual Environment

High Availability for Virtual Environment High Availability for Virtual Environment November, 2015 NEC Corporation, Cloud Platform Division, EXPRESSCLUSTER Group Index 1. Function Overview 2. Case Study on Virtual Environment 1. Function Overview

More information

General Specifications

General Specifications General Specifications GS 36J02A10-01E NTPF100 OPC Interface Package GENERAL As data sharing between information systems increases, the requirement to efficiently access and use plant information to meet

More information

High-reliability, High-availability Cluster System Supporting Cloud Environment

High-reliability, High-availability Cluster System Supporting Cloud Environment High-reliability, High-availability Cluster Supporting Cloud Environment Seishiro Hamanaka Kunikazu Takahashi Fujitsu is offering FUJITSU Software, a high-reliability software platform that supports the

More information

General Specifications

General Specifications General Specifications Model VP6H6530 Package [Release 6] GENERAL The Package imports process, trend and closing of the Human Interface Station (HIS) into Microsoft Excel spreadsheets to generate and print

More information

General Specifications

General Specifications General Specifications Model LHS5100, LHM5100 Standard Builder Function GENERAL The standard builder function package is used for configuring the system of. It creates database necessary for implementing

More information

Improving Blade Economics with Virtualization

Improving Blade Economics with Virtualization Improving Blade Economics with Virtualization John Kennedy Senior Systems Engineer VMware, Inc. jkennedy@vmware.com The agenda Description of Virtualization VMware Products Benefits of virtualization Overview

More information

Cluster Configuration Design Guide (Linux/PRIMECLUSTER)

Cluster Configuration Design Guide (Linux/PRIMECLUSTER) C122-A007-04EN PRIMEQUEST 1000 Series Cluster Configuration Design Guide (Linux/PRIMECLUSTER) FUJITSU LIMITED Preface This manual describes the network and shared I/O unit information and configuration

More information

Virtualization with VMware ESX and VirtualCenter SMB to Enterprise

Virtualization with VMware ESX and VirtualCenter SMB to Enterprise Virtualization with VMware ESX and VirtualCenter SMB to Enterprise This class is an intense, five-day introduction to virtualization using VMware s immensely popular Virtual Infrastructure suite including

More information

Exam : VMWare VCP-310

Exam : VMWare VCP-310 Exam : VMWare VCP-310 Title : VMware Certified Professional on VI3 Update : Demo 1. Which of the following files are part of a typical virtual machine? Select 3 response(s). A. Virtual Disk File (.vmdk)

More information

General Specifications

General Specifications General Specifications Model VP6E5170 Access Administrator Package (FDA:21 CFR Part 11 compliant) GS 33J10D40-01EN [Release 6] GENERAL Part 11 of Code of Federal Regulations Title 21 (21 CFR Part 11) issued

More information

VMware vsphere. Using vsphere VMware Inc. All rights reserved

VMware vsphere. Using vsphere VMware Inc. All rights reserved VMware vsphere Using vsphere 2010 VMware Inc. All rights reserved Migrating VMs VMs Move from one host to another Powered on VM requires VMware vmotion VM Files in Datastores Move from one datastore to

More information

General Specifications

General Specifications General Specifications Model NTPA230 fitoms TIM Tank Inventory Management Module GENERAL The TIM (Tank Inventory Management ) is one of the packages in the fitoms (Future Integration Technology for Oil

More information

General Specifications

General Specifications General Specifications Models PW601, PW602 24 V DC Output Power Supplies Model AEP9D Secondary Power Supply Bus Unit [Release 6] GENERAL This GS covers the hardware specifications of the 24 V DC Output

More information

HUAWEI OceanStor Enterprise Unified Storage System. HyperReplication Technical White Paper. Issue 01. Date HUAWEI TECHNOLOGIES CO., LTD.

HUAWEI OceanStor Enterprise Unified Storage System. HyperReplication Technical White Paper. Issue 01. Date HUAWEI TECHNOLOGIES CO., LTD. HUAWEI OceanStor Enterprise Unified Storage System HyperReplication Technical White Paper Issue 01 Date 2014-03-20 HUAWEI TECHNOLOGIES CO., LTD. 2014. All rights reserved. No part of this document may

More information

General Specifications

General Specifications General Specifications Model LHS6530 Package GS 33K05J20-50E GENERAL The Package imports process, trend and closing of the Human Interface Station (HIS) into Microsoft Excel spreadsheets to generate and

More information

FUJITSU Storage ETERNUS AF series and ETERNUS DX S4/S3 series Non-Stop Storage Reference Architecture Configuration Guide

FUJITSU Storage ETERNUS AF series and ETERNUS DX S4/S3 series Non-Stop Storage Reference Architecture Configuration Guide FUJITSU Storage ETERNUS AF series and ETERNUS DX S4/S3 series Non-Stop Storage Reference Architecture Configuration Guide Non-stop storage is a high-availability solution that combines ETERNUS SF products

More information

VMware vsphere with ESX 4 and vcenter

VMware vsphere with ESX 4 and vcenter VMware vsphere with ESX 4 and vcenter This class is a 5-day intense introduction to virtualization using VMware s immensely popular vsphere suite including VMware ESX 4 and vcenter. Assuming no prior virtualization

More information

Backup Solution Testing on UCS B and C Series Servers for Small-Medium Range Customers (Disk to Tape) Acronis Backup Advanced Suite 11.

Backup Solution Testing on UCS B and C Series Servers for Small-Medium Range Customers (Disk to Tape) Acronis Backup Advanced Suite 11. Backup Solution Testing on UCS B and C Series Servers for Small-Medium Range Customers (Disk to Tape) Acronis Backup Advanced Suite 11.5 First Published: June 24, 2015 Last Modified: June 26, 2015 Americas

More information

Vendor: EMC. Exam Code: E Exam Name: Cloud Infrastructure and Services Exam. Version: Demo

Vendor: EMC. Exam Code: E Exam Name: Cloud Infrastructure and Services Exam. Version: Demo Vendor: EMC Exam Code: E20-002 Exam Name: Cloud Infrastructure and Services Exam Version: Demo QUESTION NO: 1 In which Cloud deployment model would an organization see operational expenditures grow in

More information

Lifecycle Performance Care Services. Bulletin 43D02A00-04EN

Lifecycle Performance Care Services. Bulletin 43D02A00-04EN Performance Care Services Bulletin 43D02A00-04EN As your trusted partner, Yokogawa is always with you to address your concerns whether recognized or hidden. Performance Care Services offer a complete service

More information

General Specifications

General Specifications General Specifications LHS5170 Access Administrator Package (FDA:21 CFR Part 11 compliant) GS 33K10D40-50E GENERAL The Food and Drug Administration (FDA) issues 21 CFR Part 11 (Electronic Records; Electronic

More information

General Specifications

General Specifications General Specifications FCN Autonomous Controller Functions (FCN-500) GENERAL This document describes the system configurations, development/maintenance, software configurations, and network specifications

More information

General Specifications

General Specifications General Specifications GS 33K20S10-50E Model SSS7100 Device OPC Server GENERAL This document describes about Model SSS7100 OPC Server which provides data of field wireless gateway to the OPC client via

More information

ExpressCluster X 3.2 for Linux

ExpressCluster X 3.2 for Linux ExpressCluster X 3.2 for Linux Installation and Configuration Guide 5/23/2014 2nd Edition Revision History Edition Revised Date Description 1st 2/19/2014 New manual 2nd 5/23/2014 Corresponds to the internal

More information

Protecting Mission-Critical Workloads with VMware Fault Tolerance W H I T E P A P E R

Protecting Mission-Critical Workloads with VMware Fault Tolerance W H I T E P A P E R Protecting Mission-Critical Workloads with VMware Fault Tolerance W H I T E P A P E R Table of Contents Fault Tolerance and Virtualization... 3 Fault Tolerance in the Physical World... 3 VMware Fault Tolerance...

More information

ExpressCluster X 3.1 for Linux

ExpressCluster X 3.1 for Linux ExpressCluster X 3.1 for Linux Installation and Configuration Guide 10/11/2011 First Edition Revision History Edition Revised Date Description First 10/11/2011 New manual Copyright NEC Corporation 2011.

More information

ExpressCluster X 2.0 for Linux

ExpressCluster X 2.0 for Linux ExpressCluster X 2.0 for Linux Installation and Configuration Guide 03/31/2009 3rd Edition Revision History Edition Revised Date Description First 2008/04/25 New manual Second 2008/10/15 This manual has

More information

General Specifications

General Specifications General Specifications GS 36J06B10-01E Model NTPS200 Exapilot Operation Efficiency Improvement Package OVERVIEW One of the major concerns in the plant operations is how to reduce its operation costs so

More information

General Specifications

General Specifications General Specifications Model VP6F1250 Generic Subsystem Gateway Package [Release 6] GENERAL is an operation and monitoring station for subsystems that are pre-process and post-process pieces of equipment

More information

General Specifications

General Specifications General Specifications SS2CPML Product Maintenance License GENERAL Product Maintenance License (hereinafter, referred to as PML) is a license for support service of standard system software on customer's

More information

General Specifications

General Specifications General Specifications Model Communication Module (for N-IO/FIO) [Release 6] GENERAL This document describes about Model Communication Module (for N-IO/FIO) which performs as the master device (referred

More information

The vsphere 6.0 Advantages Over Hyper- V

The vsphere 6.0 Advantages Over Hyper- V The Advantages Over Hyper- V The most trusted and complete virtualization platform SDDC Competitive Marketing 2015 Q2 VMware.com/go/PartnerCompete 2015 VMware Inc. All rights reserved. v3b The Most Trusted

More information

NICE Perform Virtualization Solution Overview

NICE Perform Virtualization Solution Overview INSIGHT FROM INTERACTIONS Solution Overview NICE Perform Virtualization Solution Overview Table of Contents Introduction... 3 Server Virtualization... 4 The Virtualization Layer (aka Hypervisor)... 6 CPU

More information

General Specifications

General Specifications General Specifications GS 32Q01B30-31E ProSafe-RS Safety Instrumented System Overview (for Vnet/IP-Upstream) GENERAL The ProSafe-RS is a Safety Instrumented System that is certified by the German certification

More information

Availability & Resource

Availability & Resource Achieving Cost-effective High Availability & Resource Management Agenda Virtual Infrastructure Stack How Vmware helps in the Data Center Availability and Resource Management 2 The VMware Virtual Infrastructure

More information

PRIMECLUSTER Global Link Services Configuration and Administration Guide: Redundant Line Control Function V4.1

PRIMECLUSTER Global Link Services Configuration and Administration Guide: Redundant Line Control Function V4.1 PRIMECLUSTER Global Link Services Configuration and Administration Guide: Redundant Line Control Function V4.1 Preface This manual describes PRIMECLUSTER GLS (Redundant Line Control Function) and explains

More information

VMware HA: Overview & Technical Best Practices

VMware HA: Overview & Technical Best Practices VMware HA: Overview & Technical Best Practices Updated 8/10/2007 What is Business Continuity? Business Continuity = Always-on uninterrupted availability of business systems and applications Business Continuity

More information

"Charting the Course... VMware vsphere 6.7 Boot Camp. Course Summary

Charting the Course... VMware vsphere 6.7 Boot Camp. Course Summary Description Course Summary This powerful 5-day, 10 hour per day extended hours class is an intensive introduction to VMware vsphere including VMware ESXi 6.7 and vcenter 6.7. This course has been completely

More information

EXPRESSCLUSTER D Product Introduction

EXPRESSCLUSTER D Product Introduction EXPRESSCLUSTER D Product Introduction May, 2016 EXPRESSCLUSTER Group, Cloud Platform Division, NEC Corporation 2 NEC Corporation 2016 Agenda Product Introduction 1. What is HA Cluster? 2. Achievement 3.

More information

General Specifications

General Specifications General Specifications STARDOM FOUNDATION fieldbus Communication GENERAL This General Specification (GS) describes STARDOM system which using autonomous controller FCN, FCN-RTU and FCJ (hereinafter referred

More information

Backup Solution Testing on UCS B-Series Server for Small-Medium Range Customers (Disk to Tape) Acronis Backup Advanced Suite 11.5

Backup Solution Testing on UCS B-Series Server for Small-Medium Range Customers (Disk to Tape) Acronis Backup Advanced Suite 11.5 Backup Solution Testing on UCS B-Series Server for Small-Medium Range Customers (Disk to Tape) Acronis Backup Advanced Suite 11.5 First Published: March 16, 2015 Last Modified: March 19, 2015 Americas

More information

70-414: Implementing an Advanced Server Infrastructure Course 01 - Creating the Virtualization Infrastructure

70-414: Implementing an Advanced Server Infrastructure Course 01 - Creating the Virtualization Infrastructure 70-414: Implementing an Advanced Server Infrastructure Course 01 - Creating the Virtualization Infrastructure Slide 1 Creating the Virtualization Infrastructure Slide 2 Introducing Microsoft System Center

More information

General Specifications

General Specifications General Specifications GS 36J04B10-01E NTPB001 NTPB010 Batch Plant Information Management System GENERAL Batch is an intelligent and scalable ISA- 88 based Batch PIMS (Plant Information Management System).

More information

General Specifications

General Specifications General Specifications GS 32P06P10-01EN Models S2BN4D, S2BN5D Base Plates for Barrier (for N-IO) System Models: S2ZN4D, S2ZN5D N-IO I/O Unit GENERAL This General Specifications (GS) provides the specifications

More information

General Specifications

General Specifications General Specifications NTPS100 Exaplog Event Analysis Package GS 36J06A10-01E GENERAL The Exaplog Event Analysis Package is designed to provide managers, engineers and supervising operators with tools

More information

Paragon Protect & Restore

Paragon Protect & Restore Paragon Protect & Restore ver. 3 Centralized Backup and Disaster Recovery for virtual and physical environments Tight Integration with hypervisors for agentless backups, VM replication and seamless restores

More information

VX3000-E Unified Network Storage

VX3000-E Unified Network Storage Datasheet VX3000-E Unified Network Storage Overview VX3000-E storage, with high performance, high reliability, high available, high density, high scalability and high usability, is a new-generation unified

More information

VIRTUAL APPLIANCES. Frequently Asked Questions (FAQ)

VIRTUAL APPLIANCES. Frequently Asked Questions (FAQ) VX INSTALLATION 2 1. I need to adjust the disk allocated to the Silver Peak virtual appliance from its default. How should I do it? 2. After installation, how do I know if my hard disks meet Silver Peak

More information

General Specifications

General Specifications General Specifications Models AFV10S, AFV10D Field Control Unit Duplexed Field Control Unit (for Vnet/IP, for FIO, 19" Rack Mountable Type) R3 GENERAL This GS covers the hardware specifications of the

More information

Disaster Recovery-to-the- Cloud Best Practices

Disaster Recovery-to-the- Cloud Best Practices Disaster Recovery-to-the- Cloud Best Practices HOW TO EFFECTIVELY CONFIGURE YOUR OWN SELF-MANAGED RECOVERY PLANS AND THE REPLICATION OF CRITICAL VMWARE VIRTUAL MACHINES FROM ON-PREMISES TO A CLOUD SERVICE

More information

A Dell Technical White Paper Dell Virtualization Solutions Engineering

A Dell Technical White Paper Dell Virtualization Solutions Engineering Dell vstart 0v and vstart 0v Solution Overview A Dell Technical White Paper Dell Virtualization Solutions Engineering vstart 0v and vstart 0v Solution Overview THIS WHITE PAPER IS FOR INFORMATIONAL PURPOSES

More information

Dell Fluid Data solutions. Powerful self-optimized enterprise storage. Dell Compellent Storage Center: Designed for business results

Dell Fluid Data solutions. Powerful self-optimized enterprise storage. Dell Compellent Storage Center: Designed for business results Dell Fluid Data solutions Powerful self-optimized enterprise storage Dell Compellent Storage Center: Designed for business results The Dell difference: Efficiency designed to drive down your total cost

More information

General Specifications

General Specifications General Specifications GS 36J04B10-01E Models NTPB001 NTPB010 Plant Information Management System GENERAL is an intelligent and scalable ISA-88 based Batch PIMS (Plant Information Management System). It

More information

General Specifications

General Specifications General Specifications System Overview Outline This General Specifi cations (GS) describes features of a fi eld wireless system, system confi guration a fi eld wireless network, system confi guration devices,

More information

Improving availability of VMware vsphere 4 s. virtualization environment with ExpressCluster X

Improving availability of VMware vsphere 4 s. virtualization environment with ExpressCluster X Improving availability of ware vsphere 4 s virtualization environment with ExpressCluster X 1. To begin with In a server integrated environment, due to adoption of virtual environments, system failure

More information

General Specifications

General Specifications General Specifications GS 33J60G20-01EN Model ALF111 Foundation TM fieldbus Communication Module (for N-IO/FIO) [Release 6] GENERAL This document describes about Model ALF111 Foundation fieldbus Communication

More information

Red Hat enterprise virtualization 3.1 feature comparison

Red Hat enterprise virtualization 3.1 feature comparison Red Hat enterprise virtualization 3.1 feature comparison at a glance Red Hat Enterprise Virtualization 3.1 is first fully open source, enterprise ready virtualization platform Compare functionality of

More information

Pass-Through Technology

Pass-Through Technology CHAPTER 3 This chapter provides best design practices for deploying blade servers using pass-through technology within the Cisco Data Center Networking Architecture, describes blade server architecture,

More information

Paul Hodge Virtualization Solutions: Improving Efficiency, Availability and Performance

Paul Hodge Virtualization Solutions: Improving Efficiency, Availability and Performance 2012 Honeywell Users Group Americas Sustain.Ability. Paul Hodge Virtualization Solutions: Improving Efficiency, Availability and Performance 1 Experion Virtualization Solutions Overview 2 Virtualization

More information

General Specifications

General Specifications General Specifications Integrated Production Control System System Overview (Vnet/IP Edition) GENERAL This document describes about Production Control System (for Vnet/IP) which controls and monitors industrial

More information

General Specifications

General Specifications General Specifications GS 33S01B10-31E Production Control System CENTUM CS 1000 System Overview R3 GENERAL This GS covers the system specifications, components and network specifications of the CENTUM

More information

VMware vsphere Administration Training. Course Content

VMware vsphere Administration Training. Course Content VMware vsphere Administration Training Course Content Course Duration : 20 Days Class Duration : 3 hours per day (Including LAB Practical) Fast Track Course Duration : 10 Days Class Duration : 8 hours

More information

EMC Business Continuity for Microsoft Applications

EMC Business Continuity for Microsoft Applications EMC Business Continuity for Microsoft Applications Enabled by EMC Celerra, EMC MirrorView/A, EMC Celerra Replicator, VMware Site Recovery Manager, and VMware vsphere 4 Copyright 2009 EMC Corporation. All

More information

ExpressCluster X R3 WAN Edition for Windows

ExpressCluster X R3 WAN Edition for Windows ExpressCluster X R3 WAN Edition for Windows Installation and Configuration Guide v2.1.0na Copyright NEC Corporation 2014. All rights reserved. Copyright NEC Corporation of America 2011-2014. All rights

More information

2014 Software Global Client Conference

2014 Software Global Client Conference WW HMI SCADA-10 Best practices for distributed SCADA Stan DeVries Senior Director Solutions Architecture What is Distributed SCADA? It s much more than a distributed architecture (SCADA always has this)

More information

Network+ Guide to Networks 6 th Edition

Network+ Guide to Networks 6 th Edition Network+ Guide to Networks 6 th Edition Chapter 10 Virtual Networks and Remote Access Objectives 1. Explain virtualization and identify characteristics of virtual network components 2. Create and configure

More information

VMware vsphere 6.5: Install, Configure, Manage (5 Days)

VMware vsphere 6.5: Install, Configure, Manage (5 Days) www.peaklearningllc.com VMware vsphere 6.5: Install, Configure, Manage (5 Days) Introduction This five-day course features intensive hands-on training that focuses on installing, configuring, and managing

More information

By the end of the class, attendees will have learned the skills, and best practices of virtualization. Attendees

By the end of the class, attendees will have learned the skills, and best practices of virtualization. Attendees Course Name Format Course Books 5-day instructor led training 735 pg Study Guide fully annotated with slide notes 244 pg Lab Guide with detailed steps for completing all labs vsphere Version Covers uses

More information

T14 - Network, Storage and Virtualization Technologies for Industrial Automation. Copyright 2012 Rockwell Automation, Inc. All rights reserved.

T14 - Network, Storage and Virtualization Technologies for Industrial Automation. Copyright 2012 Rockwell Automation, Inc. All rights reserved. T14 - Network, Storage and Virtualization Technologies for Industrial Automation Rev 5058-CO900C Copyright 2012 Rockwell Automation, Inc. All rights reserved. 2 Agenda Overview & Drivers Virtualization

More information

VMware vsphere 6.5 Boot Camp

VMware vsphere 6.5 Boot Camp Course Name Format Course Books 5-day, 10 hour/day instructor led training 724 pg Study Guide fully annotated with slide notes 243 pg Lab Guide with detailed steps for completing all labs 145 pg Boot Camp

More information

Virtualization with Arcserve Unified Data Protection

Virtualization with Arcserve Unified Data Protection Virtualization with Arcserve Unified Data Protection Server and desktop virtualization have become very pervasive in most organizations, and not just in the enterprise. Everybody agrees that server virtualization

More information

Hitachi Integrated Instrumentation System

Hitachi Integrated Instrumentation System Hitachi Integrated Instrumentation System For the best operation Since its launch in 1975, Hitachi integrated instrumentation system EX series has been used as a supervisory control system in various fields

More information

VMware Join the Virtual Revolution! Brian McNeil VMware National Partner Business Manager

VMware Join the Virtual Revolution! Brian McNeil VMware National Partner Business Manager VMware Join the Virtual Revolution! Brian McNeil VMware National Partner Business Manager 1 VMware By the Numbers Year Founded Employees R&D Engineers with Advanced Degrees Technology Partners Channel

More information

vsphere Networking Update 1 ESXi 5.1 vcenter Server 5.1 vsphere 5.1 EN

vsphere Networking Update 1 ESXi 5.1 vcenter Server 5.1 vsphere 5.1 EN Update 1 ESXi 5.1 vcenter Server 5.1 vsphere 5.1 This document supports the version of each product listed and supports all subsequent versions until the document is replaced by a new edition. To check

More information