Video Surveillance Solutions Using NetApp E-Series Storage


Technical Report

Video Surveillance Solutions Using NetApp E-Series Storage
Joel W. King, James Laing, NetApp
October 2013 | TR-4233

Abstract

Video surveillance solutions based on NetApp E-Series offer the physical-security integrator a highly scalable repository for video management systems supporting high camera counts, megapixel resolutions, high frame rates, and long retention periods. The architecture is designed to provide high reliability and availability to meet the demands of video surveillance deployments.

TABLE OF CONTENTS

1 Introduction
    Publication Scope
    Audience
    Why E-Series?
    Training Offerings
Overview and Best Practices
    Video Surveillance Market
    Surveillance Cameras
    Retention Periods
    Converged Networks
    Standards-Based Open Architectures
    Solution Components
    Video Management System Software
    Deployment Characteristics
    Best Practice Guidelines
Solution Components
    Deployment Models
    Network Video Cameras
    IP Network
    Video Management Software
    Viewing Workstation
    Video Recording Server
    Storage
Planning and Design
    Virtualization of Servers
    File System
    Storage Planning with E-Series
    Workflow
    Deployment Example
    I/O Characteristics
    High Availability
    Multipath Overview
    Network Planning
    Server Planning

    4.11 Design Checklist
Sizing Fundamentals
    System Requirements
    General Considerations
    Retention Period
    Reserve Capacity
    Cameras
    Centrally Stored Video Clips
    Tiered Storage
    High Availability
Sizing E-Series for Video Surveillance
    Storage, Operating System, and File System Capacity Considerations
    New Technology File System
    VMware ESXi
Sizing Examples
    Sizing Example 1: A Simple Deployment
    Sizing Example 2: Larger System with Failover and RAID
    Sizing Example 3: Complex Deployment for a Multiuse Center
    Sizing Checklist
Performance Considerations
    Overview
    Operational Considerations
    E-Series Storage Array Configurable Performance Options
    E-Series Performance Checklist
    Example 1: E-Series Storage Array E
    Example 2: E-Series E5400 Storage Array
    Hypervisor: VMware ESXi
    Hypervisor: Virtual Machine Layout
    Hypervisor: Performance Monitoring
    Hypervisor: Virtual Servers
    Hypervisor: Guest OS: Windows 2008 R2
    Server Management Network
    Video Ingress Network

    10.7 Uplinks
Performance Validation
    Baseline Performance: Serial Attached SCSI
    Performance Validation: Recording and Viewing
    Performance Validation of Tiered Storage
    Archive Function
    Recording Server
    E-Series Array Performance Monitoring While Archiving
    I/O Latency
    Other Performance Considerations
    Performance Validation: Grooming
    Recording on 3TB NL-SAS Using RAID
    Performance Summary
Video Management System Partners
    Milestone XProtect Corporate
    On-Net Surveillance Systems Inc. Ocularis ES (OnSSI)
    Verint Nextiva
    Genetec Omnicast
Software Releases
    Solution Software Releases Validated
    Solution Caveats
Site-Specific Parameters
    IP Addressing Examples
    Cisco Nexus 3048 Switches
    E-Series Storage Array
    Cisco UCS Servers and ESXi
Verification and Troubleshooting
    Sample Network Topology
    Verify Time and Reachability to Network Time Protocol Servers
    Verify Reachability to Gateway Addresses
    Verify Connectivity to Network Video Cameras
    Show Interface Command
    Verify Virtual PortChannel
    Verify Server Video Ingress Ports

    15.8 Verify Device Management Ports
    Verify Uplinks
    Verify Configured Domain Name System (DNS) Servers
    Verify Connectivity Between VMS Components
    Verify Connectivity of Client Viewing Workstations
    Performance Monitoring of ESXi
    Verify Cisco Nexus 3048 Switch Load-Balance Configuration
    Verify NTFS Cluster Size
Network and System Topology and Configuration Files
    E-Series Storage Array
    Cisco Nexus and Catalyst Switches
    Axis Virtual Camera
    Windows Server
Summary
Appendixes
    Glossary
    References
    Version History
    Authors

LIST OF TABLES

Table 1) Training offerings.
Table 2) Key characteristics of a solution target deployment.
Table 3) Best practice network design concepts.
Table 4) Design checklist components.
Table 5) Usable capacity by RAID level.
Table 6) E-Series disk shelves for video surveillance deployments.
Table 7) Multiuse project.
Table 8) Data rate and storage per camera.
Table 9) Camera assignment per server.
Table 10) Sizing solution.
Table 11) E-Series controllers and disk shelves.
Table 12) Storage array global parameters.
Table 13) Parameters specific to volume and volume group.
Table 14) Genetec Omnicast version 4.8 validation.

Table 15) Software releases validated.
Table 16) Details for sample E-Series storage configuration.
Table 17) Server and VM naming.
Table 18) Logical view of volume and LUNs for mapping hosts.

LIST OF FIGURES

Figure 1) Solution component overview.
Figure 2) Recording server logical topology.
Figure 3) Typical resolutions: images to scale between resolutions.
Figure 4) E-Series for video surveillance.
Figure 5) E-Series disk structure.
Figure 6) DDP usable capacity.
Figure 7) High-availability design.
Figure 8) Architectural topology overview.
Figure 9) Cisco Nexus 3048 topology overview.
Figure 10) VMware vSphere networking configuration.
Figure 11) Uplink connectivity (layer 2).
Figure 12) Uplink connectivity (layer 3).
Figure 13) System requirements.
Figure 14) Example of throughput versus retention.
Figure 15) Axis design tool.
Figure 16) Video stream settings.
Figure 17) Daylight versus nighttime data rate.
Figure 18) OnSSI Ocularis storage configuration.
Figure 19) 64-camera transition from 1x to 16x.
Figure 20) OnSSI Ocularis ES recording and archiving configuration.
Figure 21) Sizing fundamentals.
Figure 22) Axis design tool bandwidth estimate.
Figure 23) Physical and virtual machines sizing example.
Figure 24) E-Series raw storage capacity.
Figure 25) E2600 hardware and software components.
Figure 26) E5400 hardware and software components.
Figure 27) OnSSI and Milestone virtual machine layout.
Figure 28) CPU and memory usage.
Figure 29) VMkernel port management network.
Figure 30) Server management network.
Figure 31) Video ingress network.
Figure 32) vSwitch NIC teaming load balancing.
Figure 33) Video surveillance uplinks.

Figure 34) Performance monitor output from IOMETER test.
Figure 35) Recording and viewing workload.
Figure 36) Storage configuration RACK-SVR.
Figure 37) Write rate during archive.
Figure 38) Performance monitor while archiving.
Figure 39) Archiving with grooming.
Figure 40) Recording latency and rate for 3TB RAID.
Figure 41) Recording latency and rate for DDP archive volume.
Figure 42) Cisco Nexus 3048 switches console and management interfaces.
Figure 43) Cisco Nexus 3048 switch cabling schematic diagram.
Figure 44) Cisco UCS C220-M3 chassis.
Figure 45) E-Series controllers and management ports.
Figure 46) Volume group and volume layout used for sample storage configuration.
Figure 47) CIMC power policies.
Figure 48) esxtop network statistics.
Figure 49) Solution network and system topology.
Figure 50) Axis virtual camera.

1 Introduction

NetApp E-Series storage arrays provide performance, efficiency, reliability, and enterprise-class support for large-scale video surveillance deployments. All video surveillance management software shares the common feature of recording live video feeds to storage for subsequent replay to aid in forensic analysis or investigation of persons or events within the field of view of a single camera or group of cameras. These video feeds, generated by hundreds or thousands of cameras, are typically configured to record continuously, 24 hours per day, 7 days per week, with retention periods in the range of months to years.

1.1 Publication Scope

This document provides an introduction to video surveillance for those who sell, design, or implement such solutions based on NetApp E-Series storage. It describes the functional components required to build a video surveillance solution based on NetApp E-Series storage that can reliably record and archive video from recording servers, and it identifies the major components and features of a video surveillance system. A variety of video surveillance resources is available in Field Portal.

1.2 Audience

This publication provides guidance to physical-security integrators, video surveillance management software engineers, and the network and storage system engineers and architects responsible for integrating NetApp E-Series storage systems into existing video surveillance deployments or for designing and implementing new deployments. The content in this report is presented with the expectation that these professionals can use this information, combined with their experience and supporting documents, to build an efficient, scalable, and highly available system.

Targeted Deployments

The targeted deployments for this introduction are large (200 to 1,000 cameras or more), with retention periods of at least 30 days, and primarily use HDTV/megapixel resolution cameras.

1.3 Why E-Series?
The E-Series architecture supports block-based protocols and can process real-time video applications with high reliability, performance, and availability. For these reasons, E-Series is the preferred choice for video surveillance solutions designed to utilize NetApp storage.

Solution Benefits

NetApp E-Series provides the following benefits for large-scale video surveillance deployments:
- Intuitive management. SANtricity ES software provides a graphical representation of the E-Series storage with an easy-to-use interface.
- Ease of provisioning. All management tasks on the array are performed by SANtricity ES software without taking the array offline.
- High availability. Dual controllers provide nondisruptive controller firmware upgrades, host multipath support, and dual paths to expansion shelves.
- High performance. The E-Series controllers offer an excellent price-to-performance ratio.
- High capacity. The E5400 systems support up to 1080TB of raw capacity (using 3TB disks) in an efficient footprint.

- Drive health monitoring. The E5400 provides proactive monitoring, background repair, and extensive drive diagnostic features.
- Data integrity. Background media scans proactively check drives for defects and initiate repairs before they can cause problems.
- Data protection. The E5400 supports RAID levels 0, 1, 3, 5, 6, and 10 for volume groups, as well as Dynamic Disk Pools (DDP).
- Enterprise management. The E5400 provides a single management view of all E-Series storage systems in the management domain.

1.4 Training Offerings

A number of web-based and instructor-led training opportunities enable successful deployment of the NetApp E-Series storage array. The classes listed in the NetApp University Customer Learning Map under Storage Systems are recommended end-user training classes. Table 1 lists the trainings offered, their duration, and the mode of delivery.

Table 1) Training offerings.

Class                                                  Duration (Hours:Minutes)   Delivery
E-Series E5400 Technical Overview                      01:00                      Web-based
E-Series E2600 Technical Overview                      01:00                      Web-based
NetApp E-Series Hardware Architecture and Configuration 00:45                     Web-based
Configuring NetApp E-Series Storage Systems            24:00                      Instructor-led
Maintaining NetApp E-Series Storage Systems            16:00                      Instructor-led

2 Overview and Best Practices

The physical-security market is in a transition period in which existing analog-based video surveillance systems are being replaced by network-based digital video surveillance equipment. IP technology is becoming the preferred choice for new installations over conventional closed-circuit television analog systems. This trend benefits end customers by addressing their physical-security requirements with systems that offer more features at a lower cost.
This section provides an overview of:
- Video surveillance market
- Surveillance cameras
- Retention periods
- Converged networks
- Standards-based open architectures
- Solution components
- Video management system software
- Solution characteristics
- Best practice guidelines

2.1 Video Surveillance Market

The video surveillance market is characterized by several vertical markets at different stages of adoption. Gaming, manufacturing, transportation, education, and government/city surveillance are strong markets and have more aggressively implemented network-based digital video surveillance. Large enterprise manufacturing, service companies, and retail deployments lag, due in part to the physical dispersion of plants and facilities and the bandwidth requirements of networked video.

Growth expectations for the industry, gleaned from financial reports of leading hardware and software suppliers of networked video systems, are estimated at approximately 25% per year. Estimates for retail deployments indicate storage is approximately 30% of the installation cost, with network video cameras and their installation at 25%. Servers, networking, and video management software compose the remainder. The market is strong and has good growth potential.

2.2 Surveillance Cameras

Networked video surveillance cameras that offer more than one megapixel of resolution are becoming widely adopted because they offer at least four times the resolution of a standard-definition (4CIF) camera. Television broadcasting in high-definition television (HDTV) resolution has changed end-user perception, and physical-security managers are demanding the image clarity and higher resolution that HDTV/megapixel cameras provide.

It is important for physical-security integrators to manage end-user expectations. Even with the trend toward better resolution, lens quality, sharpness of focus, and lighting play a major role in determining image quality. In short, the increased resolution of networked video surveillance cameras contributes directly to an increased need for scalable storage.

2.3 Retention Periods

The retention period is the length of time that video is retained on storage for viewing and analysis.
This parameter is regulated by a government agency, such as the State of Nevada Gaming Control Board; by corporate policy; or by the necessities of cost and the availability of disk space. Typical retention periods range from a minimum of 7 days to the more typical 30 days, or to several months or years in some cases. Physical-security managers generally prefer the longest retention period possible given efficient and cost-effective storage.

2.4 Converged Networks

Just as IP telephony deployments have moved from disparate networks to a common IP network, the surveillance industry is also moving to a converged IP network. Modern physical-security deployments are more than simple IP-based cameras. Most video management software also supports access control systems and integrates video and access control events. Building-management systems and energy-management systems are also internetworked and might generate alarms for abnormal temperature changes or when sensors detect water infiltration.

Although many deployments use dedicated access-layer Ethernet switches to support networked video cameras, switch selection should align with corporate standards for Ethernet LAN switching to leverage the support and expertise of the network management staff. At some point in the network topology, the network devices will be interconnected, whether a fully converged network of voice, video, and data is implemented or some physical segmentation is present.

2.5 Standards-Based Open Architectures

There are two competing video surveillance standards organizations: the Physical Security Interoperability Alliance and the Open Network Video Interface Forum, both of which promote standards-based information exchange between networked video devices. The standards address concepts such as device discovery, media streaming, and exchange of metadata. Implementation of these standards facilitates the integration of video management software, cameras, and other IP-based network devices sourced from different manufacturers.

2.6 Solution Components

The typical video surveillance deployment is composed of:
- Network video cameras
- IP network infrastructure
- Servers and video management software
- Viewing workstations and other mobile viewing devices
- Storage

These components are shown in Figure 1.

Figure 1) Solution component overview.

In proprietary systems, all components are sourced from a single manufacturer. Open platform systems allow the physical-security integrator to select the IP cameras, network routers and switches, servers and workstations, video management software, and storage that provide the best price, performance, and reliability to meet the specifications of the end user.

The physical-security integrator might standardize on servers and workstations, network equipment, and storage for the majority of its business opportunities. However, networked video cameras and video management software are often selected based on end-customer requirements. Typically the majority of cameras are from a primary vendor, but it is common to have cameras from several vendors implemented in a single deployment. Additionally, analog-to-digital encoders may be used to include legacy analog cameras. It is uncommon to see more than one video management software package implemented in a single deployment. It is important for the video management software and the storage array to work together seamlessly.

2.7 Video Management System Software

This document describes video surveillance solutions based on open-platform video management software.
For example, both Milestone XProtect Corporate and OnSSI Ocularis ES are certified for use on both the E5460 and E2660 storage platforms. NetApp has worked with other surveillance partners as well

12 (see this table); for example, Genetec Omnicast and Verint Nextiva are certified for use with the E5460 storage platform. 2.8 Deployment Characteristics This solution target deployment is characterized by the key items described in Table 2. Table 2) Key characteristics of a solution target deployment. Element High camera counts Long retention periods Description Rack space savings of up to 60% over competitive offerings can be achieved because of the maximum storage density of the E-Series using the 60-drive 4U disk shelves. Video can be maintained for months to years using the current 3TB or 4TB NL-SAS drives, with higher TB drives available as the technology matures, and the E-Series, combined with the video grooming technology of both Milestone and OnSSI recording servers. HDTV/megapixel deployments The solution is ideally suited for the increased storage demands of HDTV and megapixel camera deployments because of the storage density and performance of the E-Series. High availability NetApp validation testing Ease of use High performance Serviceability Data protection Drive health monitoring A deployment should be designed and validated to provide high availability at the application, network, and storage system levels. Fault tolerance is a key component of all video surveillance solutions. Solutions have been validated with several video management system software offerings, the Axis Communications megapixel network video cameras, and the Axis virtual camera simulator. This validation incorporated thousands of video feeds in the recording servers of NetApp s video surveillance system technology partners. Frame rates up to 30 fps from HDTV 720p and 1080p validate the performance of the solution. The SANtricity ES management component provides an enterprise view of all the storage arrays in the domain. 
Management of the arrays is not limited to the local network; storage arrays can be managed from one or more workstations with IP connectivity to the management interfaces of the arrays. This validation testing demonstrates that the E-Series has performance capabilities to support the requirements of video surveillance workloads. The throughput of the E-Series controllers is not the limiting factor in typical deployments. Controller firmware can be upgraded without taking the storage array offline: a feature of the E-Series duplex controllers. Additionally, power supplies, cooling fans, and disk drives can all be replaced without system downtime. Although RAID 5 is typically deployed in the industry, the E-Series supports DDP and RAID levels 0, 1, 3, 5, 6, and 10. The health of the individual disk drives is monitored, and problems can be identified before a hard drive failure. When hard drive failures occur, the system incorporates automatic drive failover and detection and rebuilds using global hot spare drives or available capacity in a DDP. 12 Video Surveillance Solutions Using NetApp E-Series Storage

2.9 Best Practice Guidelines

The following list presents general best practice guidelines for video surveillance solutions.

Number of cameras per recording server: The number of cameras supported per server is primarily based on the aggregate data rate of the configured cameras. However, features such as server-side motion detection might substantially decrease the number of cameras per server. In general, each recording server supports from 100Mbps to 600Mbps of video ingress.

Number of virtual machines per physical server: As a general rule, each physical machine can support three to four virtual machines.

Implement Network Time Protocol (NTP): An accurate time source is critical for the proper functioning of all video management applications. Synchronize all components (including IP cameras) with several accurate and reliable NTP sources.

Provision hot spare drives: Disk drives will fail over time. Provision the recommended hot spare coverage, and immediately replace failed drives.

Monitor the operational state of the storage array: The SANtricity ES Enterprise Management window provides an overview of the operational health of all storage arrays in the domain. Address all nonoptimal array conditions before they become critical problems.

Use the recovery guru: Refer to the SANtricity ES recovery guru to resolve reported problems.

Provision adequate reserve capacity: As a general rule, size the system with 20% to 30% reserve capacity for the target retention period. This provides increased capacity to address future requirements.

Allow SANtricity ES to automatically select drives for volume groups: The system will attempt to provide both drawer and shelf loss protection if possible.

Verify equal distribution of volumes across controllers: Verify that volumes are on the preferred owner for optimal balanced performance following storage array service or outages.

Implement recommended performance tuning options: Verify that all recommended performance tuning parameters have been implemented.

Conduct a network assessment prior to implementation: Recording servers can only archive video they receive; any network impairment between cameras and recording servers results in lost video. Verify adequate bandwidth with low packet loss and reasonable latency for transporting IP video. Third-party vendors also provide these service offerings.

Verify that all components are operational: This validated design implements redundancy for high availability. While implementing the system, verify that all redundant network paths, power supplies, fans, and so on are operational.

Implement RAID 6 when feasible: RAID 6 provides an extra measure of protection over RAID 5: two parity disks rather than one.

Follow proper electrostatic discharge (ESD) protocol: ESD-related component degradation might affect the long-term reliability of the system and might not manifest as a hard outage for months or years of service.
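The reserve-capacity guideline can be folded directly into a first-pass capacity estimate: continuous 24/7 recording multiplies camera count, average data rate, and retention period, plus the recommended 20% to 30% reserve. The sketch below is illustrative only; the camera count and per-camera data rate are hypothetical inputs, and a constant average bit rate (decimal terabytes) is assumed.

```python
def required_capacity_tb(cameras, mbps_per_camera, retention_days, reserve=0.25):
    """Estimate usable capacity (decimal TB) for continuous 24/7 recording.

    mbps_per_camera: average stream data rate in megabits per second.
    reserve: fractional reserve capacity (the guidelines suggest 20-30%).
    """
    seconds = retention_days * 24 * 3600
    terabytes = cameras * mbps_per_camera * seconds / 8 / 1_000_000  # Mb -> TB
    return terabytes * (1 + reserve)

# Hypothetical example: 200 HDTV cameras at 4Mbps, 30-day retention, 25% reserve
print(round(required_capacity_tb(200, 4.0, 30), 1))  # 324.0
```

Real deployments should replace the hypothetical inputs with measured per-camera data rates (which vary with scene activity, daylight, and codec settings) before selecting a configuration.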

3 Solution Components

This chapter summarizes the overall architecture of a typical video surveillance deployment. It describes both the target deployment model and the individual components. The following concepts are discussed in this chapter:
- Deployment models
- Network video cameras
- IP network
- Video management software
- Viewing workstation
- Video recording server
- Storage

3.1 Deployment Models

IP network-based video surveillance deployments are characterized by two deployment models:
- Cameras streaming video to recording servers
- Cameras recording directly to storage

Implementations of cameras recording directly to storage include Bosch (iSCSI), MOBOTIX (NFS/CIFS), and IQinVision (NFS/CIFS). These implementations may have a server-based management platform for the control plane, but the media plane runs directly from camera to storage. This deployment model is not discussed in this document. For more information, visit the Bosch Security Systems website.

The target deployment focus for this document is the camera-to-recording-server model. In this model, the recording servers have a control plane to the IP cameras and, through the media plane, receive one or more video feeds over the IP network as unicast and/or multicast packets. The media stream may be connectionless (H.264/UDP/RTP) or connection oriented (MJPEG/TCP, or H.264/RTP and RTSP interleaved over TCP). The recording server model is the more common of the two models and is supported by a wide array of open-system video management software vendors. A high-level diagram of the logical topology is shown in Figure 2.

Figure 2) Recording server logical topology.

The video management software market is predominantly based on Microsoft Windows Server 2008 R2 or later releases. Most software vendors support both NAS (NFS/CIFS) and SAN (SCSI-based block protocols), provided there is acceptable read and write throughput.
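For the connectionless (H.264/UDP/RTP) media plane, lost packets on the camera-to-server path appear as gaps in the 16-bit RTP sequence number, which wraps at 65536. A minimal sketch of gap counting follows, assuming in-order delivery; reordered or duplicate packets are not handled here.

```python
def missing_packets(seq_numbers):
    """Count gaps in a stream of RTP sequence numbers (16-bit, wraps at 65536).

    A jump of more than 1 between consecutive packets indicates lost packets.
    Assumes in-order delivery; reordering and duplicates are ignored.
    """
    lost = 0
    for prev, cur in zip(seq_numbers, seq_numbers[1:]):
        delta = (cur - prev) % 65536  # modulo handles the 16-bit wraparound
        if delta > 1:
            lost += delta - 1
    return lost

print(missing_packets([65533, 65534, 65535, 0, 1]))  # 0: clean wraparound
print(missing_packets([10, 11, 14, 15]))             # 2: packets 12 and 13 lost
```

A VMS recording server performs an equivalent check internally; the sketch simply shows why a sequence-number gap maps directly to a count of lost video packets.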

3.2 Network Video Cameras

IP surveillance cameras generate video feeds for both live viewing and video archiving by the video recording server. Most networked video cameras run a subset of the Linux operating system and implement TCP/IP services such as HTTP/HTTPS, SMTP, SNMP, FTP, Telnet, and so on. Cameras increasingly include local storage, either as internal flash memory or through the insertion of a Secure Digital (SD) nonvolatile memory card.

Networked video cameras are machine-to-machine (M2M) endpoints under the control of the recording server, which issues commands and responses through a combination of HTTP and Real Time Streaming Protocol (RTSP). Initial configuration consists of assigning IP addresses; configuring NTP servers and the local time zone; entering the camera name and descriptive information on the video overlay; and adjusting physical characteristics such as focus, white balance, and color correction.

Network video camera manufacturers design their cameras for ease of installation to reduce the implementation costs of physical-security integrators. Features such as auto back focus and Power over Ethernet (PoE) provide installation efficiency.

Networked video cameras support a wide range of resolutions; the most common are standard-definition (SD) CIF, HDTV, and megapixel. Both HDTV resolutions (1920x1080 and 1280x720) are megapixel resolutions, but megapixel resolutions are not necessarily HDTV formats. Typical resolutions are shown in Figure 3.

Figure 3) Typical resolutions: images to scale between resolutions.
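The resolution comparison comes down to pixel counts, and the pixel count drives the storage demand discussed throughout this report. A quick calculation illustrates the difference; the 4CIF dimensions below assume the NTSC variant (704x480; PAL 4CIF is 704x576).

```python
# Pixel dimensions; 4CIF assumes the NTSC variant
RESOLUTIONS = {
    "4CIF (SD)":  (704, 480),
    "HDTV 720p":  (1280, 720),
    "HDTV 1080p": (1920, 1080),
}

for name, (w, h) in RESOLUTIONS.items():
    print(f"{name}: {w * h} pixels ({w * h / 1e6:.2f} MP)")
```

A 1080p frame carries roughly six times the pixels of a 4CIF frame, consistent with the earlier statement that megapixel cameras offer at least four times the resolution of standard definition.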

Because HDTV and megapixel cameras generate larger volumes of video archive data compared to standard definition, solutions based on NetApp E-Series storage are typically targeted at HDTV/megapixel deployments. Axis Communications is a market-leading networked video camera manufacturer and a NetApp partner. Other predominant manufacturers include IQinVision, Arecont, Sony, Pelco, and Panasonic.

3.3 IP Network

Video surveillance deployments require a network infrastructure that addresses these requirements:
- Provides sufficient available capacity (bandwidth) to transport video
- Exhibits very low or no loss of IP video packets
- Features network latency within the range suitable for the transport protocol (TCP or UDP) of the video feed
- Provides high availability through network redundancy and best practices in network design
- Meets network security and services requirements

Video may be transported between endpoints using either UDP or TCP. Image quality problems (loss of frames) can occur with both transport methods. Although TCP is a connection-oriented protocol, TCP transport is the first to give up its bandwidth during congestion, and real-time traffic such as video might arrive too late and need to be discarded by the receiver because the playout time has passed.

Although IP network-based video surveillance deployments share many of the same service-level agreements (SLAs) as voice over IP (VoIP), the bandwidth requirements of video are substantially higher than those of VoIP. Additionally, each network camera streams video over the network constantly (24/7), whereas an IP phone uses few network resources unless there is an active call. Implementing network-based video on an existing network requires network quality of service (QoS) for data, VoIP, and video.
Regardless of whether a physically separate network is implemented for video surveillance or video is converged on an existing network infrastructure, the physical-security department or integrator must work with the IT department to implement network equipment consistent with the existing infrastructure. Leading network vendors, as well as leading integrators offering voice and video network implementation services, can assist with network readiness assessments for IP video surveillance deployments.

3.4 Video Management Software

The VMS supported with video surveillance solutions is a combination of internally NetApp-tested and partner self-certified software. The internally tested VMS applications are Genetec Omnicast, OnSSI Ocularis ES, and Milestone XProtect Corporate. Genetec has a formal documented certification procedure. Additional testing and validation information is available here.

3.5 Viewing Workstation

One or more workstations capable of viewing live or archived video are a basic requirement of any deployment. The workstation must meet or exceed the hardware specifications of the VMS. Viewing video at higher resolutions and frame rates typically requires a high-end workstation with a gaming-class video card. Not implementing the minimum hardware required for a viewing workstation is a common deployment mistake and leads to end-user satisfaction issues.

Low-resolution video might be viewable on laptops or smartphone applications when mobile or remote access to the video stream is more important than displaying the highest resolution. For example, Milestone XProtect Mobile is a free application for smartphones and tablets that works with XProtect video management software.

3.6 Video Recording Server

The video recording server represents one or more instances of the hardware and software used to record live video to the storage array. The software can run on a physical machine or as a guest on a virtual machine. The guest virtual machine must have the same virtual memory and virtual CPU resources as specified by the video management system software requirements for a physical machine. The physical machine must include, at a minimum, one Gigabit Ethernet (GbE) interface for video ingress from network video cameras and either a dual-port Fibre Channel host bus adapter (HBA), dual-port SAS HBAs, or dual Gigabit/10 Gigabit Ethernet (10GbE) interfaces for connectivity to the storage array.

The number of networked video cameras per recording server and the resulting data rate are determined by the architecture and best practices documented by the VMS provider. The server must meet or exceed the hardware specifications of the VMS. As a rule of thumb, the amount of video any individual server can process ranges from 100Mbps to 600Mbps.

Video loss can occur between the network camera and the recording server or between the recording server and the storage array. On ingress, missing packets can be detected by gaps in the RTP sequence numbers. On egress, missing packets cause the video management server software to log archive-queue-full errors, media-overflow errors, or similar warnings; it might also display the number of records queued for I/O.

3.7 Storage

NetApp E-Series high-performance storage systems support the following block-based storage area network (SAN) protocols:
- E5400: Fibre Channel, InfiniBand, iSCSI (10Gbps), and SAS
- E2600: SAS, Fibre Channel, and iSCSI (1/10Gbps)

Video surveillance solutions have been validated with SAS host connectivity on the E2660, and the E5460 has been validated with Fibre Channel connectivity. All components on both storage array models are redundant, providing automated path failover.
Online administration is accomplished through the SANtricity ES management client. The E-Series is ideally suited for video surveillance archiving because it incorporates:

High throughput. Up to 24Gbps for the E5400 controller.
Space efficiency. 60 drives in 4 rack units (240TB with 4TB drives) and up to 1.48PB in 24 rack units.
Reliability. Fault tolerance and redundancy are built in; all components are hot swappable.
Maintainability. Firmware updates can be applied to one controller while the second controller handles all I/O.

The E-Series scales from one 4-RU shelf to up to six 4-RU shelves. Deployments can encompass from hundreds to thousands of cameras, depending on the camera data rate, retention period, and available free space for reserve capacity. The breadth of the solution is illustrated in Figure 4.
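The interaction of camera count, data rate, retention period, and reserve capacity can be sketched as back-of-the-envelope arithmetic. The 10% reserve default below is an assumption for illustration, not a NetApp sizing rule:

```python
def required_capacity_tb(cameras, mbps_per_camera, retention_days, reserve_pct=10):
    """Estimate raw archive capacity (decimal TB) for continuous recording.

    Capacity = cameras x per-camera data rate x retention period, plus a
    reserve-capacity allowance for headroom (assumed percentage).
    """
    seconds = retention_days * 24 * 3600
    bytes_total = cameras * (mbps_per_camera * 1e6 / 8) * seconds
    tb = bytes_total / 1e12
    return tb * (1 + reserve_pct / 100)

# 500 cameras at 4Mbps each, 30-day retention, 10% reserve
print(round(required_capacity_tb(500, 4.0, 30), 1))   # ~712.8 TB
```

Doubling either the per-camera bit rate or the retention period doubles the required capacity, which is why megapixel resolutions and long retention dominate E-Series sizing.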

Figure 4) E-Series for video surveillance.

The target for E-Series storage in the physical-security market is open VMS deployments, which enable the physical-security integrator to design a solution that provides best-in-class network video cameras, servers, software, and NetApp storage. The unique functionality of the E-Series storage platform makes it an ideal solution for large video surveillance deployments that use high-resolution cameras with long retention requirements.

4 Planning and Design

This section discusses planning and design aspects the physical-security integrator must consider when implementing an E-Series storage array in a video surveillance deployment.

4.1 Virtualization of Servers

Server virtualization is widely accepted in the enterprise data center because it provides logical segmentation of servers that was previously accomplished by physical segmentation. Data center servers that are not constrained by resource consumption (memory or CPU) are ideal candidates for virtualization. Video recording servers are not prime candidates for virtualization because the functions of ingesting video feeds from possibly hundreds of network video cameras, streaming live or archived video to viewing workstations, executing analytic functions (such as motion detection), and writing the video to disk place high demands on the resources of the server. Advances in multicore CPUs, however, have made it possible to implement video recording servers in a virtual environment.

Some physical-security integrators prefer to deploy relatively inexpensive 1RU servers in a nonvirtual environment to eliminate the costs of purchasing, installing, and maintaining a hypervisor. A low-end 1RU server with a 2GHz quad-core CPU, 4GB RAM, and two GbE interfaces meets the recommended recording server performance specifications of many open video management systems. This class of system is capable of supporting approximately 64 to 128 HDTV/megapixel cameras in some deployments.
This deployment model is particularly attractive for IP SAN/iSCSI deployments. One network interface can be used for video ingress, and the second interface provides connectivity to the storage array. Deployments that require the higher performance characteristics of the E5460 controller and a dual-port Fibre Channel host bus adapter (FC HBA) are more likely candidates for implementing the recording server component as virtual machines on a higher performance 1RU, 2RU, or 4RU server. In this configuration, the FC HBA is shared by all the virtual machines on the physical chassis. As an example, the E5460 can be attached to the server FC HBA through the native multipath drivers of VMware ESXi 5.x. Raw device mapping (RDM) is used to present one or more volumes as logical unit numbers (LUNs) directly to the recording server virtual machine. A four-port GbE adapter can be defined in a PortChannel configuration to the network switch, or a single 10GbE interface can be installed. Four or more recording server virtual machines per physical server can be supported in this configuration. Virtualization is an ideal choice to implement high availability and excellent throughput for recording servers, while reducing the number of physical machines that must be deployed.

4.2 File System

The majority of open-platform VMS solutions use Windows Server 2008 R2 as the operating system and the NTFS file system with an allocation unit size of 64kB. Parallel file systems such as StorNext or Lustre are not typically deployed for video surveillance. Some VMS applications implement a tiered approach to storage, allowing the VMS administrator to define multistage storage architectures. As video archive files are moved from one level of the hierarchy to another, grooming to reduce the frame rate is an option. Encryption of the video archives might also be an option. The grooming and encryption features, however, affect both I/O and CPU performance.
If grooming is configured to move files from one volume (LUN) to a second, files must be read from the source LUN, groomed, and written to the target volume (LUN). The effect on performance must be considered when implementing tiered storage.

4.3 Storage Planning with E-Series

Each video recording server requires one or more volumes (LUNs) defined to the operating system for archiving video files. The SANtricity ES array management subsystem is used to configure the E-Series storage array. Individual hard disks are allocated to a volume group or DDP using the Create Volume Group/Create Disk Pool wizard. The minimum number of disks in a DDP is 11, whereas the minimum number of disks for a volume group depends on the RAID level. The maximum number of disks for RAID 5 or RAID 6 is 30. The limit for DDP is the total population of physical disk drives in the array. During the volume group definition step, the RAID level for all physical disks assigned to the volume group is selected. The supported levels for traditional volume groups are RAID 0, 1, 10, 3, 5, and 6, whereas DDP uses RAID 6 stripes allocated over 10 of the drives in the pool. Each physical disk has a 512MB area for storing the array configuration database and optional space for dynamically changing the segment size. Individual volumes (LUNs) are created and mapped to a host following the volume group or DDP definition. Each volume can be individually configured for segment size, modification priority, cache settings, and media scan frequency. The logical definitions of volumes, volume groups, and disk pools are shown in Figure 5.
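The drive-count rules above can be expressed as a small validation sketch. The per-RAID-level minimum drive counts are assumptions based on common RAID requirements, not taken from SANtricity documentation:

```python
# Illustrative check of the E-Series drive-count rules described above.
RAID_MIN = {"0": 1, "1": 2, "10": 4, "3": 3, "5": 3, "6": 5}  # assumed minimums
RAID56_MAX = 30    # maximum drives in a RAID 5/6 volume group
DDP_MIN = 11       # minimum drives in a Dynamic Disk Pool

def validate_group(kind, raid_level=None, drives=0):
    """Return True if the drive count is valid for a 'ddp' or 'vg' definition."""
    if kind == "ddp":
        return drives >= DDP_MIN
    if raid_level not in RAID_MIN:
        raise ValueError(f"unsupported RAID level: {raid_level}")
    if drives < RAID_MIN[raid_level]:
        return False
    if raid_level in ("5", "6") and drives > RAID56_MAX:
        return False
    return True

print(validate_group("ddp", drives=11))        # True
print(validate_group("vg", "6", drives=31))    # False: exceeds the 30-drive limit
```

The DDP upper bound is omitted deliberately: the text states the pool limit is the total drive population of the array.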

Figure 5) E-Series disk structure.

The number of physical disks per volume group or disk pool and the number of volumes per group or pool are determined by the performance and sizing requirements of the video recording server and the application software.

RAID Levels

For video surveillance deployments, RAID 5/RAID 6 or RAID 10 is commonly deployed in the industry. The Nevada Gaming Commission standards specify that the storage array must not lose data in the event of the failure of a single component. Although RAID 6 provides better fault tolerance because it can tolerate two disk failures, RAID 5 is often deployed instead because of lower cost while still adhering to the standards. RAID 10 is typically used for best read performance when combined with solid-state disks (SSDs) or disks with higher (15K) rotational speed. RAID 5/RAID 6 is used for best write performance. On the E-Series, RAID 10 is implemented by selecting RAID 1 with four or more drives.

Some VMS vendors recommend using a combination of RAID 10 and RAID 5 in gaming deployments where a high volume of forensic analysis occurs on the most recent minutes or hours of video archives. These designs use RAID 10 for the most recent archive and then, with the tiered storage feature, move video to a RAID 5 volume group for the duration of the retention period. This design consideration might not be required in environments that have infrequent forensic analysis or where the performance level is such that the RAID 5 or RAID 6 volume group provides acceptable read performance. The education market is one vertical where reviewing archived video occurs only if an incident (for example, vandalism or an altercation between students) warrants analysis of the video.

Hot Spares

Hot spares are disks that remain idle until needed. Hot spares are used in place of a failed drive, allowing reconstruction of the data and parity across the drives in the volume group.
Video surveillance performance is often measured during a disk rebuild because the system is under both read and write I/O during the rebuild process. NetApp recommends using a minimum of one hot spare for every 30 drives in the system. The amount of time required to rebuild to a hot spare drive depends on the size of the drive and the number of drives in the volume group and might take hours or days. DDP is a means to address the performance penalty and the length of time required to rebuild a failed drive. There are no idle hot spares when DDP is used; spare capacity is incorporated into the pool.
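The one-spare-per-30-drives guideline translates directly into a sizing one-liner:

```python
import math

def hot_spares_needed(total_drives, drives_per_spare=30):
    """Minimum hot spares per the guideline of one spare per 30 drives."""
    return math.ceil(total_drives / drives_per_spare)

print(hot_spares_needed(60))   # 2 spares for a fully populated 60-drive shelf
print(hot_spares_needed(61))   # 3: a partial group of drives still needs a spare
```

Note that spare drives are in addition to the drives counted toward usable archive capacity.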

Dynamic Disk Pools

DDP is a feature available on the E-Series that maintains a consistent level of performance delivery even in the event of drive failure and reconstruction. The performance drop is minimized during rebuild, and the rebuild completes more quickly than with a traditional RAID rebuild. Because of the shorter rebuild time with DDP, the exposure to data loss from multiple drive failures is minimized. A single pool may be defined that includes all disks in the system, or multiple pools may be defined. The minimum number of disk drives in a DDP is 11. Data is striped over 10 drives in the pool, and an extra drive is needed to provide redundancy across all drives in the pool. DDP uses RAID 6 as the RAID engine. The storage administrator may configure a mixture of traditional volumes and DDP. Traditional volumes with RAID 10 may be created for maximum performance, and DDP can be configured for capacity volumes. NetApp recommends pools of 30 to 60 drives when using DDP in video surveillance applications. As an example, the usable capacity for volume sizes commonly deployed for video recording servers using 3TB drives is shown in Figure 6.

Figure 6) DDP usable capacity.

For video surveillance solutions, smaller single-volume disk pools are optimal for bandwidth and provide performance comparable to that of RAID 6, with rebuilds that are twice as fast. This matters because video surveillance management software performance testing is measured when the system has filled the volume to capacity and is in file-deletion mode; DDP performance does not degrade like RAID 6 and is more consistent throughout the lifecycle.

4.4 Workflow

The performance of disk systems is characterized by I/O operations per second (IOPS) and/or throughput in megabytes per second. Network performance is measured in packets per second and throughput in megabits per second. Optimizing IOPS is important when the disk array is used for small random I/O operations from multiple applications.
Network packet-per-second performance is usually measured with small (64-byte) packets. However, video surveillance deployments are more concerned with throughput performance than with IOPS. Network video cameras generate large IP packets to the recording servers, which write relatively large records to the storage array. Because the video ingress to the recording servers is over an IP network and the data rate is typically calculated in megabits per second (Mbps) for IP networks, many of the tables in this document list Mbps rather than megabytes per second (MBps).
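Because camera data rates are quoted in Mbps while storage workloads are reported in MBps and IOPS, a pair of conversion helpers keeps the units straight (illustrative only):

```python
def mbps_to_mbytes_per_s(mbps):
    """Convert a network data rate in megabits/s to storage megabytes/s."""
    return mbps / 8

def avg_io_size_kb(mbytes_per_s, iops):
    """Average I/O size (kB) implied by a throughput and an I/O rate."""
    return mbytes_per_s * 1000 / iops

print(mbps_to_mbytes_per_s(160))       # 20.0 MBps of writes from 160Mbps ingress
print(round(avg_io_size_kb(20, 44)))   # ~455kB per write at 44 IOPS
```

The factor of 8 between Mbps and MBps is the most common source of confusion when comparing switch-port statistics with SANtricity workload reports.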

4.5 Deployment Example

Examining the characteristics of an actual deployment helps put the workflow characteristics in perspective. A single recording server manages 46 network video cameras configured for continuous recording. These cameras are configured for H.264/RTP/UDP using HDTV 720p resolution at 30 frames per second. The network switch port interface statistics for the recording server report that the data rate to the server is 11,000 packets per second at 118Mbps. From this information, the average packet size to the server is calculated at ((118M / 8 bits) / 11,000), or approximately 1,341 bytes per packet. The workload to the volume (LUN) defined to the recording server is reported by SANtricity ES at approximately 20MBps at 44 I/Os per second. That rate is equivalent to 160Mbps with an average I/O size of approximately 465kB. Video management systems commonly use a record size of either 256kB or 512kB. This sample recording server receives 11 IP packets every millisecond (ms) and generates a write operation to the storage array approximately every 22ms.

4.6 I/O Characteristics

The video surveillance workload in many deployments is characterized as 99% write workload and 1% read workload. In these deployments video is archived to disk either continuously or based on motion detection and is not reviewed unless there is an incident that requires analysis. The education market is one example where archives are viewed infrequently. The write workload is typically a constant workload per volume (LUN) based on the number of cameras per server. Read workload is based on the frequency and number of viewing stations reviewing archived video. Most video management systems implement analysis tools that enable the operator to fast-forward video. There are also features to intelligently search archived video for motion or objects in a particular area of the camera's field of view. These search utilities might examine all archived video between two time periods or every tenth frame.
Additionally, video archives from multiple cameras can be time-of-day synchronized and fast-forwarded. This read workload might generate I/O requests at many times the rate at which the video was originally written to disk. Write workload is relatively easy to characterize, whereas read workload is less predictable. The architecture and configuration of the video management system also affect the workload to the storage array. Systems that implement tiered storage schedule a copy from one volume or directory to another at a recurring interval (such as hourly or daily), and during the copy function the IOPS of the storage array might increase by a factor of eight or more. This function generates both read and write I/O. When examining workflow and performance data for video surveillance deployments, first measure the baseline write performance and then consider the frequency with which video is read or copied following the initial write.

4.7 High Availability

Real-time applications such as video present a challenge for physical-security integrators in that any outage or failure between a network video camera and the storage system means the record of events is lost and cannot be recovered. Implementing high availability for video surveillance begins with considering camera placement, the network infrastructure, server and video management software redundancy, and finally the storage array. These components are shown in Figure 7.

Figure 7) High-availability design.

For areas of critical importance, multiple cameras with overlapping fields of view should be implemented to maintain coverage in the event a single camera or access-layer switch fails. Multiple cameras covering the critical area must be connected to separate access-layer switches with redundant uplinks to the core/distribution-layer switches. The IP network must implement high-availability network design principles: rapid convergence from link and/or switch failures, deterministic traffic recovery, and sufficient capacity to adequately service traffic during failures. VMS features that use local storage in the network video camera, failover recording servers, and a redundant management server protect the availability of the video archives. Hypervisors such as VMware ESXi have native support for link aggregation. For nonvirtual deployments, the Microsoft failover cluster virtual adapter for Windows Server 2008 supports link aggregation.

For Fibre Channel or direct-connect SAS connectivity between the server and the E-Series, dual-port HBAs are installed to provide redundant paths to each controller. For iSCSI deployments, multiple Ethernet NICs connecting to dual IP SANs also provide high availability to the E-Series controllers. The failover drivers are at the center of providing path failure recovery between server and storage array. In general, failover drivers implement the following functions:

Identify redundant I/O paths
Reroute I/O to an alternate controller when the controller or data path fails
Check the state of paths to a controller
Provide status of the controller/bus

For Windows, the failover drivers are a combination of Microsoft MPIO plus the SANtricity ES host installation device-specific module (DSM). The E-Series supports the native multipath feature of VMware ESXi.

4.8 Multipath Overview

Hosts identify devices based on their initiator port, the target port, and the LUN number.
Hosts with redundant IP SAN interfaces (iSCSI), dual-port SAS interfaces, or dual-port HBA adapters (Fibre Channel) connected to a duplex E-Series controller have redundant paths to their LUNs. The host installation option of the SANtricity ES installation utility must be installed on the physical recording servers for Windows deployments to implement the multipath driver necessary to direct I/O through the correct path to the LUN. Windows Server running in guest virtual machines does not require the E-Series multipath drivers to be installed; the native multipath drivers for ESXi are used instead. In addition to providing multiple-path discovery and configuration, multipath drivers manage I/O load balancing across multiple paths traversing the owning controller and manage controller and path failover and failback. Using all available paths, for example, by selecting a round-robin or least-queue-depth option, is most effective for increasing the throughput between host and storage controller for relatively slower host interface connectivity. Deployments of iSCSI over GbE interfaces, for example, might see substantially greater throughput gains from load balancing across multiple paths than a deployment using a single 8Gbps Fibre Channel connection.

E-Series Certified Multipath Drivers

The E-Series certified multipath drivers for Windows Server 2008 R2 are the Windows MPIO component and the SANtricity ES host installation that loads the appropriate DSM. By default, Windows supports four paths per controller, with a maximum of 32 paths. Windows supports up to 255 volumes (LUNs) per host. For VMware, the VMware native multipathing plug-in (NMP) is certified. When running Windows Server 2008 R2 in virtual machines under VMware ESXi 5.0.0, only the SANtricity ES host utility should be installed. The SMdevices utility, installed as part of the SANtricity host utility installation, is a useful troubleshooting tool for identifying the attached storage array name and volume information. The sample topology illustrated in Figure 7 has redundant paths between the video recording server and the storage array. For Fibre Channel deployments, multiple active and standby paths might exist in the topology, depending on the number of ports in use.
For iSCSI, configuring multiple active and standby paths is a manual process in the Microsoft iSCSI initiator. To recap, SANtricity ES is installed on each Windows recording server as follows: For VMware ESXi guests, select the custom installation and install only the utilities. For nonvirtual Windows deployments, select the host installation option; the utilities and the DSM provide multipathing support for high availability.

4.9 Network Planning

Video surveillance deployments require a network infrastructure that addresses these requirements:

Provides sufficient available capacity (bandwidth) to transport video
Exhibits very low or no loss of IP video packets
Features network latency within the range suitable for the transport protocol (TCP or UDP) of the video feed
Provides high availability through network redundancy and best practices in network design
Meets the network security and services requirements

Video may be transported between endpoints using either UDP or TCP. Image quality problems (loss of frames) can occur with both transport methods. Although TCP is a connection-oriented protocol, TCP transport is the first to give up its bandwidth during congestion, and real-time traffic such as video might arrive too late and need to be discarded by the receiver because the playout time has passed. Although IP network-based video surveillance deployments share many of the same service-level agreements (SLAs) as voice over IP (VoIP), the bandwidth requirements of video are substantially higher than those of VoIP. Additionally, each network camera streams video over the network constantly (24/7), whereas an IP phone uses fewer network resources unless there is an active call. Implementing network-based video on an existing network requires network quality of service (QoS) for data, VoIP, and video.
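The first requirement, sufficient capacity, can be estimated with a simple headroom calculation. The 60% utilization ceiling below is an assumed design target for illustration, not a NetApp requirement:

```python
def max_cameras_per_link(link_gbps, camera_mbps, max_utilization=0.6):
    """How many constant-rate camera streams fit on one ingress link,
    keeping headroom below an assumed target utilization."""
    usable_mbps = link_gbps * 1000 * max_utilization
    return int(usable_mbps // camera_mbps)

print(max_cameras_per_link(1, 4.0))    # 150 cameras on a GbE link at 60%
print(max_cameras_per_link(10, 4.0))   # 1500 cameras on a 10GbE link
```

Unlike bursty data traffic, each camera contributes its full bit rate around the clock, so the per-link camera count should be planned against the constant rate, not an average.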
Regardless of whether a physically separate network is implemented for video surveillance or video is converged on an existing network infrastructure, the physical-security department or integrator must work with the IT department to implement network equipment consistent with the existing infrastructure. Leading network vendors, as well as leading integrators offering voice and video network implementation services, can assist with network readiness assessments for IP video surveillance deployments.

Networking Example

This section discusses how the network switches provide top-of-rack connectivity to the recording servers and integrate with the core/distribution-layer switches. A sample configuration built and tested in NetApp's RTP labs is used to illustrate the specifics of networking for video surveillance. A sample video surveillance solution has been validated for the E2660 deployment consisting of NetApp E-Series storage, up to four Cisco UCS C220-M3 servers, and two Cisco Nexus 3048 top-of-rack integrated layer 2/3 switches. VMware ESXi 5.1 is installed on each Cisco UCS server. Each server is configured with four virtual machines, each running Windows Server 2008 R2. The Cisco Nexus 3048 switches provide the server access-layer switching infrastructure to connect the video surveillance system to the end-customer IP network infrastructure. In most deployments, the network video cameras and viewing workstations are connected to existing or new network routers, and switches are installed as part of the physical-security deployment. The IP network is a critical component in the architecture because it provides connectivity to all key components, as shown in Figure 8.

Figure 8) Architectural topology overview.

Note: The E-Series storage arrays are connected to the network switches to provide management access for workstations running SANtricity ES, even though the host attachment is over Fibre Channel links or direct SAS attachment.

Each Cisco Nexus 3048 switch supports four 1/10Gbps SFP+ ports. Two ports on each switch are configured as 10Gbps virtual PortChannel (vPC) peer links; the two remaining ports on each switch may be used for either layer 3 (routed) or layer 2 (switched) uplinks.
The tested configuration utilized one 10Gbps SFP+ fiber uplink on each switch for uplink connectivity and high availability. The Cisco Nexus 3048 is a 1RU chassis with 48 10/100/1000Mbps RJ-45 ports to connect to data and management interfaces on servers and the E-Series management ports. There are redundant power supplies and redundant fans in the fan tray. The Cisco Nexus 3048 enables high availability through the redundant power supplies and fans, redundant uplinks, and the vPC feature. Either of the Cisco Nexus 3048 switches in the video surveillance deployment can fail or be taken out of service without disrupting the ability of the video surveillance servers to capture and record video streams from networked video cameras.

Network Interfaces

In the sample configuration tested by NetApp, each Cisco UCS C220-M3 server contains a Broadcom quad-port Ethernet adapter. The four ports are aggregated into one logical link. This link provides video ingress to the servers from the network video cameras. Two of the member links are connected to one Cisco Nexus 3048 switch, and the other two member links are connected to the second Cisco Nexus 3048 switch. The Cisco Nexus 3048 switches are configured with two vPC peer keepalive links (1Gbps) between switches and two 10Gbps vPC peer links between switches. The vPC peer keepalive links carry only control plane traffic and are used to detect a peer failure. The vPC peer links are layer 2 trunks and transport the device management VLAN traffic as well as traffic for the server PortChannel interfaces in certain failure situations. This network topology is illustrated in Figure 9.

Figure 9) Cisco Nexus 3048 topology overview.

Note: Only one server (Server 1) is shown for clarity. All servers are similarly configured. The uplink connectivity is shown later in this document.

VMware vSphere Networking Configuration

From the VMware vSphere client of the Cisco UCS C220-M3 server, the video ingress network is configured as a virtual switch with four physical adapters. This configuration sample is shown as vSwitch1 (VLAN 2020) in Figure 10.

Figure 10) VMware vSphere networking configuration.

In addition to the video ingress VLAN, a device management VLAN is configured to provide management connectivity for the E-Series management ports; a Cisco integrated management controller port for physical server management; an ESXi VMkernel management network port; and a guest operating system management network port for SSH, Linux X-terminal, or Windows remote desktop connectivity.

Uplink Connectivity (Layer 2)

As tested, the design does not require any additional Cisco NX-OS software packages for the Cisco Nexus 3048 if only layer 2 services are used. The system default (no license required) includes the features used in this solution: VLANs, IEEE 802.1Q trunking, vPC, Link Aggregation Control Protocol (LACP), Secure Shell Version 2 (SSHv2) access, and Cisco Discovery Protocol. With this option, the uplink connections between the video surveillance system and the campus core/distribution switches are configured as layer 2 (switched) PortChannel trunks. This topology is shown in Figure 11.

Figure 11) Uplink connectivity (layer 2).

With layer 2 uplinks, the end-customer core/distribution switches must be configured to provide layer 2 and layer 3 features to support a video surveillance solution. These features include:

Primary and secondary root spanning-tree bridge (rapid spanning tree protocol [RSTP])
Ethernet switch virtual interfaces (SVIs), for example, interface VLANs for the video ingress and management VLANs
Hot standby router protocol (HSRP) or virtual router redundancy protocol (VRRP) virtual IP addresses for the video ingress and management VLANs

As part of the installation and implementation process, verify the high-availability configuration of the design by alternately reloading the Cisco Nexus 3048 switches and validating connectivity and recovery.

Uplink Connectivity (Layer 3)

The end customer might require additional features not supported in the NX-OS system default (no license) and can purchase additional Cisco NX-OS software packages: the base license (N3K-C3048-BAS1K9) and the LAN enterprise license (N3K-C3048-LAN1K9). These packages include features such as IP multicast support (IP PIM-SM) and advanced layer 3 routing such as OSPFv2 or EIGRP. For example, a routed server access layer can be implemented with these optional licenses as described in the Cisco document High Availability Campus Network Design Routed Access Layer using EIGRP or OSPF. If the end customer requires layer 3 connectivity to the video surveillance deployment, the topology is as shown in Figure 12.

Figure 12) Uplink connectivity (layer 3).

Network Management Caveats

The Cisco Nexus NX-OS does not include software support for a Domain Name System (DNS) server or Dynamic Host Configuration Protocol (DHCP) server. These services must be provided by the end-customer network management systems if desired. As a best practice, the Cisco Nexus 3048 switches should be configured to log to a syslog server and respond to SNMP queries as well as send SNMP traps to a network management workstation.

Network Design Rationale

The sample Cisco Nexus switch configuration implements the best practice network design concepts described in Table 3.

Table 3) Best practice network design concepts.

VLAN security best practices. An application note provides details on VLAN security best practices; the concepts are incorporated in this design.

VLAN 1. In many deployments, VLAN 1 spans multiple switches and is not bounded by pruning from trunk ports between switches. Ports in this deployment are not assigned to VLAN 1, and the VLAN 1 SVI is shut down.

VLAN 2 (unused ports). In the sample topology, VLAN 2 is defined and configured for all unused ports on the switches. VLAN 2 is not permitted on trunk ports. If a rogue user attaches to a switch port that is unused, the connected device will not have ready access to the network outside the local switch. Additionally, it is recommended to disable unused ports.

VLAN 3 (native VLAN). VLAN 3 is designated as the native VLAN for this deployment. A native VLAN is the untagged VLAN on an 802.1Q trunked port. The native VLAN in this topology is configured only on trunked ports. VLAN 3 has no edge ports.

Design Decision: VLAN 7 device management
Explanation: This deployment uses a VLAN designated for managing the E-Series controller ports and the three management interfaces of each server. Unused ports are also configured in VLAN 7 so that service personnel can attach a laptop for initial installation and ongoing troubleshooting. This VLAN is trunked using a layer 2 uplink or by using layer 3 connectivity to the network core.

Design Decision: VLAN 58 virtual PortChannel peer keepalive VLAN
Explanation: VLAN 58 is designated as the vPC peer keepalive VLAN. A PortChannel 58 interface and an SVI are configured on both switches with two 1Gbps member ports. An IP network address is assigned to the SVI interfaces and used as the source and destination IP addresses for vPC keepalives. The switch management interface (mgmt0) is not used for vPC keepalives, allowing the end user to connect the management interface to other devices in the network topology to manage the switches out of band.

Design Decision: VLAN 2020 video ingress
Explanation: VLAN 2020 transports IP video surveillance network traffic from the video surveillance cameras to the recording servers. The video management software management server virtual machines are also assigned interfaces on this VLAN.

Design Decision: Virtual PortChannel
Explanation: Each ESXi host has four 1Gbps Ethernet links aggregated on a virtual switch and connected to a PortChannel on the 3048 switches with four member ports. Two of the four links are attached to each 3048 switch and are associated by a common vPC number. A vPC allows links that are physically connected to the two Cisco Nexus 3048 switches to appear as a single PortChannel to a third device; the third device in this deployment is the Cisco UCS C220 M3 server with a quad-port Broadcom Ethernet adapter. Additionally, if the deployment uses layer 2 uplinks, these are also configured as vPCs.

Design Decision: Virtual PortChannel peer links
Explanation: The vPC peer link is a PortChannel with two 10Gbps member interfaces, per Cisco best practices. The vPC peer link carries control traffic between the two vPC switches, as well as multicast, broadcast, and in some instances unicast traffic.

Design Decision: PortChannel load balancing
Explanation: The default PortChannel load-balancing hash uses the source and destination IP addresses to distribute traffic across the interfaces in the PortChannel. The default configuration is suitable for most deployments; the load-balancing algorithm may be changed as required.

Design Decision: Routed server access layer
Explanation: This configuration illustrates using either layer 2 or layer 3 uplinks to the network core. Layer 3 features require additional license files to be purchased and installed on each switch; vPC is part of the system default license. The base license (N3K-BAS1K9) includes limited layer 3 and IP multicast support and has some application in video surveillance deployments. The LAN enterprise license (N3K-LAN1K9) includes all the layer 3 routing features plus Virtual Routing and Forwarding lite (VRF-Lite). Installing the LAN enterprise license on the video surveillance solution switches allows a more defined demarcation between the system and the core/distribution network switches. The same VLAN numbering scheme can be used on all switches because of the layer 3 demarcation. Troubleshooting network connectivity problems might also be easier because the default gateway addresses (HSRP virtual addresses) are configured on the 3048 switches rather than on the network core/distribution switches, and the layer 2 spanning-tree domain encompasses only the two Cisco Nexus 3048 switches. As a best practice, NetApp recommends implementing a routed access layer. For more information, see High Availability Campus Network Design Routed Access Layer using EIGRP or OSPF.
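The source/destination IP hash described above can be illustrated with a small model. This is a conceptual sketch only: the actual NX-OS hash is implemented in switch hardware and differs from the function below, and all addresses are illustrative.

```python
import hashlib

def member_link(src_ip: str, dst_ip: str, num_links: int = 4) -> int:
    """Toy src/dst-IP hash: each flow deterministically pins to one member link."""
    digest = hashlib.md5(f"{src_ip}:{dst_ip}".encode()).digest()
    return digest[0] % num_links

# 64 camera-to-recording-server flows across a four-member PortChannel.
flows = [(f"10.20.20.{i}", "10.20.20.250") for i in range(1, 65)]
distribution = [0] * 4
for src, dst in flows:
    distribution[member_link(src, dst)] += 1
```

Because the hash is computed per flow, a single camera stream never exceeds the bandwidth of one member link; aggregate balance across the PortChannel improves as the number of flows grows.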

4.10 Server Planning

Physical-security integrators have traditionally viewed servers as a commodity item: deploying the lowest cost server that meets the performance requirements of the video management software is the primary design consideration. The idea behind deploying open platform systems is to allow flexibility in selecting the best component for the task, where "best" may be defined as least expensive while meeting the performance criteria. A common design point for physical-security integrators is to deploy relatively low-end 1RU recording servers without virtualization as a rack-and-stack means of cost savings. This design is most advantageous when the host interface to the storage array is a relatively inexpensive 1Gbps iSCSI connection. When dual-port Fibre Channel HBAs or dual-port direct-connect SAS host interfaces are used, the cost of the HBA, or the limits to scalability with a direct connection, precludes a rack-and-stack approach. Deploying fewer high-end servers with virtualization might be a more cost-effective choice.

Hardware Recommendations

At a minimum, the viewing workstation and recording servers must meet the minimum hardware requirements of the video management software vendor. For example, OnSSI publishes its hardware recommendations. These are usually general recommendations, for example, "dual-core Intel Xeon (quad core recommended) or Intel Core i5 or better," rather than specifying an exact model or clock rate. CPU specifications change too frequently and have far too many derivations for exhaustive testing of each model.

CPU

In validation testing, the CPUs tested have ranged from an Intel Xeon E5504 at 2GHz (two processor sockets with four cores per socket, for a total of eight cores) to an Intel Xeon E at 2.90GHz (two processor sockets with eight cores per socket, for a total of 16 cores).
Given that both of these configurations meet the minimum hardware recommendation, from a design standpoint the difference is the number of cameras that can be supported per recording server. Alternately, CPU utilization will be higher for the same number of cameras with the lower performing CPU than with the higher performing CPU. One advantage of deploying recording servers as virtual machines is the ability to take advantage of unused CPU cycles by adding recording server virtual machines to a physical machine to more fully utilize CPU capacity. Although there might be additional costs associated with licenses for the hypervisor, these costs might be offset by more efficient use of resources.

Server Manufacturer

For solution validation testing, Cisco UCS C220 M2 and M3 rack servers as well as Fujitsu PRIMERGY RX300 S6 servers have been used. One advantage of selecting a preferred system from a single manufacturer is the support synergy. Each manufacturer has its own management interface; for example, Fujitsu ServerView Remote Management (iRMC) or the Cisco Integrated Management Controller is used for management and monitoring of the system. Implementing systems from multiple vendors means additional support costs associated with learning multiple management interfaces.

Memory

In validation testing, memory is not a limiting factor. 4GB to 8GB of RAM per recording server is sufficient, as recommended by the video management software.
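A quick way to reason about consolidation is to check which resource, cores or memory, limits the number of recording-server virtual machines per host. This is a rough sketch: the 4-vCPU-per-VM figure below is an assumed example value, not a vendor recommendation, while the 8GB-per-server memory figure comes from the guidance above.

```python
def recording_vms_per_host(host_cores: int, host_ram_gb: int,
                           vcpus_per_vm: int = 4, ram_gb_per_vm: int = 8) -> int:
    """VMs that fit on one host; the smaller of the core and memory limits wins."""
    return min(host_cores // vcpus_per_vm, host_ram_gb // ram_gb_per_vm)

# A 16-core host with 64GB RAM: cores (16 // 4 = 4), not memory, set the limit.
fit = recording_vms_per_host(16, 64)
```

In practice, leave headroom for the hypervisor itself rather than packing hosts to this theoretical maximum.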

General Design Criteria

The following items represent general design criteria when selecting a server hardware platform:

- Sufficient main memory to support the virtual machine requirements (for example, 8GB RAM per virtual machine; with four virtual machines, 32GB total RAM)
- Quad-core CPU in the 2.0GHz through 2.9GHz range per recording server/virtual machine
- Integrated Ethernet adapters and PCI-based quad-port 1Gbps Ethernet or 10Gbps Ethernet for video ingress and optionally IP SAN connectivity
- Dual internal disk drives configured as a RAID 1 virtual drive (internal RAID controller) for a high-availability boot drive
- Form factor: 1RU for space savings
- Dual power supplies for redundancy
- Embedded server management to provide a remote virtual KVM and power cycle/reset capabilities

4.11 Design Checklist

To design the system properly, myriad factors must be considered to address customer requirements in an efficient and cost-effective manner. Table 4 presents some of the high-level considerations that must be examined to select the best components.

Table 4) Design checklist components.

Design Element: Aggregate video data rate
Description: The number of cameras and the resulting aggregate data rate must be determined to estimate the number of recording servers and the type and size of the storage array.

Design Element: Video management system software
Description: The architecture of the VMS determines the workload requirements of the storage array.

Design Element: End-user requirements
Description: Systems implemented for public sector deployments might have dramatically different workloads. Deployments with a high percentage of video viewing might require more servers and different volume layouts than systems with little forensic review of video.

Design Element: Local support
Description: This refers to the geographical location and local support staff. The readiness of on-site support staff might determine the number of hot spare disk drives or influence the decision to deploy traditional volume groups or DDP.

Design Element: High availability
Description: The costs associated with downtime or video loss might be more of a consideration in some deployments than in others. Implementing a highly available design mitigates outages but increases the cost and complexity of the deployment.

Design Element: Host interface considerations
Description: The number of servers required influences the choice of host interface to the storage array. Direct-connect serial-attached SCSI (SAS) provides high throughput but is limited in distance and scalability. Fibre Channel is costlier but provides high throughput and reasonable cabling flexibility. iSCSI provides acceptable throughput at low interface cost and has no practical distance limitation.

Design Element: Retention requirements
Description: The video retention policy is a key component of sizing the system and has a direct influence on the performance characteristics of the system.

Design Element: Network requirements
Description: The additional network routers and switches required to support the implementation must address the high-availability requirements, the type of host interface connectivity (IP SAN requirements), and the readiness of the existing customer network. Video that is lost between camera and server is never archived.

Design Element: Type of servers
Description: Deployments that implement recording servers in a virtual machine have different server requirements compared to deployments in which the host operating system is installed on a physical server.

5 Sizing Fundamentals

Video surveillance solutions based on NetApp E-Series provide performance, efficiency, and reliability with enterprise-class support for large-scale video surveillance deployments. The solutions can utilize the NetApp E-Series storage array in an E5460 configuration or an E2660 configuration. This section addresses sizing guidance for both deployment models.

5.1 System Requirements

The system requirements are specified in a request for proposal/quote developed by either the end customer or a physical-security consultant under contract with the end customer. The physical-security integrator must work with the physical-security manager to accurately assess specific requirements, including:

- Retention period
- Number, location, and type of cameras; resolution; frame rate; and so on
- Video management software selected
- Number of cameras per recording server
- Continuous recording or record on motion
- Frequency of viewing archived video
- Failover design requirements

These requirements have dependencies that affect the total system, as illustrated in Figure 13.

Figure 13) System requirements.
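The requirement inputs listed above can be captured as a simple record and used to derive first-order quantities such as server count and aggregate ingest rate. Field names and all values below are illustrative, not from the TR.

```python
from dataclasses import dataclass
import math

@dataclass
class SizingInputs:
    """Requirement inputs gathered before sizing (illustrative fields)."""
    retention_days: int
    camera_count: int
    avg_camera_mbps: float
    cameras_per_server: int
    continuous_recording: bool
    failover_required: bool

site = SizingInputs(retention_days=30, camera_count=128, avg_camera_mbps=2.2,
                    cameras_per_server=64, continuous_recording=True,
                    failover_required=False)

servers = math.ceil(site.camera_count / site.cameras_per_server)
aggregate_mbps = site.camera_count * site.avg_camera_mbps
```

Keeping the inputs in one structure makes it easy to rerun the sizing when a requirement, such as the retention period, changes late in the design.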

The individual requirements must be discovered, analyzed, and documented to accurately size the storage array. The following sections address each of these components, provide technical background on the options, and make recommendations based on industry best practices.

5.2 General Considerations

Sizing storage to address the video management application requirements is an exercise that balances throughput and capacity considerations. In video surveillance deployments characterized by minimal forensic analysis of archived video, capacity is the primary consideration. Recording servers configured to continuously record video exhibit a relatively deterministic I/O pattern: the arrival rate of video feeds from IP cameras is constant, and the video recording server in turn writes these streams to an archive at a consistent rate. Many video management systems write the video archive to a temporary or recording directory on the volume logical unit number (LUN) and subsequently read and write the temporary video archive to a permanent directory. Although this movement of video data files from one location to another adds a secondary I/O to the initial write, the workload characteristics can be quantified. Markets that fall into this category are secondary and higher education, enterprise, and commercial or retail deployments. Video is only retrieved and analyzed if an incident requires investigation; for example, the education market might only review video from a few cameras several times a week. The I/O is mainly writes, and there is little viewing (reads) of the archived video. The gaming market is an example where there is a high degree of forensic analysis of video archives. In these deployments, several security operators are frequently reviewing video archives to investigate suspicious activity, theft, and fraud. This activity adds a high degree of read I/O to the workload, which is exacerbated by the use of the fast-forward capabilities of the VMS.
These I/O patterns are less deterministic than the constant influx of video feeds from camera to server because the workload is a function of the number of operators conducting investigations, the number of camera archives being viewed, and the playback speed. These deployments might require the use of two volumes (LUNs) per recording server: one volume, often configured as RAID 10, for the recording directory, and a second volume, configured as RAID 5 or RAID 6, for storing the video archive to the target retention period. RAID 10 is used for the recording directory because of better assumed or observed read performance than RAID 5 or RAID 6 on the storage array. However, if the performance required by the application is achievable with a recording directory on RAID 5 or RAID 6, then this dual (tiered) volume approach adds to the cost and complexity of the implementation with no functional advantage. In many cases, the measurable difference between RAID 10 and RAID 5 or RAID 6 performance might be trivial in a well-designed solution, and the perceived advantage of RAID 10 might be anecdotal, from an older, atypical deployment using a poorly performing storage array.

5.3 Retention Period

The retention period is usually determined by organizational or regulatory policy. For example, the Nevada Gaming Commission Regulation 5.160(2), Surveillance Standards for Nonrestricted Licensees, specifies a minimum retention period of 7 days. However, cameras deployed in nonregulated areas such as lobbies, parking lots, and other common areas might have a 30-day retention period governed by hotel policy. State and local governments typically specify video retention periods as part of their state records retention schedules. Geographies with a history of criminal activity or terrorist threats might specify longer retention period requirements.
Increasing the retention period does not increase the arrival data rate of video feeds from IP cameras to the recording server and storage array. Longer retention policies make the sizing exercise more of a capacity calculation than a performance consideration. This is illustrated by the example shown in Figure 14.

Figure 14) Example of throughput versus retention.

There are both an absolute limit and a practical limit to the amount of storage allocated to an individual recording server. Additionally, factors such as network interface capacity and CPU, memory, and application performance limit the number of cameras per recording server. In most video surveillance deployments, the performance characteristics of the E-Series controllers meet or exceed the requirements of the video management application. All video management systems reach a steady state, where video archives are deleted at the same rate that new video files are added to the storage array. NetApp recommends a retention period that meets or exceeds the applicable policy.

5.4 Reserve Capacity

When the retention period policy is defined, the amount of reserve capacity must be considered. As a best practice, each volume should be maintained at approximately 80% utilization. Most VMS packages implement a file deletion trigger when the configured maximum size of the archive is reached. Each volume owned by the recording server has a configurable threshold defined as the minimum free space or maximum archive size. Usually this is specified in gigabytes or terabytes in addition to the configured retention period in days. If either of the two maximums is reached, that is, the retention period in days or the maximum archive size, the oldest video files are removed from the system to prevent a disk-full error condition. As a best practice, size the system so that the archive for the stated retention period of the organization consumes approximately 80% of the volume capacity. Then, when implementing the system, configure the VMS to use all the configured capacity, maintaining only 5% unused space. This method makes sure that the video retention meets or exceeds the specified policy.
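The steady-state sizing above reduces to simple arithmetic: ingest throughput is fixed by the cameras, and retention scales only the capacity. A sketch, using an illustrative 140Mbps aggregate ingest rate and the 80% utilization target:

```python
def provisioned_tb(aggregate_mbps: float, retention_days: int,
                   target_utilization: float = 0.80) -> float:
    """Volume capacity (decimal TB) so the retained archive sits at ~80% full."""
    archive_bytes = aggregate_mbps * 1e6 / 8 * retention_days * 24 * 3600
    return archive_bytes / 1e12 / target_utilization

cap_30 = provisioned_tb(140.0, 30)   # ~56.7TB provisioned for a ~45.4TB archive
cap_60 = provisioned_tb(140.0, 60)   # doubling retention doubles capacity only
```

The ingest rate passed in is unchanged between the two calls: a longer retention period never increases the write throughput the array must sustain.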
For example, if the target retention period is 30 days, during implementation specify a 38-day retention period with minimum free space at 5% of capacity. This method forces the VMS to delete files based on capacity, utilizing 95% of the volume rather than 80%.

5.5 Cameras

The number of cameras and the configured resolution, frame rate, codec type, compression factor, and image complexity must be determined to estimate the aggregate video data rate and the amount of storage required. Most camera manufacturers provide a design tool to estimate the data rate and storage requirements based on the camera model and specified values. Figure 15 illustrates an example of one such tool.

Figure 15) Axis design tool.

For more information on the Axis design tool, refer to the Axis website. These tools provide only an estimate; the results might vary and should be verified through field trials run by the physical-security integrator. The total number of cameras at a specific location is a function of the physical-security requirements of the site and how the security integrator plans to address those requirements. The total population of cameras will encompass a variety of camera models, and in some cases the cameras might be from different manufacturers. The integrator can select from several types of cameras, including indoor or outdoor cameras; fixed or pan, tilt, zoom (PTZ); and tamper-proof or vandal-proof dome cameras. From a sizing perspective, the type of camera is not important, but the resolution and number of channels (video feeds) from each camera are important. When the total number of cameras is determined, they need to be further categorized by resolution.

Resolution

A wide variety of resolutions is available in the video surveillance camera industry, distinguished by these categories:

- Analog video (NTSC/PAL): 4CIF is commonly used: 704x480 pixels, or 0.4 megapixel
- Video graphics array (VGA) to XVGA: 640x480 pixels to 1024x768 pixels, up to 0.75 megapixel
- Megapixel SXGA to QSXGA: 1280x1024 pixels to 2560x2048 pixels, or 1.3 to 5.2 megapixels
- High-definition television (HDTV): 1280x720 pixels and 1920x1080 pixels, or 0.9 to 2 megapixels

E-Series storage is ideal for HDTV and megapixel resolution deployments because of the high density and performance characteristics of the E-Series. For new installations, HDTV/megapixel cameras are the preferred choice over standard definition (SD) cameras. Although the purchase price of an HDTV/megapixel camera is slightly higher than that of an SD camera, the installation cost is the same.
The total cost of cameras and installation might be less for HDTV/megapixel deployments than for SD cameras, because fewer cameras are required to effectively cover an area. Video surveillance camera models are selected to meet a functional requirement. These requirements are classified as detection, recognition, and identification, and there are industry guidelines for the number of pixels per foot required to address each category. Detection has a lower resolution requirement than identification. One HDTV/megapixel camera might provide sufficient resolution to meet the pixel-per-foot requirements where previously multiple SD cameras would have needed to be deployed.
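The pixels-per-foot guideline above can be turned into a quick coverage estimate. The 40 pixels-per-foot identification target below is an assumed figure used only for illustration; consult the industry guidelines for actual thresholds.

```python
import math

def cameras_needed(scene_width_ft: float, ppf_required: float,
                   horizontal_pixels: int) -> int:
    """Cameras required to cover a scene width at a given pixel density."""
    return math.ceil(scene_width_ft * ppf_required / horizontal_pixels)

# A 96-foot scene at an assumed 40 ppf: one 1920-pixel camera covers 48 feet.
hd = cameras_needed(96, 40, 1920)   # HDTV 1920x1080
sd = cameras_needed(96, 40, 704)    # analog 4CIF 704x480
```

This arithmetic is why fewer HDTV/megapixel cameras can cover the same area as multiple SD cameras, offsetting their higher unit price.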

Network video cameras can be configured at resolutions below the specified maximum resolution. An example of the configurable resolutions for the Axis M3204 network camera is shown in Figure 16.

Figure 16) Video stream settings.

An HDTV/megapixel camera can also operate as an SD camera, an HDTV format camera, or a megapixel format camera. The Axis M3204 camera can be configured for HDTV format (1280x720 resolution, 16:9 aspect ratio), megapixel WXGA resolution (1280x800, 16:10 aspect ratio), or VGA resolution (640x480, 4:3 aspect ratio). NetApp recommends HDTV/megapixel cameras for new deployments.

Frame Rate

The configured frame rate of the network video cameras must also be determined to accurately estimate the required storage capacity. Frame rates in the gaming industry are specified by regulation and are required to be 30 frames per second. Common frame rates for other industries are 7 to 12 frames per second or less. Cameras positioned at cash registers and teller stations usually require at least 12 to 15 frames per second. In school or office hallways, 5 frames per second is usually sufficient. Parking lots and other overview scenes for detecting cars, people, or objects often require only 1 to 3 frames per second. Cameras positioned with horizontal movement across the field of view or with high-speed movement (highway intersections, for example) generally require higher frame rates than scenes with vertical movement or slow-moving people or objects. As a reference, motion pictures (35mm sound film) have traditionally used 24 frames per second, and the human eye begins to notice choppy motion below 16 to 18 frames per second. The specified frame rate greatly influences the network bandwidth and storage requirements: in addition to categorizing the cameras by resolution, frame rate is now also added to the equation. NetApp recommends using a frame rate that meets the regulatory or functional requirements of the camera.
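Design tools such as the one in Figure 15 combine resolution, frame rate, and compression into a per-camera bit rate; from there, storage follows directly. The 2.2Mbps figure below is illustrative, and the duty_cycle parameter is a hypothetical knob for rough record-on-motion estimates.

```python
def camera_storage_gb(bitrate_mbps: float, retention_days: int,
                      duty_cycle: float = 1.0) -> float:
    """Per-camera archive size in decimal GB for a given retention period."""
    recorded_seconds = retention_days * 24 * 3600 * duty_cycle
    return bitrate_mbps * 1e6 / 8 * recorded_seconds / 1e9

gb = camera_storage_gb(2.2, 30)   # continuous recording, ~713GB per camera
```

Multiplying the result by the camera count in each resolution/frame-rate category yields the aggregate capacity input for the sizing exercise.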

Compression Type

The compression type (compression standard) options for video surveillance deployments are Motion JPEG and MPEG-4/H.264. Motion JPEG is a series of individual images with no interdependency between frames. Motion JPEG is often required by analytic software implementations. Because there are no interframe dependencies, Motion JPEG is used in networks that exhibit rates of packet loss that would make MPEG-4/H.264 unusable. As the frames-per-second rate increases, the bandwidth and storage requirements for Motion JPEG become increasingly costly compared to MPEG-4/H.264 for a given resolution. MPEG-4 (MPEG-4 Part 2) and H.264 (MPEG-4 Part 10 AVC) are compression standards that transmit a reference frame periodically and send the changes in the scene in subsequent frames. The frequency of reference frames is configurable by the value specified for group of video/group of pictures (GOV/GOP) length. H.264 is more efficient than MPEG-4, and its use has superseded MPEG-4 for HDTV/megapixel video cameras. Typically, Motion JPEG uses Transmission Control Protocol (TCP) to transport video between a camera and a server. TCP is connection oriented and provides reliable transport, favoring reliability over timeliness. Alternately, MPEG-4 and H.264 are usually transported over Real-Time Transport Protocol (RTP)/User Datagram Protocol (UDP). UDP transport is connectionless and does not retransmit lost packets. The majority of voice and video applications use RTP/UDP transport. The network infrastructure must exhibit very low packet loss for transport of MPEG-4/H.264 to maintain acceptable image quality: dropping as little as 1/20 of 1% of the IP packets of an H.264/RTP/UDP video stream is noticeable as trailing artifacts in the image. The distortion of the image is corrected when the next reference frame is successfully received.
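The linear cost of Motion JPEG noted above can be made concrete: with no interframe coding, bandwidth is simply frame size times frame rate. The 45KB frame size below is an assumed, scene-dependent value.

```python
def mjpeg_mbps(frame_kb: float, fps: float) -> float:
    """Motion JPEG bandwidth: every frame is a full JPEG, so rate scales with fps."""
    return frame_kb * 8 * fps / 1000  # KB/frame -> Mbps (decimal units)

low = mjpeg_mbps(45.0, 5)    # 1.8Mbps at 5 fps
high = mjpeg_mbps(45.0, 30)  # 10.8Mbps at 30 fps: exactly six times the bandwidth
```

H.264 does not scale this way: because only a periodic reference frame is sent in full, a 6x frame-rate increase costs considerably less than 6x the bandwidth.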
In video surveillance deployments, NetApp recommends using H.264 with RTP/UDP transport for network bandwidth and storage efficiency as a best practice.

Compression Ratio

The compression ratio is a configurable value on the network video camera that specifies the factor by which the image is reduced before transmission to the recording server. A 10% value indicates little compression, and a 90% value indicates a high degree of compression. The process of compressing the image is known as quantization. In video image processing, this is implemented as lossy compression, meaning detail is lost to reduce the size of the image. Selecting a high value might result in an image with compression artifacts or pixelation. An appropriate value depends on the scene complexity, lighting, shapes of objects, and colors in the scene. Values of 30% to 50% are common. NetApp recommends leveraging the recommendations of the camera manufacturer and the experience of the system integrator when selecting an appropriate compression ratio.

Variable or Constant Bit Rate

The H.264/MPEG-4 video encoder in the network video camera may be configured for either variable or constant bit rate. Constant bit rate (CBR) varies the image quality to maintain a constant output network bit rate. If there is little motion in the scene, the quality remains high. If there is a complex scene with motion (for example, trees swaying in the breeze), the image quality decreases. The decrease in image quality is recognized as noticeable pixelation of all or part of the image. Variable bit rate (VBR) maintains the image quality but changes the output network bit rate to accommodate motion in the scene. Network video cameras deployed in areas with little or incidental motion will have lower bandwidth and storage requirements with VBR. In low-light settings, the camera imager might introduce image noise, which has the same effect as motion.
The sizing calculator tool of the camera manufacturer might allow the user to specify the level of detail and percentage of motion or to select an image scenario such as intersection or stairway. A night option, which returns higher bandwidth and storage requirements compared to the daylight version of the scenario, might also be available.
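When a night scenario raises the per-camera rate, a time-weighted average gives a better sizing input than the daylight figure alone. The 2.2Mbps day rate and the 60% night premium reuse numbers from this section; the 12-hour day/night split is an assumption.

```python
def blended_mbps(day_mbps: float, night_factor: float = 1.6,
                 night_hours: float = 12.0) -> float:
    """24-hour average per-camera bit rate when night scenes cost more bandwidth."""
    day_hours = 24.0 - night_hours
    return (day_mbps * day_hours + day_mbps * night_factor * night_hours) / 24.0

avg = blended_mbps(2.2)   # ~2.86Mbps, a 30% premium over the daylight-only rate
```

Sites that deploy IR illuminators can justify a smaller night_factor, since less image noise means less bandwidth inflation after dark.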

Figure 17 illustrates the effect of ambient indoor lighting levels being adjusted between 7 a.m. and 7 p.m. with 64 HDTV cameras recording onto a single volume (LUN). In this example, there is approximately a 7% difference between the two data rates and the storage required.

Figure 17) Daylight versus nighttime data rate.

It is important to accurately estimate the selected scene complexity because it greatly influences the bandwidth and storage required. In some cases, the storage required for a scene at night might be 60% higher than during the day. Physical-security integrators use infrared (IR) illuminators to provide additional light in low-light environments. The use of IR illuminators minimizes video image noise and the resulting increase in bandwidth and storage. NetApp recommends using VBR for H.264/MPEG-4 and accurately estimating the scene complexity and percentage of motion.

Audio

Most network video cameras also support an audio channel. The network bandwidth and storage required for audio are minimal compared to video. When sizing, estimate approximately 30Kbps if the audio channel is stored with the video.

Video Management System

The VMS software is the platform for managing the network video cameras, processing video streams from the cameras, and recording streams to the storage array for archive and retrieval. The VMS software also integrates with access control systems, manages events, and generates alerts. Failover recording servers can be configured to continue archiving video streams in the event of a hardware failure.

Feature Set Limitations

Video management systems might be licensed by feature set. For example, OnSSI Ocularis has four feature sets: PS, IS, CS, and ES. The ES feature set supports thousands of cameras at multiple locations and utilizes a single central management platform for all recording servers.
There are no fixed limits to the number of cameras per recording server, but the documented recommendation for HDTV/megapixel cameras is 40 per recording server instance. There is no absolute limit on the number of days of retention on the live recording volume (LUN). The distinction between multiple recording locations is further explained in the section on Tiered Storage. Following is an example of a single recording location: Figure 18 illustrates Ocularis ES configured for 31-day retention in the live recording volume (LUN) for 128 cameras on a single recording server.

Figure 18) OnSSI Ocularis storage configuration.

Alternately, the Ocularis CS feature set has an absolute limit of 64 cameras per recording server, and the live recording volume is limited to seven days of recordings. Each recording server is configured independently; there is no central management feature. The selection of the appropriate VMS and feature set is a collaborative effort between the video security integrator and the end user. Regardless of the VMS software package selected, any performance or hard limits on the number of cameras per server or the absolute size of volumes must be assessed to accurately size and configure the storage array.

Number of Volumes (LUNs) per Server

Most video management software can write video archives to multiple drives. For example, the recording server might have both an E:\ and an F:\ drive as target locations for storing video archives. The recording server might have limitations that prevent a single camera from being written to separate drive letters. The recording server will typically select the location with the most available space when choosing between drives. It is more efficient to use a single volume rather than several volumes because a minimum amount of free space is required on each volume.

Continuous Recording or Record on Motion

VMS can be configured to record continuously or to record a length of time before and after a triggered event. Record on motion is the most common event used to trigger recording.
Because video cameras are commonly placed in areas that routinely have little motion (for example, emergency exits) or have little motion for some period of time (for example, school hallways at night), implementing record on motion can greatly reduce storage requirements. Although record on motion makes efficient use of storage, it also has several disadvantages. Implementing server-side record on motion is CPU intensive and reduces the number of cameras that a recording server can support compared to continuous recording. Record on motion does not reduce the throughput requirements of the storage array, because the recording server must temporarily store the video to disk to implement prebuffering. The prebuffer is the amount of video retained before the event; the amount of video to retain following the event is also configurable. Many video management systems store video in files that contain three to five minutes of video from each camera. It is common to implement record on motion by deleting all files except those that contain the prebuffer and postbuffer of the event. Record on motion can also be complicated and time consuming to configure. Usually a detection area in the scene, for example a doorway, is selected as the target of the record-on-motion algorithm, and then the degree of sensitivity is selected. The algorithm detects changes in either the chroma (color) or luminance (brightness) of pixels, so a change in lighting also triggers a motion event. If the sensitivity is set too low, subtle changes do not trigger the motion event, and the video is not archived. If the sensitivity is set too high, disk usage approaches that of continuous recording. Other factors such as object size and percentage of object in view might also require configuration. A VMS might support camera-side motion detection with little CPU overhead on the recording server. In a camera-side implementation, the CPU on the network video camera executes the motion detection algorithm. When motion is identified, an alert is sent to the recording server to indicate that motion was detected.
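The file-deletion scheme described above (keep only the fixed-length files that contain an event's prebuffer and postbuffer) can be sketched to estimate storage savings. Chunk length, buffer times, and event timestamps are all illustrative values.

```python
def retained_chunks(event_times_s, chunk_s=300, pre_s=30, post_s=60, day_s=86400):
    """Indexes of fixed-length video files that must be kept for motion events."""
    keep = set()
    for t in event_times_s:
        start, end = max(0, t - pre_s), min(day_s - 1, t + post_s)
        keep.update(range(start // chunk_s, end // chunk_s + 1))
    return keep

# Two overlapping events around 1 a.m., plus one mid-morning event.
kept = retained_chunks([3600, 3620, 40000])
savings = 1 - len(kept) / (86400 // 300)   # fraction of a quiet day's storage freed
```

The sensitivity tradeoff shows up directly here: set it too high and the event list grows until nearly every chunk is kept, collapsing the savings toward continuous recording.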
The server retains the video files that contain motion and deletes files that do not. Camera-side analytic algorithms might also support video analysis such as tripwire, people counting, objects left behind, or loitering events. The NetApp recommendation is to evaluate the tradeoff between reduced storage costs and increased system management costs when implementing record on motion.

Video Walls

Video walls are one or more client workstations and monitors used for displaying video in a control room setting. Video can be pushed to a workstation running in video wall mode and might also be displayed as the result of triggered events. From the sizing perspective, video displayed on a video wall requires the same amount of throughput as a client-viewing workstation.

Viewing Archived Video

The frequency and number of camera archives viewed concurrently incur read I/O on the storage array. Video archives may be viewed at normal speed or at an increased playback speed; the application might support increasing the playback speed to over one thousand times the recorded speed. Figure 19 demonstrates the performance characteristics of viewing 64 HDTV cameras at normal playback speed and transitioning to 16x playback speed.

Figure 19) 64-camera transition from 1x to 16x.

In this example, the average data rate increased from approximately 140Mbps (2.2Mbps per camera) to 400Mbps (6.2Mbps per camera) when transitioning from 1x to 16x playback. Note that the data rate increase is not linear: a 16x increase in playback speed increased the I/O rate by a factor of approximately 3x. The performance characteristics associated with the forensic capabilities of the VMS-viewing client are implementation specific and might vary between releases.

Both video walls and client-viewing workstations affect the performance characteristics of the storage array, but not the capacity required of the storage array. Investigative activities by the client workstations and incident reporting files might need to be considered when sizing the storage array.

5.6 Centrally Stored Video Clips

Centrally stored exported video clips (OnSSI Ocularis refers to these as bookmarks) may be stored on volumes (LUNs) on the storage array. The location is configured during the software installation. The amount of storage required depends on the frequency and size of the video clips stored. These files have an infinite retention period; they are deleted only when an operator removes them manually.

5.7 Tiered Storage

Tiered storage is the term for distinguishing between two or more types of storage that vary in price, performance, capacity, or function. Several VMS packages implement an archival function that moves video files from a temporary or recording location to an archive location. This is referred to as tiered storage because the function aspect of the data has changed.

Note: Industry documentation often refers to storing video files in a database. This terminology does not imply that the video is stored in a relational database structure; rather, the individual files are stored within a directory structure of the file system.

Archiving is the term used to describe the function of moving video files from their original location on disk to an alternate location. This configuration is defined on a server-by-server basis and might have zero, one, or more archive locations. These archive locations can be defined on the same volume (LUN) as the recording location or on a separate volume (LUN) on the same or a different storage array. For example, the recording location volume (LUN) might be RAID 10, whereas the archive location could be RAID 6. When using multiple storage arrays, they need not be the same type; block-level SAN storage can be the source of the archive, and a file-level NAS device can be the destination. Files in the archive location may be backed up to tape or other media, provided the archive process is not active.

During the archive process, video files may be groomed to reduce the frame rate of the original recording or may be encrypted. The archive process occurs on a user-configured schedule; it might run hourly, every four hours, or daily. This archive process affects both the performance and the capacity of the storage array. Figure 20 is a sample configuration of separate directories on the same volume (LUN) for recording and archiving.

Figure 20) OnSSI Ocularis ES recording and archiving configuration.

In the example, both maximum size and retention time are configurable. Files in the recording directory are moved to the archive directory when they are over 24 hours old; the archive process runs every hour. Files are stored in the archive directory for seven days and then deleted. Drive letter J: is mapped to a RAID 5 volume group with 10.9TB of capacity. The recording server will not use all of the available capacity on drive J: because of the maximum size parameters specified in the recording (1.95TB) and archive (5.86TB) configuration. This configuration can use up to 7.81TB of the 10.9TB available, or approximately 72%.
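The capacity check in this example reduces to simple arithmetic. A minimal sketch follows; the helper name is illustrative, and the 10.9TB volume capacity is taken from the example above.

```python
# Sketch of the recording/archive capacity check in the Figure 20 example.
def max_utilization(recording_max_tb, archive_max_tb, volume_capacity_tb):
    """Fraction of the volume the VMS can consume, given the configured
    maximum sizes of the recording and archive directories."""
    return (recording_max_tb + archive_max_tb) / volume_capacity_tb

combined = 1.95 + 5.86                    # 7.81TB combined maximum
frac = max_utilization(1.95, 5.86, 10.9)  # volume capacity from the example
print(f"{combined:.2f}TB of 10.9TB, {frac:.0%}")  # -> 7.81TB of 10.9TB, 72%
```

The remaining capacity stays free, which matters because the recording server needs headroom to keep writing while grooming runs.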
Performance of Tiered Storage

During the normal processing of video feeds from cameras and writing to the video recording storage location, the data rate is relatively constant between the server and the storage array. When the archive process is initiated, the I/O characteristics change based on the source and destination of the archive. If the administrator has configured separate volumes (seen by recording servers as LUNs) for the recording location and archive location (for example, E:\RECORDING and F:\ARCHIVE), then the volume containing the recording location will incur read I/O, and the archive location will incur write I/O. The

archive function in this example changes the I/O characteristics of the recording location from primarily writes to both reads and writes for the duration of the archive process. The duration of the archive process is a function of the amount of data that must be moved. It is important to understand that the recording server writes video to storage at approximately the rate of arrival from the networked video cameras. The archive process reads and writes data as rapidly as the recording server can read, process, and write the files to the destination. In testing, the I/O rates have been observed to increase by eight times or more during the archive process.

Sizing of Tiered Storage

Every recording location and archive location must have some free space for the system to record arriving video streams. If the locations have insufficient space, the oldest video is deleted or autoarchived if possible. If files cannot be autoarchived or deleted quickly enough to reclaim space, the recording server might not be able to write video streams to disk. It is important to allocate sufficient capacity to the respective archive locations to meet the requirements of the video retention policy. Additionally, the archive process must run more frequently than the configured retention period of the location. For example, if the retention period is seven days, the archive function should run at least once a day; running it every four hours or hourly is also common.

5.8 High Availability

A failover server is an idle recording server that can assume the recording server role in the event a primary recording server becomes unavailable. This enhances the availability of the system; however, video is lost during the time taken to detect the server failure and initiate recording on the failover server. The implementation deploys a keepalive (heartbeat) TCP session between the primary and failover servers.
Failover servers are defined in a group, and the primary server is configured to specify the failover group. This allows for greater flexibility and efficiency because there is no one-to-one relationship between the active server and the failover server. For example, a design using 10 recording servers might be configured with 2 or 4 failover servers.

Sizing of Failover Volumes

The failover servers must have sufficient disk capacity to assume the archiving function for any of the recording servers referencing the failover group. The volumes (LUNs) defined to the failover server are not used unless the failover event is triggered. When the failed server is restored and back online, the video archives on the failover server are migrated to the primary server. This recovery process adds additional I/O to the primary server and its LUNs during the migration.

The volumes (LUNs) defined to the failover servers must have sufficient capacity to store video for the length of time required to detect that a primary server has failed, resolve or replace the failed component, and bring the primary server back online. This length of time varies based on the geographic location of the servers, the availability of system administrators at the site, and the network management practices implemented. NetApp recommends sizing the volumes (LUNs) of the failover servers to retain a minimum of three to five days of video from the recording server with the largest volume (LUN). To calculate the size of the failover volumes, divide the capacity of the primary volume by the site retention period and then multiply by the number of days to retain during failover. For example, given a 30-day retention period, a 22TB primary volume (LUN) requires an approximately 3.6TB volume (LUN) for the failover server (22/30 × 5).

Because of the relatively small size of the failover volume (LUN) when compared to the size of the primary recording volume (LUN), creating a volume group for each failover volume (LUN) is inefficient.
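The failover-volume calculation described above can be sketched as a small helper; the function name is illustrative.

```python
def failover_volume_tb(primary_volume_tb, site_retention_days, failover_days):
    """Size a failover volume: primary capacity divided by the site
    retention period, multiplied by the days to retain during failover."""
    return primary_volume_tb / site_retention_days * failover_days

# The worked example from the text: 22TB primary volume, 30-day retention,
# 5 days of failover coverage -> approximately 3.6TB to 3.7TB.
size = failover_volume_tb(22, 30, 5)
print(f"{size:.2f}TB")
```

The same helper applies per failover group: size against the recording server with the largest primary volume, per the recommendation above.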
Allocating one volume (LUN) per volume group eliminates the disk-head contention that occurs when the system writes constantly to separate volumes (LUNs) in the same volume group. In the example where a 3.6TB volume (LUN) was

required for the failover server, creating a separate volume group for each failover server would require the following number of drives to meet the required capacity:

4 drives: RAID 10 (5.5TB)
3 drives: RAID 5 (5.5TB)
5 drives: RAID 6 (8.3TB)

Failover volumes (LUNs) are idle unless a failover server is engaged; they are used only if a recording server is taken out of service for an upgrade or if a failure occurs. NetApp recommends creating a single volume group or disk pool for failover volumes and allocating volumes (LUNs) from it to meet the requirements of the failover servers. As an example, a disk pool with 12 drives can support five volumes, each with a capacity of approximately 5TB:

Volume                 Capacity  DA Enabled
POOL_2_FAILOVER_VOL2A  5, GB     No
POOL_2_FAILOVER_VOL2B  5, GB     No
POOL_2_FAILOVER_VOL2C  5, GB     No
POOL_2_FAILOVER_VOL2D  5, GB     No
POOL_2_FAILOVER_VOL2E  5, GB     No

Using a DDP for failover volumes provides a RAID 6 level of protection and inherent hot spare coverage with an efficient use of disk drives.

Multicast and Secondary Streams

An alternate method of achieving high availability is to use multicast and secondary streams. Most IP cameras support the transport of video streams as IP multicast packets. IP multicast is a bandwidth conservation technique in which the network devices replicate packets and deliver them to multiple receivers. Some video management systems can be configured to define a single IP camera for IP multicast transport and receive the video streams on two or more recording servers. Each recording server archives the video independently. This maintains two or more copies of the video feed from a single IP camera and is an alternative means of providing high availability without the need for primary and failover servers. Not all video management systems support this configuration. Another means of recording the same images from an IP camera is through the use of secondary video streams.
Most IP cameras have the ability to unicast a primary video stream to one recording server and have a second recording server access a secondary stream. In some instances the secondary stream uses a lower frame rate, a lower resolution, or a different compression algorithm than the primary stream. For example, H.264 might be supported as the primary stream, while only M-JPEG is supported on the secondary stream.

When deploying cameras overlooking scenes of critical importance, using two or more cameras with overlapping fields of view and defining these cameras to separate recording servers is a best practice for high availability. The implementation of multiple unicast video streams or IP multicast transport is an alternative for providing high availability without the need for primary and failover video servers.

6 Sizing E-Series for Video Surveillance

To properly size the storage array, three fundamental questions must be answered:

What is the data rate of video received per server?
What is the number of servers required?
What is the retention policy?
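Once a per-camera data rate and a retention period are known, the first two questions reduce to a capacity calculation. The sketch below illustrates the arithmetic only, using hypothetical figures (64 cameras at 1.5Mbps for 30 days); it is not a vendor sizing tool.

```python
def storage_per_server_tb(cameras_per_server, avg_mbps_per_camera, retention_days):
    """Convert aggregate camera ingest into the volume capacity (decimal TB)
    a recording server needs for the retention period."""
    bytes_per_camera_day = avg_mbps_per_camera * 1e6 / 8 * 86_400
    return cameras_per_server * bytes_per_camera_day * retention_days / 1e12

# Hypothetical example: 64 cameras at 1.5Mbps each, 30-day retention.
print(round(storage_per_server_tb(64, 1.5, 30), 1))  # -> 31.1
```

Real deployments add reserve capacity on top of this figure and must account for mixed camera models, as the sizing examples in section 7 show.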

After the data rate per server is determined, the size and number of volumes (LUNs) required by each server can be calculated based on the retention period. This process is illustrated in Figure 21.

Figure 21) Sizing fundamentals.

If all the cameras in the deployment are configured identically, the data rate is the same for all cameras, and the discussion can be simplified to a number of cameras per server. Each server has the same number of cameras, with the resulting data rate at or below the specified maximum for the server. This sizing methodology is described in the examples later in this document. The VMS vendor documentation should provide recommendations for performance and scalability. The number of cameras supported per server is a function of the version of the VMS system, the performance characteristics of the server, and the average data rate of the cameras.

Many video surveillance deployments, however, encompass a variety of camera models at different resolutions, frame rates, compression ratios, and image complexities. In such deployments, the sizing exercise becomes more complex due to the increased number of variables.

6.1 Storage, Operating System, and File System Capacity Considerations

Video surveillance solutions based on E-Series are configurable with either an E5460 or an E2660 controller configuration. Use of either traditional volume groups or DDP is supported. The supported disks are 3TB NL-SAS (4TB drives will soon be available) and 900GB 10K SAS.

Performance for both traditional volumes and DDP on a volume-by-volume basis is more consistent if there is one volume per volume group or disk pool. Provisioning the storage array in this manner eliminates disk drive head contention when compared to provisioning multiple volumes (LUNs) per volume group. If using traditional volume groups, NetApp recommends allocating one hot spare drive for every 30 disk drives. A 60-drive shelf therefore needs at least two hot spare drives.
DDP does not require a separate hot spare drive to be allocated. Spare capacity is reserved on all the drives in the disk pool to provide for high availability in the event that a disk drive in the pool fails. Table 5 provides a reference for sizing the solution.

Table 5) Usable capacity by RAID level. (Columns: number of disks, and usable capacity for a Dynamic Disk Pool (RAID 6) with 3TB drives, RAID 6 with 3TB drives, RAID 5 with 3TB drives, RAID 10 with 3TB drives, and RAID 10 with 900GB drives.)

RAID 10, or disk mirroring and striping, is configured by selecting RAID 1 with four or more drives. The usable capacity of a 3TB drive is approximately 2,794GB. The gray cells in the table indicate unsupported selections. The maximum number of disk drives in a volume group is 30. The minimum number of drives in a DDP is 11; the maximum number of disks in the pool is the total number of drives in the array.

After calculating from Table 5 the number of disks required to support the volumes (LUNs) for all servers in the deployment, use Table 6 to determine how many controller shelves and expansion shelves are required.

Table 6) E-Series disk shelves for video surveillance deployments.

Category                                E5460         E2660
Form factor                             4U/60 drives  4U/60 drives
Maximum disk drives                     360           180
Controller shelf                        1             1
Maximum expansion shelves               5             2
Total (maximum) number of disk shelves  6             3
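Because the values in Table 5 follow directly from the drive counts, they can be approximated with a short helper. This is a sketch, assuming approximately 2.73TB usable per 3TB drive and ignoring controller metadata overhead; it reproduces figures quoted elsewhere in this report (for example, 5.5TB for a 3-drive RAID 5 group and 32.7TB for a 14-disk RAID 6 group).

```python
USABLE_TB_PER_3TB_DRIVE = 2.728  # approximate usable capacity of a 3TB NL-SAS drive

def usable_tb(drives, raid, per_drive_tb=USABLE_TB_PER_3TB_DRIVE):
    """Approximate usable capacity of a volume group at a given RAID level."""
    if raid == "RAID5":
        data_drives = drives - 1       # one drive's worth of parity
    elif raid == "RAID6":
        data_drives = drives - 2       # two drives' worth of parity
    elif raid == "RAID10":
        if drives % 2 or drives < 4:
            raise ValueError("RAID 10 requires four or more drives, in pairs")
        data_drives = drives // 2      # mirrored pairs
    else:
        raise ValueError(f"unsupported RAID level: {raid}")
    return data_drives * per_drive_tb

print(round(usable_tb(3, "RAID5"), 1))    # -> 5.5
print(round(usable_tb(14, "RAID6"), 1))   # -> 32.7
```

DDP capacity depends additionally on the pool's reserved spare capacity, so it is not modeled here.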

The following example illustrates this process. Assume a deployment of 640 cameras, with 64 cameras per server and 10 servers. Each camera generates a 1.8Mbps video feed on average. The physical-security integrator has determined that the number of 3TB disks required to meet the recording, archive, and failover databases for 30-day retention, with one hot spare for every 30 drives, is 231. Because the E2660 can support only 180 drives, whereas the E5460 with a DE6600 controller shelf and three expansion shelves can house 240 disks, the E5460 is the recommended choice for this deployment. The alternative would be to deploy two E2660 chassis, each with one expansion shelf.

6.2 New Technology File System

New Technology File System (NTFS) is the preferred file system for Microsoft Windows operating systems, and the majority of VMS systems are built on Windows Server 2008 R2 or later. Many VMS software packages recommend formatting disks with an allocation unit size of 64KB. Given this recommendation, the maximum NTFS volume size is approximately 256TB. Most VMS systems store video from a single camera in a file with either a maximum size or a maximum number of minutes of video. These parameters might or might not be configurable by the end user. Typically, the size of the file is 3 to 5 minutes of video from a single camera, or in some cases up to 30 minutes. The documented maximum NTFS file size is approximately 16TB, which poses no practical limitation.

6.3 VMware ESXi 5.1

Implementations using VMware ESXi 5.1 have a 2TB size limit for virtual disks on VMFS-5 datastores. Based on the number of cameras per server and the retention period of most deployments, a recording server running in a virtual machine typically needs a volume (LUN) in the 10TB to 30TB range. VMware RDM provides access to a volume (LUN) on the storage array. RDMs require the mapped device to be a whole LUN. The documented maximum RDM volume (LUN) size is 64TB.
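The shelf arithmetic from the example in section 6.1 can be sketched as follows; the constants reflect the 60-drive shelves and maximum drive counts given in Table 6.

```python
import math

SHELF_DRIVES = 60
MAX_DRIVES = {"E2660": 180, "E5460": 360}  # controller shelf plus 2 or 5 expansions

def shelves_needed(total_drives):
    """Number of 60-drive shelves (controller shelf included) for a drive count."""
    return math.ceil(total_drives / SHELF_DRIVES)

def single_array_options(total_drives):
    """Controller models that can hold the drive count in a single array."""
    return [model for model, cap in MAX_DRIVES.items() if total_drives <= cap]

# The 640-camera example: 231 drives exceed an E2660 but fit an E5460 with a
# controller shelf plus three expansion shelves (240 drive slots).
print(shelves_needed(231), single_array_options(231))  # -> 4 ['E5460']
```

When no single array fits, the deployment splits across chassis, as with the two-E2660 alternative described above.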
7 Sizing Examples

This section illustrates several sizing scenarios. They increase in complexity to show various situations and how a video surveillance integrator might size them.

7.1 Sizing Example 1: A Simple Deployment

The first example is a base configuration for a video surveillance solution provisioned with two physical servers hosting four virtual machines and one E2660 shelf with 60 3TB disks. This sizing example assumes that no failover servers are implemented and that the video management software runs under Windows 2008 R2. The sample deployment assumes 64 Axis P1346 cameras per server, for a total of 256 cameras. The Axis design tool is used to estimate the bandwidth and storage required for this camera model. In Figure 22, a per-server estimate is shown, with line items for a single camera and then the additional 63 cameras to allow the report to calculate the totals.

Figure 22) Axis design tool bandwidth estimate.

This profile uses 30-day retention, recording 24 hours per day at 12 frames per second, and 1080p resolution with H.264 at 50% compression. The image complexity scenario is a schoolyard. Note that this deployment requires approximately 0.5TB per camera per month. Given this estimate, each server requires a volume (LUN) capable of storing 28.4TB. Referring to Table 5, a 14-disk RAID 6 volume (LUN) provides 32.7TB of usable space, leaving a free-space margin of approximately 13% (1 − 28.4/32.7).

When SANtricity ES is used to configure one 14-disk RAID 6 volume per volume group, the volume summary and array summary for the storage array are as follows:

PROFILE FOR STORAGE ARRAY: stle _34 (Tue May 21 11:00:31 EDT 2013)
Number of standard volumes: 4

NAME        STATUS   CAPACITY  RAID LEVEL  VOLUME GROUP  LUN  ACCESSIBLE BY
VOL_RACK_1  Optimal  TB        6           VG_1          1    Default Group
VOL_RACK_2  Optimal  TB        6           VG_2          2    Default Group
VOL_RACK_3  Optimal  TB        6           VG_3          3    Default Group
VOL_RACK_4  Optimal  TB        6           VG_4          4    Default Group

SUMMARY
Number of drives: 60
Mixed drive types: Enabled
Current media type(s): Hard Disk Drive (60)
Current interface type(s): Serial Attached SCSI (SAS) (60)
Total hot spare drives: 4
  Standby: 4
  In use: 0

There are 56 disks in use with four hot spares. This example does not include volumes (LUNs) for failover recording servers or centrally stored video clips (bookmarks).

In this sizing example, the assumption is that a single camera model is deployed with the same image complexity, frame rate, and compression factor. Because there is consistency in the video ingress

to the server, it is very easy to determine the number of cameras per server virtual machine. There are no failover servers configured in this example. Many deployments, however, use a variety of camera models at a variety of resolutions and frame rates, and failover sizing must also be included. The following sizing example illustrates a more complex deployment.

7.2 Sizing Example 2: Larger System with Failover and RAID 10

This section illustrates a sizing example that implements sufficient failover recording servers to recover recording of the cameras in the event one physical machine fails or is taken out of service. This example uses an E2660 controller and two expansion shelves with a total disk population of 150 3TB disks and 30 900GB disks. The VMS is OnSSI Ocularis ES running under Windows 2008 R2. The deployment has a requirement to write the recording database to a RAID 10 volume group composed of 900GB 10K RPM SAS drives. The recording database contains the first 24 hours of recorded video, and the archive database is defined on RAID 6 volume groups composed of 3TB 7200 RPM NL-SAS drives.

The deployment uses the fully populated E2660 configuration with 4 physical servers hosting 10 virtual machines for day-to-day recording and 4 failover virtual machines. The number of cameras per virtual machine is targeted at 64, for a total camera count of 640. This configuration is illustrated in Figure 23.

Figure 23) Physical and virtual machines sizing example.

The proposed camera configuration is Axis models M3204 and P1346 at an HDTV resolution of 1280x720 with H.264 encoding and RTP/UDP transport, at 12 frames per second with 30% compression. As a point of reference, the Axis design tool estimates the intersection (night option) image complexity for an M3204 in the planned configuration to be 1.3Mbps, requiring 448.2GB for 31 days of retention. Changing the image complexity to reception area (night option) results in an estimated 726Kbps (231.8GB for 31 days of retention).

Given these estimates, the physical-security integrator plans to implement 12+2 RAID 6 volume groups (14 disks each) for the archive database and a 1TB volume for the recording database. To accommodate the volumes for the failover servers, the first four volume groups contain two volumes each: a failover volume and an archive volume. Due to space limitations, the failover servers are not configured with a separate recording database on the RAID 10 10K RPM drives. The failover volumes are sized to hold approximately three to five days of video archives. Because the failover volumes are contained in the first four archive volume groups, those archive volumes are necessarily smaller than the archive volumes in volume groups 5 through 10. To illustrate this, volume group 1 and volume group 5 are shown in this example:

- Volume Group VG_ARCHIVE_1 (RAID 6) ( TB)
  - Volume VOL_ARCHIVE_1 ( TB)
  - Volume VOL_FAILOVER_1 (4, GB)
- Volume Group VG_ARCHIVE_5 (RAID 6) ( TB)
  - Volume VOL_ARCHIVE_5 ( TB)

Because of this accommodation for the failover volumes, the physical-security integrator should define cameras with lower data rates to the smaller archive volumes if possible. The Ocularis base machine has a volume (LUN) for bookmarks, which is a central repository for long-term storage of video clips.
Given these assumptions, the physical-security integrator has configured the storage array with 26 standard volumes in the following configuration:

PROFILE FOR STORAGE ARRAY: stle _34 (Thu May 21 15:47:28 EST 2013)
STANDARD VOLUMES
Number of standard volumes: 26

Name            Capacity  Accessible by      Source
VOL_ARCHIVE_1   TB        Host stlc220m3-10  Volume Group VG_ARCHIVE_1
VOL_ARCHIVE_2   TB        Host stlc220m3-10  Volume Group VG_ARCHIVE_2
VOL_ARCHIVE_3   TB        Host stlc220m3-10  Volume Group VG_ARCHIVE_3
VOL_ARCHIVE_4   TB        Host stlc220m3-11  Volume Group VG_ARCHIVE_4
VOL_ARCHIVE_5   TB        Host stlc220m3-11  Volume Group VG_ARCHIVE_5
VOL_ARCHIVE_6   TB        Host stlc220m3-11  Volume Group VG_ARCHIVE_6
VOL_ARCHIVE_7   TB        Host stlc220m3-12  Volume Group VG_ARCHIVE_7
VOL_ARCHIVE_8   TB        Host stlc220m3-12  Volume Group VG_ARCHIVE_8
VOL_ARCHIVE_9   TB        Host stlc220m3-12  Volume Group VG_ARCHIVE_9
VOL_ARCHIVE_10  TB        Host stlc220m3-12  Volume Group VG_ARCHIVE_10
VOL_BOOKMARKS   2, GB     Host stlc220m3-9   Volume Group VG_BOOKMARKS
VOL_FAILOVER_1  4, GB     Host stlc220m3-9   Volume Group VG_ARCHIVE_1
VOL_FAILOVER_2  4, GB     Host stlc220m3-9   Volume Group VG_ARCHIVE_2
VOL_FAILOVER_3  4, GB     Host stlc220m3-10  Volume Group VG_ARCHIVE_3
VOL_FAILOVER_4  4, GB     Host stlc220m3-11  Volume Group VG_ARCHIVE_4
VOL_LIVE_1      1, GB     Host stlc220m3-10  Volume Group VG_LIVE_1_2
VOL_LIVE_2      1, GB     Host stlc220m3-10  Volume Group VG_LIVE_1_2
VOL_LIVE_3      1, GB     Host stlc220m3-10  Volume Group VG_LIVE_3_6
VOL_LIVE_4      1, GB     Host stlc220m3-11  Volume Group VG_LIVE_3_6
VOL_LIVE_5      1, GB     Host stlc220m3-11  Volume Group VG_LIVE_3_6
VOL_LIVE_6      1, GB     Host stlc220m3-11  Volume Group VG_LIVE_3_6
VOL_LIVE_7      1, GB     Host stlc220m3-12  Volume Group VG_LIVE_7_10
VOL_LIVE_8      1, GB     Host stlc220m3-12  Volume Group VG_LIVE_7_10
VOL_LIVE_9      1, GB     Host stlc220m3-12  Volume Group VG_LIVE_7_10
VOL_LIVE_10     1, GB     Host stlc220m3-12  Volume Group VG_LIVE_7_10

This configuration has 8 unassigned 3TB drives that can be defined as hot spares.
The volume groups utilizing the 900GB 10K RPM drives have 10 drives each, defined as RAID 10; up to four 1TB volumes are defined in each of these volume groups. During the burn-in phase of the implementation, the physical-security integrator sampled and averaged the data rates from the switch ports supporting the cameras and from the PortChannel (EtherChannel) to each physical server. The results are shown in the following tabulation:

Physical Host (PortChannel Data Rate)  Server Virtual Machine / Number of Cameras / Camera Ingress Rate

Server 4 (Stlc220m3-12: 275Mbps)
  RACK-SVR-…  … cameras, 1,383,000 bps each, AXIS M3204
  RACK-SVR-…  … cameras, 1,383,000 bps each, AXIS M3204
  RACK-SVR-…  … cameras, 1,678,000 bps each, AXIS M3204
  RACK-SVR-…  … cameras, 1,678,000 bps each, AXIS M3204

Server 3 (Stlc220m3-11: 142Mbps)
  RACK-SVR-…  … cameras, 835,000 bps each, AXIS P1346
  RACK-SVR-…  … cameras, 1,161,000 bps each, AXIS M3204
  RACK-SVR-…  … cameras, 1,161,000 bps each, AXIS M3204
  RACK-SVR-33  failover recording server

Server 2 (Stlc220m3-10: 225Mbps)
  RACK-SVR-…  … cameras, 835,000 bps each, AXIS P1346
  RACK-SVR-…  … cameras, 835,000 bps each, AXIS P1346
  RACK-SVR-…  … cameras, 1,454,000 bps each, AXIS M3204
  RACK-SVR-23  failover recording server

Server 1 (Stlc220m3-9)
  RACK-SVR-10  OnSSI Ocularis Base
  RACK-SVR-11  OnSSI Ocularis Manager
  RACK-SVR-12  failover recording server
  RACK-SVR-13  failover recording server

The highest aggregate data rate is on physical server 4, on which all virtual machines have the full-size archive volumes (LUNs). The 28TB volumes (LUNs) are defined to the three virtual machines on physical server 2 and one virtual machine on physical server 3. During the implementation phase of the project, as a best practice, camera data rates should be monitored, and cameras should be migrated between recording servers to spread the load as evenly as practical. When the retention period has been reached and the VMS has begun to groom files, the physical-security integrator must verify that the desired retention period is being met or exceeded.

7.3 Sizing Example 3: Complex Deployment for a Multiuse Center

In this example, the assumption is that the video surveillance integrator is sizing the storage array to support a multiuse center containing a retail component, self-storage, and offices.
The center manager provides physical-security services for the complex and, in conjunction with the physical-security integrator, has completed a site survey and determined the number and model of cameras along with the requirement and type of scene. The physical-security integrator has specified the model and manufacturer of the network video cameras and their resolution, frame rate, compression standard and ratio, and image complexity. Axis Communications cameras have been selected for the project. The data rates and storage requirements have been estimated using the Axis design tool. The cameras use VBR, and the required retention period is 30 days. The values used to determine the data rates and storage required are shown in Table 7.

Table 7) Multiuse project.

Camera Model   Number  Requirement/Scene          Resolution  Frame Rate  Compression Standard/Ratio  Image Complexity  Data Rate (Mbps)  Storage for 30 Days of Retention
Axis Q…        …       Identification: cashier    1920x1080   …           H.264/10                    Intersection      …                 … TB
Axis Q1755-E   30      Overview: parking lot      1920x1080   6           H.264/30                    Intersection      …                 … TB
Axis P3344-VE  …       Identification: entrances  1280x…      …           H.264/30                    Stairway          …                 … TB
Axis M…        100     Overview: hallways         1280x720    6           H.264/50                    Stairway          …                 … TB
Total: … TB

Based on the recommendation of the VMS vendor and the experience of the physical-security integrator, four virtual recording servers are deployed on two physical servers. Each physical server will have two failover server virtual machines. In the event one physical machine fails, the failover virtual machines will recover the video streams from the failed server. In Table 8, the data rates and storage are shown on a per-camera basis. This data is used to determine the storage required per server.

Table 8) Data rate and storage per camera.

Camera Model   Data Rate per Camera  Storage per Camera for 30 Days of Retention
Axis Q…        … Mbps                1843GB
Axis Q1755-E   1.0Mbps               323.2GB
Axis P3344-VE  826Kbps               255.2GB
Axis M…        … Kbps                114.9GB

The cameras are distributed across the four virtual machines as shown in Table 9. Based on the values in Table 8, the storage requirements for the number and type of cameras are calculated and listed in the minimum storage required column. This column includes the assumption of 20% free space per volume (LUN).

Table 9) Camera assignment per server.

Physical Server  Virtual Machine 1 (Number/Model)    Minimum Storage Required  Virtual Machine 2 (Number/Model)    Minimum Storage Required  Failover 1  Failover 2
One              5 Q…, … Q1755-E, 13 P3344-VE, 25 M…  … TB                     5 Q…, … Q1755-E, 12 P3344-VE, 25 M…  … TB                     3.6TB       3.6TB
Two              5 Q…, … Q1755-E, 13 P3344-VE, 25 M…  … TB                     5 Q…, … Q1755-E, 12 P3344-VE, 25 M…  … TB                     3.6TB       3.6TB
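The free-space and failover arithmetic applied to this example can be sketched as follows, using the 17.3TB per-volume minimum from this deployment; the helper names are illustrative.

```python
def with_free_space(min_storage_tb, free_fraction=0.20):
    """Gross up the minimum storage so the stated fraction remains free."""
    return min_storage_tb / (1 - free_fraction)

def failover_reserve_tb(volume_tb, retention_days=30, failover_days=5):
    """Failover capacity for a subset of the retention period."""
    return volume_tb * failover_days / retention_days

volume = with_free_space(17.3)   # 17.3TB minimum -> approximately 21.6TB volume
print(round(volume, 1), round(failover_reserve_tb(volume), 1))
```

The failover figure of roughly 3.6TB matches the failover columns in Table 9.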

Assuming 20% free space, 17.3TB/0.8 = 21.6TB. The failover sizing assumes five days of retention based on the 21.6TB value, that is, 21.6 × (5/30), or 3.6TB. Using Table 5 to estimate the number of disks required to meet the storage needs based on a 60-disk shelf, RAID 6 can be used for the volumes (LUNs) of the active recording servers and RAID 5 for the failover servers, while still maintaining 4 hot spares. Using a RAID 5 level of protection saves 8 disks compared to using RAID 6 for the failover volumes (LUNs), while still maintaining one volume per volume group. An alternative is to use an 8-disk (6+2) RAID 6 volume group and allocate the four failover volumes within that volume group. The solution using a combination of RAID 6 and RAID 5 is shown in Table 10.

Table 10) Sizing solution.

Physical Server  VM1 Volume (LUN)               VM2 Volume (LUN)               Failover 1                     Failover 2
One              RAID 6 (9+2) = 11 disks, … TB  RAID 6 (9+2) = 11 disks, … TB  RAID 5 (2+1) = 3 disks, 5.5TB  RAID 5 (2+1) = 3 disks, 5.5TB
Two              RAID 6 (9+2) = 11 disks, … TB  RAID 6 (9+2) = 11 disks, … TB  RAID 5 (2+1) = 3 disks, 5.5TB  RAID 5 (2+1) = 3 disks, 5.5TB

This configuration uses a total of 56 drives, and the four remaining disks can be provisioned as hot spares. Because this exercise maintains a minimum of 20% free space when selecting the number of drives per volume group, the actual amount of free space per volume (LUN) approaches 30%, because the capacity is rounded up to the next whole number of disk drives.

8 Sizing Checklist

The following checklist describes the data that must be collected and analyzed to determine the capacity requirements of the implementation for accurate solution sizing.

Retention period: Determines the required minimum retention period of the organization.
Reserve capacity: Estimates how much additional capacity should be allocated in excess of the minimum retention period (for example, retention period plus 20%).
RAID level: Determines what RAID levels are required (for example, RAID 5 or RAID 6).
Disk characteristics: Determines the size and type of disks (for example, whether 3TB NL-SAS or 900GB 10K RPM disks are required).
Video management software: Determines which video management system will be implemented.
Volume limitations: Analyzes file system limitations of the video management system or operating system (for example, the Linux 16TB volume limitation).
Data rate per camera: Estimates the data rate expected for each type of camera deployed.
Number of cameras: Determines the total number of cameras and categorizes them by like data rates.
Continuous recording: Determines whether video is recorded continuously or on motion or some other analytic trigger.
Frequency of viewing: Determines the frequency, the number of concurrent cameras or video streams being viewed, and the number of client-viewing workstations.
Failover requirements: Determines whether failover recording servers will be implemented, how many servers, and the size of the volumes required per server.
Cameras per recording server: Estimates the number of servers needed for the specified cameras and the resulting data rates.
Number and size of volumes per recording server: Determines the size and number of volumes per recording server based on the VMS requirements, retention period, and reserve capacity.
Calculation of total disks required: Calculates the number of physical disks required based on the volume sizes per recording server and the number of recording servers.
Additional volumes required: Determines whether nonrecording volumes are required and at what capacity (for example, bookmark volumes).
Hot spares: Determines whether sufficient disks are available as hot spares.
Disk shelves required: Determines the number of disk shelves required based on the total number of disks required.

9 Performance Considerations

This chapter discusses video surveillance storage product selection and performance evaluation and provides results, recommendations, and conclusions that can be used as design parameters when planning and implementing the solution.

9.1 Overview

The primary objective of a video management recording server is to receive video feeds over an IP network from video surveillance cameras and record all or portions of this video content to disk for a given retention period. The workload for this function is primarily write I/O at a relatively constant data rate on a per-server basis. The secondary objective is to allow surveillance operators to view, search, and analyze the video written to the archive of the video server.
This workload is primarily read I/O at a rate that might be substantially higher than the rate at which the video was originally written. This workload can be infrequent and transient, depending on the deployment model. For example, public school deployments might view recorded video only once or twice per week, whereas gaming deployments use scores of operators to continuously analyze activities on the casino floor.

A tertiary workload is the management of the video files by the VMS. Video files are deleted when they exceed the configured retention period or when the volume reaches its minimum free space, which is a configurable parameter. Some VMS implementations write video feeds to a temporary directory for a few minutes and then copy these files to a permanent directory. Other implementations store the first 24 hours of video in a live directory location and then move these files to one or more archive locations on a configurable, periodic basis.

The three video workloads are recording, viewing, and management. Video recording is constant write I/O, viewing is transient read I/O, and management is both read and write I/O on a periodic basis.

9.2 Operational Considerations

Virus-Scanning Software

Virus-scanning software might have a negative effect on performance because of the system resources consumed, and the software might temporarily lock files during scanning. These file locks might affect performance or cause file corruption. Do not use virus scanning on the recording or archiving directories of the recording servers or on the management servers in general.

User Access and Third-Party Software

Video surveillance recording servers should not have third-party software such as DVD-burning software installed, because these packages might have a detrimental effect on performance. Also, using Windows Explorer or other applications to view (open) archive files can cause the same performance and file corruption issues as virus-scanning software.

Disk Full Conditions

Most VMS applications define both a retention period for video files and a software-defined maximum size of the archive location. For example, the volume (LUN) defined for video storage might be 29.3TB in usable capacity, while the maximum size defined in the application is 28TB. When the storage location reaches the 28TB mark, the application begins to delete the oldest video files regardless of the configured retention period. When the tiered storage approach is used with both OnSSI Ocularis and Milestone XProtect, the recording server attempts to move files from the initial location to the archive location ahead of the scheduled archive function to free space. Disk full conditions affect performance by adding workload to the archive function. These emergency archive functions occur (observed every two minutes) outside the normal schedule.
In deployments that do not implement the archive function, some additional workload occurs as a result of the emergency file deletion, but the performance implications should be minimal. As a best practice, accurately sizing and configuring the application to maintain adequate free space provides more deterministic performance.

Database Corruption and Repair

The file structure of an archive location might become corrupted in the event of a recording server failure or ungraceful shutdown. If failover recording servers are implemented, their function is to assume the video archiving function while the primary recording server is out of service. When the primary recording server is restored, the database structure must be repaired. The workload might change significantly during this recovery, because the corrupted database files might be moved to subfolders and repaired in the background. Additionally, video files stored on the failover servers must be moved from the failover server to the primary recording server. The repair and recovery process might take 30 minutes or more, and the additional workload might alter normal system performance.
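The disk-full behavior described above, deleting the oldest files once a software-defined maximum is reached, can be sketched in a few lines. This is a simplified illustration of the general VMS grooming pattern, not the actual OnSSI or Milestone implementation; all names and values are hypothetical.

```python
# Simplified sketch of VMS archive grooming: when the archive exceeds its
# software-defined maximum size, the oldest files are deleted first,
# regardless of the configured retention period.

def groom(files, max_bytes):
    """files: list of (timestamp, size_bytes). Returns (kept, deleted)."""
    files = sorted(files)                # oldest first
    total = sum(size for _, size in files)
    deleted = []
    while files and total > max_bytes:
        ts, size = files.pop(0)          # delete the oldest video file
        deleted.append((ts, size))
        total -= size
    return files, deleted

# Example: a volume holding 30 daily 1TB files, with the application
# maximum set to 28TB (compare the 29.3TB/28TB example in the text).
TB = 10**12
archive = [(day, 1 * TB) for day in range(30)]
kept, deleted = groom(archive, 28 * TB)
print(len(deleted))  # the 2 oldest daily files are deleted
```

A real recording server performs this continuously and, with tiered storage, first tries to move files to the archive location instead of deleting them.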

9.3 E-Series Storage Array

Video surveillance solutions can be implemented with either of the following controller options, both based on the DE6600 disk enclosure using 3TB NL-SAS drives:

E5400 controller with an 8Gbps Fibre Channel host interface card (HIC) and 12GB or 24GB of cache per system
E2600 controller with the host systems directly attached over 6Gbps SAS and 4GB or 8GB of cache per system

The NetApp E-Series storage array is targeted at the video surveillance market through its price and performance characteristics. Figure 24 provides an overview of the number of cameras and disk shelves supported by the E2660 and E5460 storage arrays.

Figure 24) E-Series raw storage capacity.

The E5460 and E2660 are sixth-generation storage arrays that include patented mechanical engineering, providing dense, scalable, and highly reliable bandwidth and capacity. The disk controller firmware supports an optimal mix of high-bandwidth, large-block streaming and small-block random I/O.

Controllers. The controller for this solution is the E5460 or E2660. The E5460 is targeted at FC deployments, and the E2660 target deployment is direct SAS attachment. The solution deploys dual controllers for high availability. All components of the E-Series are hot swappable; firmware upgrades can be completed while the system is operational. Both controllers have a data path to all shelves and drives in the array. Both controller models deploy cache memory for read and write buffering.

Disk shelves. The DE6600 is a 4 rack unit (RU) shelf holding up to 60 3.5-inch NL-SAS drives of up to 4TB. The E5460 configuration can support the controller shelf plus 5 expansion shelves for a total of 360 drives. The E2660 can support the controller shelf plus 2 expansion shelves for a total of 180 drives. Using 4TB drives, this gives a total raw capacity of 1440TB and 720TB, respectively. This is represented in tabular format in Table 11.

Table 11) E-Series controllers and disk shelves.

Category                              E5460         E2660
Form factor                           4U/60 drives  4U/60 drives
Maximum disk drives                   360           180
Controller shelf                      1             1
Maximum expansion shelves             5             2
Total maximum number of disk shelves  6             3

Disk drives. Each shelf can be populated with near-line SAS (NL-SAS), SAS, or solid-state drives (SSDs) of varying sizes and rotational speeds. Because of the continuous workload placed on drives used to store video surveillance archives, enterprise-class drives are required for this solution. Enterprise-class drives are designed to be vibration tolerant and are rated for 24/7 duty with a five-year or longer warranty.

9.4 Configurable Performance Options

The following items are performance-related recommendations common to all operating systems, hypervisors, and VMS packages. A checklist is provided in section 9.5 to assist in the implementation of the recommended values.

Data Assurance

Data assurance is a configuration option supported on certain disk drives. It adds a checksum to every block of data written to the volume. The feature incurs a performance penalty, which might be acceptable for some applications, but it is considered unnecessary for most video surveillance applications.

Read and Write Cache

Both read and write cache should be enabled. These parameters must be configured on a volume-by-volume basis. Dynamic read prefetch should be enabled. Read prefetch is more commonly called read-ahead. The prefetch or read-ahead function might increase read throughput by preloading cache with data anticipated to be requested in the future.

NetApp recommends that the write cache without batteries value be set to disabled. In the event of a power failure, the battery on the controller maintains power to the controller to flush the write cache to onboard flash memory. When power is restored, the I/O in the write cache can then be completed to disk.
Failure of the controller battery is logged in the event log and should be corrected as soon as practical.

Cache Mirroring

NetApp recommends that cache mirroring be set to disabled. Cache mirroring effectively decreases the available cache by 50%, because I/O is mirrored on both controllers. Cache mirroring also incurs a performance penalty for the mirror operation, in addition to the reduction of available cache. Because the failure of a controller incurs a loss of video recording for seconds to minutes regardless of the cache mirroring setting, NetApp advises that cache mirroring be disabled.

Cache Block Size

The common pool of cache for each controller is organized into blocks of a configurable size. The allowable sizes are 4KB, 8KB, 16KB, and 32KB. All volumes share the common pool of cache for the controller, and thus the size is constant for all volumes. All I/O in the system must pass through the cache, and the block size determines how many blocks are required to hold each I/O. If the server issues

an I/O that is 12KB in size and the cache block size is configured at 8KB, two blocks are allocated, and the second block has 4KB of wasted space. Because the I/O size of VMS packages is generally greater than 256KB, NetApp recommends using a 32KB cache block size.

Cache Flushing

The E-Series manages the pool of cache based on demand and time. Cache is used for both read and write I/O. By default, the cache blocks containing write I/O are flushed after 10 seconds, or more frequently if the cache fills. The demand parameters are high and low watermarks, which NetApp recommends initially setting to 80%. These values instruct the algorithm to attempt to maintain cache utilization at 80%.

Media Scan

The media scan feature provides error detection before the condition disrupts read and write activity to the disk. NetApp recommends enabling this feature with a frequency of 30 days and disabling the redundancy check. This option is configured on a volume-by-volume basis. If errors are detected, the condition is recorded in the event log for storage administrator action.

Segment Size (Traditional Volumes)

The segment size parameter in E-Series is the amount of data written to one disk drive before moving to the next disk in the volume group. The default value is 128KB, which is suitable for most video surveillance deployments. E-Series traditional volume groups have configurable segment sizes of 8KB, 16KB, 32KB, 64KB, 128KB, 256KB, and 512KB. SANtricity provides a means to migrate the segment size up or down by one increment at a time. For example, if the current segment size of the volume group is 128KB, the segment size can be migrated down to 64KB or up to 256KB.

Note: Changing the segment size takes a long time to complete and cannot be cancelled after it is started.

As an example, assume a volume group is configured for RAID 6 using 14 disks (a 12+2 configuration) in the volume group, and the segment size is 128KB.
A full stripe write would be 12 x 128KB = 1536KB, or 1.536MB.

Segment Size (DDP)

A DDP is similar to a traditional volume in an 8+2 RAID 6 configuration. DDP uses a stripe size (D-stripe) of 4GB. Ten disks are always used to store the individual pieces (D-pieces), each of which is 512MB; 8 data disks x 512MB = 4GB. Volumes (LUNs) are made up of enough 4GB D-stripes to accommodate the requested size. Segment size is not changeable with DDP as it is with a traditional volume. The segment size is 128KB, and 4,096 segments are written to a disk (512MB) before writing to the next disk.

DDP derives the most benefit from allocating all the disks in the storage array to the pool and then creating individual volumes out of the pool. This configuration incurs some degree of contention between the volumes in the pool. For traditional volumes, creating a single volume in a volume group eliminates this contention between volumes. Performance might be more deterministic with traditional volumes than with DDP.
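The cache-block and stripe arithmetic in this section can be checked directly; the following sketch uses the parameters discussed above.

```python
import math

def cache_blocks(io_kb, block_kb):
    """Blocks allocated for one I/O and the KB wasted in the last block."""
    blocks = math.ceil(io_kb / block_kb)
    wasted = blocks * block_kb - io_kb
    return blocks, wasted

# A 12KB I/O with an 8KB cache block size: 2 blocks, 4KB wasted.
assert cache_blocks(12, 8) == (2, 4)
# With the recommended 32KB block size, a 256KB VMS write fills 8 blocks exactly.
assert cache_blocks(256, 32) == (8, 0)

# Full stripe write for a traditional volume group:
# RAID 6 (12+2) with a 128KB segment -> 12 data disks x 128KB = 1536KB.
data_disks, segment_kb = 12, 128
assert data_disks * segment_kb == 1536

# DDP: a 4GB D-stripe is 8 data D-pieces of 512MB each (8+2 layout),
# and 4,096 segments of 128KB fill one 512MB D-piece.
assert 8 * 512 == 4096           # data MB per D-stripe
assert 4096 * 128 == 512 * 1024  # KB per D-piece
```

The same arithmetic explains why large sequential VMS writes benefit from full-stripe writes: an I/O of at least 1536KB can be committed without a read-modify-write of the parity segments.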

9.5 E-Series Performance Checklist

Table 12 describes the storage array global parameters.

Table 12) Storage array global parameters.

Parameter                                Recommended Value
Start cache flushing at (in percentage)  80% (default)
Stop cache flushing at (in percentage)   80% (default)
Cache block size (in KB)                 32KB
Media scan frequency (in days)           30 days
Failover alert delay                     5 minutes (default)

Table 13 describes the volume and volume group parameters.

Table 13) Parameters specific to volume and volume group.

Parameter                                          Recommended Value
Data assurance (DA) enabled                        No
Segment size                                       128KB (default)
Capacity reserved for future segment size changes  No
Maximum future segment size                        Not applicable
Modification priority                              Lowest
Read cache                                         Enabled (default)
Write cache                                        Enabled (default)
Write cache without batteries                      Disabled (default)
Write cache with mirroring                         Disabled
Flush write cache after (in seconds)               10 seconds (default)
Dynamic cache read prefetch                        Enabled
Enable background media scan                       Enabled
Preread redundancy check                           Disabled

Note: As of February 2013, the Genetec Omnicast, Verint Nextiva, OnSSI Ocularis, and Milestone XProtect video management software applications have been tested on the E-Series (both E2600 and E5400).

9.6 Example 1: E-Series Storage Array E2600

An example illustrates the performance principles in this document. Here is a sample video surveillance environment using the E2600 and SAS host interfaces. This example has been created and tested in NetApp's RTP labs. It provides a two- to four-physical-server configuration for up to approximately 640 network video cameras. VMware ESXi is utilized to provide up to 16 virtual machines in this example. The hardware and software components are shown in Figure 25.

Figure 25) E2600 hardware and software components.

General Performance Considerations

The E2600 controller supports up to four SAS interfaces per controller, or eight per duplex-controller storage array. Up to four physical servers, each with a dual-port SAS HBA, can be directly attached to each controller, providing one active and one redundant path to the array. The E2600 controller can be ordered with either 2GB or 4GB of cache memory per controller. The theoretical maximum write performance is approximately 11Gbps. A typical deployment scenario for this configuration would entail video ingress between 0.5Gbps and 2Gbps.

Typical Data Rates for This Example

The deployment included:

640 Axis M3204 network video cameras
1280x720 (HDTV format) resolution
12 frames per second
30% compression
H.264
UDP/RTP transport

Each camera generates a data rate from 0.8Mbps to 1.2Mbps, for an aggregate data rate from 512Mbps to 768Mbps. This deployment has been validated with 10 recording servers recording 64 video cameras per server. This deployment requires a minimum of 270TB of storage for 30 days of retention, using the Axis design tool. With the recording servers running in virtual machines on four physical hosts, and with the SAS HBAs from each of the four hosts connected to four 6Gbps SAS ports on each E2600 controller, there is ample capacity in both the host interfaces and controller throughput to support the implementation.
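The aggregate figures in this example follow from simple multiplication; a quick cross-check is shown below. The 270TB figure in the text comes from the Axis design tool, which applies its own overhead model, so the naive calculation lands lower.

```python
# Cross-check of the Example 1 aggregates (illustrative).
cameras = 640
rate_low, rate_high = 0.8, 1.2   # Mbps per camera

agg_low = cameras * rate_low     # 512 Mbps aggregate
agg_high = cameras * rate_high   # 768 Mbps aggregate
servers = cameras // 64          # 64 cameras per recording server -> 10

# Naive 30-day storage at the high rate, in decimal TB. The Axis design
# tool reported a 270TB minimum; it applies its own overhead factors.
tb_30_days = cameras * rate_high / 8 * 86400 * 30 / 1e6
```

Here tb_30_days evaluates to about 249TB, below the 270TB the design tool recommends once its overheads are included; either value is well within the capacity of the E2600 configuration described.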

E2600 Performance Summary for Example 1

Each physical server had four virtual machines. Server 4 had four active recording servers during normal operations and would be the busiest of the four servers. Servers 2 and 3 each contained a failover server that would only have four active recording servers during a failure recovery.

The video ingress network interface for each physical server is composed of a quad-port 1Gbps adapter with links aggregated across two physical Cisco Nexus 3048 switches in a virtual PortChannel. This test solution has been validated to function with only one of the four member links active. The video ingress network is not expected to present a performance bottleneck.

The physical servers ran the ESXi hypervisor, which can be configured to implement a virtual machine environment that exceeds the VMS recommended hardware specifications. The E-Series host interfaces were SAS, which was demonstrated through test tools and solution validation to exceed the performance requirements of the solution by a factor of 3 to 4, even during periods when previously recorded video was moved from a recording volume to one or more archive volumes. The Cisco Nexus 3048 switches have sufficient backplane and uplink capacity, when properly configured, to transport IP video traffic without packet loss.

The hardware and software components in this configuration met or exceeded the performance requirements of the VMS software packages tested.

9.7 Example 2: E-Series E5400 Storage Array

NetApp has built and tested another sample video surveillance configuration in its RTP labs based on the E5460 system. The E5400 test configuration differed from the E2600 test environment in the following ways:

The storage array was an E5460 with FC host interfaces.
Common off-the-shelf servers were deployed with or without a hypervisor.
Additionally, the E5460-based configuration offers substantially higher storage capacity than the solutions using the E2600, because it supports up to six DE6600 shelves with a total capacity of 360 drives. Using 3TB drives as an example, this solution scales to 1,080TB of raw disk capacity. The hardware and software components of a tested video surveillance solution using the E5400 are illustrated in Figure 26.

Figure 26) E5400 hardware and software components.

General Performance Considerations

In addition to the increased disk capacity offered by the E5460, the E5400 controller supports up to 16 Fibre Channel interfaces, a maximum of eight per controller. Although this configuration could support eight dual-attached servers, a more typical deployment would be to implement a dual-fabric SAN. The NetApp recommended configuration for a dual-fabric SAN is to connect each server to the fabric with a dual-port Fibre Channel host bus adapter (HBA), one port to each switch, and to connect at least two Fibre Channel ports from each controller to separate switches. This configuration provides two active and two standby paths to the storage array for a total aggregate bandwidth of 16Gbps. Optimally, four Fibre Channel ports may be used from each controller, providing a total aggregate bandwidth of 32Gbps. The E5400 controller can be ordered with either 6GB or 12GB of cache memory per controller. The theoretical maximum write performance is approximately 24.8Gbps.

Typical Data Rates

The deployment included:

640 Axis P1346 network video cameras
1920x1080 (full HDTV format) resolution
30 frames per second
30% compression
H.264
UDP/RTP transport
A stairway scene complexity

Each camera generates 4.9Mbps, for an aggregate data rate of 3.1Gbps. This deployment is estimated to require 20 recording servers recording 32 video cameras per server. This deployment will require a

minimum of 983TB of storage for a 30-day retention period, using the Axis design tool to calculate the storage requirement. Using 8Gbps FC HBAs from each of the 20 hosts and four Fibre Channel ports on each E5400 controller, there is ample capacity in both the host interfaces and controller throughput to support the implementation. This analysis demonstrates that E-Series performance throughput is not a limiting factor for high-frame-rate HDTV/megapixel deployments. Rather, the total disk capacity required to meet the video retention policy is the limiting factor.

E5400 Performance Summary

A video surveillance solution using the E5400 offers the capability for larger deployments with higher camera counts, because it supports three additional DE6600 disk shelves compared to the E2600-based video surveillance solution. The E5400 also has a theoretical maximum throughput almost twice that of the E2600. Either solution provides substantially higher performance than required by the video management applications deployed in these examples.

10 Hypervisor: VMware ESXi

The video surveillance storage solution using the E2600 has been validated as a predetermined hardware and software configuration that uses the VMware ESXi 5.1 hypervisor. The video surveillance storage solution using the E5400 has been tested and deployed with both Microsoft Hyper-V and VMware ESXi. Volumes (LUNs) are presented to the guest machines as Hyper-V pass-through disks or as vSphere RDMs. The use of pass-through disks or RDMs is a capacity consideration and not a performance consideration. The maximum volume (LUN) size for VMware VMDKs and RDMs in virtual compatibility mode is approximately 2TB, which would only provide sufficient space to retain video archives for a few cameras. For this reason, RDM in physical compatibility mode is implemented to support volumes (LUNs) over the 2TB limit. Physical compatibility mode has minimal SCSI virtualization overhead for the device.
All SCSI commands are passed directly to the device with the exception of the REPORT LUNS command.

Note: The VMware ESXi native multipath drivers provide for path redundancy and load sharing. The guest virtual machines only need the SANtricity utilities installed. Do not install the multipath support component of SANtricity on guest virtual machines. The guest operating system is presented with a single path to the device by the hypervisor.

Each virtual machine should be configured with virtual memory and CPU that meet or exceed the minimum memory and CPU hardware configuration recommended by the software vendor. Video recording servers are high-resource-consuming processes. Do not implement nonvideo management virtual machines on the same physical machine as the recording servers. For example, do not colocate an e-mail server virtual machine on the same physical machine as a video recording server. This is also true of the storage subsystem; do not provision volumes (LUNs) on a storage array for applications other than the video management system.

10.1 Hypervisor: Virtual Machine Layout

The video surveillance storage solution using the E2600 has been validated with a virtual machine configuration designed to support the application-based high-availability feature of OnSSI Ocularis and Milestone XProtect. Both OnSSI Ocularis and Milestone XProtect implement pooled failover servers in the event a primary recording server fails.

From a performance planning standpoint, the assumption must be that all four virtual machines are recording all cameras continuously. Figure 27 shows two-, three-, and four-server configurations.

Figure 27) OnSSI and Milestone virtual machine layout.

The designation of 128, 384, and 640 cameras is based on an estimate of 64 cameras defined per recording server virtual machine. The number of cameras might be more or less depending on the data rate from the camera, the performance characteristics of the application, and which features are enabled. These virtual machines are identified in Figure 27 by the green rectangles with the word Recorder. The physical machines are identified by gray rectangles. The failover recording servers are identified by yellow rectangles designated as Failover. The blue rectangles are management virtual machines required by OnSSI Ocularis and Milestone XProtect.

For example, the top server in the 640-camera configuration has four virtual machines, each recording video streams from up to 64 network video cameras. If that physical machine fails, there are four failover servers available on the remaining three physical machines to continue recording the cameras of the failed server.

Note: Given this design, there will never be more than three physical machines recording video feeds to the storage array at one time.

10.2 Hypervisor: Performance Monitoring

The video surveillance storage solution has been validated with VMware ESXi 5.1 as the hypervisor supporting up to four virtual machines per physical machine. One advantage of deploying a hypervisor is the ability to utilize the performance reporting of the VMware ESXi host shell and the vSphere client. For example, the utility esxtop can be used to display performance statistics of the CPU, disk adapters, network interfaces, and disk devices. Additionally, the performance of the physical server can also be monitored through the vSphere client by selecting the Performance tab on the main screen.
One example is shown in Figure. An example of using esxtop to verify the network interface traffic is described in the section "Esxtop."

10.3 Hypervisor: Virtual Servers

The video surveillance storage solution using the E2600 has been validated with two, three, or four Cisco UCS C220-M3 servers. They support the Intel Xeon processor E family (2GHz or greater) and are provisioned with 64GB of memory. Each of the four virtual machines is configured with 8GB of memory and

four virtual CPU cores. These virtual machines support recording servers, failover recording servers, and management servers. As an example, the CPU and memory utilization of one server supporting three active recording servers and one failover server is shown in Figure 28.

Figure 28) CPU and memory usage.

The CPU utilization is generally approximately 15% of 16 CPUs (clocked at 2.4GHz), and memory utilization is approximately 50%. Each of the four virtual machines is allocated 8GB of memory. This physical server is recording video from 144 cameras with the configuration shown in the section "Performance Validation of Tiered Storage": 30 fps at 720p resolution.

These servers meet or exceed the hardware recommendations for:

OnSSI Ocularis base machine and RC-E recording server
Milestone XProtect Corporate minimum system requirements

The virtual machines are configured to meet or exceed the recommended video management software hardware specifications. As shown in Figure 28, sufficient CPU and memory are available to support the guest machines; there is no expectation of any performance limitation based on CPU and memory for the video surveillance solution. The number of cameras supported per recording server is a function of the capabilities of the software, the configuration and data rate of the network video cameras, and other features such as server-side motion detection.

10.4 Hypervisor: Guest OS: Windows 2008 R2 Server

The majority of VMS packages require Microsoft Windows Server 2008 R2. The file system for these deployments is NTFS, with a maximum LUN size of approximately 256TB. NetApp's recommendation is to configure the cluster (allocation unit) size as 64KB. The NTFS cluster (allocation unit) size does not specify the size of the I/O; rather, it specifies a basic logical unit of storage on a disk volume.
Video archive files tend to be written as large records (typically 256KB to 512KB or greater), and NetApp recommends using a 64KB allocation unit size for large files. Refer to the section "Verify NTFS Cluster Size" for an example of how to verify the cluster size.
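On Windows, the cluster size can be read from the output of `fsutil fsinfo ntfsinfo <drive>`. The following sketch extracts the value from that output; the sample text is illustrative, and the exact field layout can vary by Windows version.

```python
def bytes_per_cluster(ntfsinfo_output):
    """Extract 'Bytes Per Cluster' from `fsutil fsinfo ntfsinfo` output."""
    for line in ntfsinfo_output.splitlines():
        if "Bytes Per Cluster" in line:
            # Value follows the colon, possibly with thousands separators.
            return int(line.split(":")[-1].strip().replace(",", ""))
    raise ValueError("Bytes Per Cluster not found")

# Illustrative sample output (layout varies by Windows version):
sample = """NTFS Volume Serial Number : 0x1a2b3c4d5e6f7a8b
Bytes Per Sector  : 512
Bytes Per Cluster : 65536
"""
assert bytes_per_cluster(sample) == 65536  # 64KB, as recommended
```

A volume reporting 4096 here was formatted with the default 4KB allocation unit and would need to be reformatted to adopt the recommended 64KB size.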

10.5 Management Network

The Cisco UCS C220-M3 has three ports for management of the hardware chassis: the Cisco Integrated Management Controller (CIMC), ESXi, and the guest virtual machines. The CIMC port is a 10/100/1000Base-T Ethernet dedicated management port. The two additional LAN on motherboard (LOM) ports are 1Gbps Ethernet ports. One port is defined as the ESXi VMkernel port. This port and IP address are used for VMware client access to the hypervisor. VMware recommends that this port be segregated, because it handles VMware vMotion, iSCSI, and NFS traffic.

Note: Although vMotion, iSCSI, and NFS are not part of this solution, the best practice configuration is implemented in this solution.

The remaining LOM port is used as an interface to manage the guest virtual machines. A server management (SERVER_MGMT) virtual switch (vSwitch2) is configured for this purpose. Each virtual machine is configured with a network adapter on this virtual switch for use with Microsoft Remote Desktop Session Host or Linux Xterm/SSH access to the guest machine. The management virtual switch network interfaces are shown in Figure 29 and Figure 30.

Figure 29) VMkernel port management network.

Figure 30) Server management network.

Because these interfaces are used for server management traffic, the performance implications are minimal.

Note: When implementing recording servers with failover servers, the failover servers must communicate with the management server and the recording servers. There are both control

plane (keepalives) and data plane (configuration exchange and video database updates) traffic between servers. During the VMS installation, the IP addresses of the servers should reference the VIDEO_INGRESS network addresses and not the management interfaces.

10.6 Video Ingress Network

The video surveillance storage solution using the E2600 has been validated using a video ingress network configuration based on Broadcom quad-port 1Gb interfaces, aggregated as an EtherChannel (PortChannel) defined on the network switches and configured to an ESXi virtual switch. In the sample configuration, the virtual switch is identified as VLAN_2020, as shown in Figure 31.

Figure 31) Video ingress network.

The video ingress network is deployed using the virtual PortChannel feature with two links connected to each Cisco 3048 switch. The two Cisco 3048 switches are configured with the virtual PortChannel (vPC) feature, providing layer 2 multipathing and high availability in the event of a switch failure. Each physical server has a Broadcom quad-port adapter with a combined link capacity of 4Gbps. As much as practical, the ingress video traffic should utilize at least two of the four links. Following implementation of the network video cameras, the degree of load balancing should be verified. An example of this process is described in the section "Verify Cisco Nexus 3048 Switch Load-Balance Configuration."

The vSphere vSwitch NIC teaming properties for VLAN_2020 load balancing should be set to Route based on IP hash. This is shown in Figure 32.

Figure 32) vSwitch NIC teaming load balancing.

When implementing OnSSI Ocularis or Milestone XProtect in the video surveillance solution, there are at most four virtual recording servers per physical machine. Assuming 64 network video cameras per virtual machine at an average data rate of 2Mbps, the maximum expected video ingress data rate per physical machine is (4 x 64 x 2) or 512Mbps (0.5Gbps) per EtherChannel. Assuming two of the four member links in the EtherChannel are utilized for ingress video traffic, the capacity of these member links is 2Gbps with an expected offered load of approximately 512Mbps. In solution validation testing, three of the four member links were failed, with all video ingress traffic traversing the one remaining link and no loss of video. The video ingress network configuration should not be a performance bottleneck in the solution.

10.7 Uplinks

The video surveillance storage solution using the E2600 has been validated using two Cisco Nexus 3048 top-of-rack server access switches. The Cisco Nexus 3048 switch has 176Gbps of switching capacity with a forwarding rate of 132Mpps and line-rate traffic throughput (both layer 2 and layer 3) on all ports. The switch configuration implements the vPC feature for high availability. These switches connect to customer-supplied core/distribution-layer switches for transporting ingress video traffic from the network video cameras and provide access to viewing workstations for monitoring live or browsing archived video. As a best practice, the video surveillance storage solution using the Cisco Nexus 3048 switches must have at least two uplinks for high availability. If the uplinks are layer 2 interfaces, they should be configured as layer 2 trunked PortChannels connected across the two physical switches and configured as a virtual PortChannel.
If the uplinks are configured as layer 3 interfaces, each interface can be connected to a layer 3 switch, and a routing protocol such as Enhanced IGRP (EIGRP) or Open Shortest Path First (OSPF) is used for load sharing and path redundancy. The Cisco Nexus 3048 switch supports both layer

2 and layer 3 features. The system default feature set supports layer 2 connectivity to the distribution/core network, whereas the base license or LAN enterprise license supports layer 3 IP routing.

The NetApp recommended uplink data rate is two 10GbE links using the Enhanced Small Form-Factor Pluggable (SFP+) ports. Optionally, four GbE links may be used. However, there is a greater likelihood of network congestion for a 640-camera deployment using four 1Gbps uplinks, unless the traffic is optimally load-shared over the four links. An illustration of the video surveillance storage solution network topology using layer 2 trunked PortChannels is shown in Figure 33.

Figure 33) Video surveillance uplinks.

If properly provisioned and implemented, the network uplinks should not be a performance bottleneck in the solution. For additional information on campus LAN design, refer to the Cisco Design Zone for Campus. There is also a webinar on Campus LAN Design for Converged Facility.

11 Performance Validation

This section addresses these distinct performance signatures:
- Validation of baseline performance: serial attached SCSI (SAS)
- Video recording and viewing
- Tiered storage
- Archiving function
- Record function
- I/O latency

11.1 Baseline Performance: Serial Attached SCSI (SAS)

The video surveillance storage solution using the E2600 supports two, three, or four physical servers directly attached to the storage array. Each physical server is connected to the A and B controllers by the dual-port LSI SAS E HBA. The E2660 has a maximum of four SAS host interface cards (HICs) per controller, allowing redundant connectivity from a maximum of four physical machines. These interfaces have a data rate of 6Gbps per lane, and up to four lanes may be utilized for a theoretical throughput of 24Gbps. The E2660 has an estimated theoretical write throughput of approximately 11.2Gbps, whereas the E5460 write performance estimate is approximately 24.8Gbps.

To validate these estimates in a solution validation test configuration, the IOMETER test tool was installed on 10 virtual machines across the three physical machines in the video surveillance storage solution deployment. Each virtual machine was configured with one IOMETER worker issuing I/O to a volume (LUN) configured for live recording and a second worker issuing I/O to a volume configured for archive recording. In this scenario, 80% of the workload specification was 100% write, 30% random, burst 1, and 20% was 100% read, 70% random, burst 1. The sustained throughput observed ranged from 9 to 11Gbps. The SANtricity table view performance monitor output from this test is shown in Figure 34.

Figure 34) Performance monitor output from IOMETER test.

Note: The maximum throughput for the storage array is reported as approximately 11Gbps.

These performance results are not intended to correlate on a volume-by-volume basis with what is observed with a VMS package implementation. However, the same virtual machine layout, live recording volume (LUN), and archive recording volume (LUN) used in this test would be deployed for OnSSI Ocularis or Milestone XProtect.
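The lane and throughput figures above reduce to simple arithmetic, sketched here as a sanity check (decimal units):

```python
# Back-of-the-envelope check of the SAS baseline figures in this section.
lanes_per_port = 4
gbps_per_lane = 6.0                       # SAS 6Gbps signaling per lane
theoretical_gbps = lanes_per_port * gbps_per_lane
print(f"SAS wide-port theoretical: {theoretical_gbps:.0f} Gbps")

e2660_write_estimate_gbps = 11.2          # estimate quoted in the text
for observed in (9.0, 11.0):              # sustained IOMETER range
    pct = observed / e2660_write_estimate_gbps
    print(f"observed {observed:.0f} Gbps is {pct:.0%} of the E2660 estimate")
```

The 9 to 11Gbps sustained range lands at roughly 80% to 98% of the 11.2Gbps estimate, which is why the test is treated as validating the controller throughput figure.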

Given the assumption of 640 network video cameras generating 2 to 4Mbps per camera, the video ingress data rate for the solution ranges from 1.3 to 2.6Gbps of write performance required per controller. This validates that the physical and logical hosts using SAS connectivity and E-Series 2660 controllers can exceed the I/O rates expected in a typical video surveillance storage solution deployment of 640 cameras, at the estimated data rate, by a factor of 3 to 4 times.

11.2 Performance Validation: Recording and Viewing

In the previous section, the theoretical workload of the four SAS-attached servers was examined as a baseline. In this section, the workload during normal recording with client viewing is examined. The configuration is 10 recording servers with 48 cameras per server, for a total of 480 cameras. The cameras are 720p, 30 frames per second, 30% compression, in H.264 with UDP/RTP transport, and Milestone XProtect Corporate is the video management software. The SANtricity performance monitor tabular view is shown in Figure 35.

Figure 35) Recording and viewing workload.

The storage array total shows the maximum I/O per second, with a throughput maximum of 116.4MB/sec or 931Mbps. This workload is slightly under 1Gbps, which is approximately 1/10 of the throughput demonstrated using the IOMETER test tool in the previous section. To summarize, the E-Series storage array deployed in the video surveillance storage solution has approximately ten times the throughput capability required to record live video in this validated deployment. In section 11.3, the changes to the workload when the archive function (a tiered storage implementation) is used are examined.
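The quoted headroom factor of 3 to 4 can be reproduced with simple arithmetic (decimal units, 1Gbps = 1000Mbps):

```python
# Expected camera ingress vs. the sustained throughput measured in section 11.1.
cameras = 640
per_camera_mbps = (2.0, 4.0)
ingress_gbps = [cameras * rate / 1000 for rate in per_camera_mbps]
print(f"required ingress: {ingress_gbps[0]:.2f} to {ingress_gbps[1]:.2f} Gbps")

sustained_gbps = (9.0, 11.0)                # IOMETER baseline range
low = sustained_gbps[0] / ingress_gbps[1]   # worst case: 9 Gbps vs. 2.56 Gbps
high = sustained_gbps[1] / ingress_gbps[1]  # best case: 11 Gbps vs. 2.56 Gbps
print(f"headroom at 4Mbps per camera: {low:.1f}x to {high:.1f}x")
```

At the high end of the per-camera rate the measured 9 to 11Gbps gives roughly 3.5x to 4.3x headroom, matching the factor of 3 to 4 stated above.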

11.3 Performance Validation of Tiered Storage

Both Milestone XProtect Corporate and OnSSI Ocularis ES provide the option of a tiered storage approach, where video is initially written to a recording volume and then optionally written to a separate volume or directory (within the same volume) for a configured retention period. This feature enables using different RAID levels or disk types for different storage tiers. The performance of a typical video surveillance workload does not vary dramatically based on scheduled functions or time-of-day usage. However, when implementing tiered storage, the workload is not constant throughout the day.

Archive Function

The archive function of the tiered storage approach can be configured to move video files from the recording volumes (LUNs) to the archive volumes on a periodic basis. The archive function can be scheduled to run every eight hours, every four hours, or hourly. The duration of the archive function is determined by the amount of video that must be moved from tier to tier.

In the validated test configurations described previously, the archive function was scheduled to initiate every hour, at the top of the hour. The duration of the archive process was typically 20 to 30 minutes. Because of this configuration, the performance characteristics of the system are dramatically different between the first 30 minutes and the last 30 minutes of each hour. To contrast with the performance characteristics of a load test tool that runs at a relatively constant data rate, the performance characteristics of a live deployment were examined.

Recording Server

The characteristics of a single recording server from the test configuration, RACK-SVR-37, are examined first. This server manages 48 simulated Axis M3204 cameras configured for 1280x720p at 30 frames per second, 30% compression, with RTP/UDP transport.
The aggregate input video data rate for these cameras is approximately 128Mbits/sec (16Mbytes/sec) at 12,000 packets per second. The average data rate for each of the 48 cameras is approximately 2.6Mbps. This rate approximates the Axis design tool image scenario intersection night option (2.7Mbps). The size of the archive volume (LUN) attached to this server is 29.3TB. At the observed data rate, the volume (LUN) would maintain approximately 22 days of video from the 48 cameras. If the goal is to meet a 30-day retention period, either the number of cameras supported by this server would need to be reduced to 35 or the archive volume would need to be increased to over 40TB. The storage configuration of recording server RACK-SVR-37 is shown in Figure 36.
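The retention figures above can be reproduced. This sketch assumes decimal storage units (1TB = 10^12 bytes), which is an assumption about the report's unit convention:

```python
# Retention arithmetic for RACK-SVR-37: 48 cameras at ~2.6Mbps each.
cameras, camera_mbps = 48, 2.6
archive_tb = 29.3
target_days = 30

bytes_per_sec = cameras * camera_mbps * 1e6 / 8
tb_per_day = bytes_per_sec * 86400 / 1e12
print(f"daily growth: {tb_per_day:.2f} TB")
print(f"retention in {archive_tb} TB: {archive_tb / tb_per_day:.1f} days")
print(f"capacity for {target_days} days: {target_days * tb_per_day:.1f} TB")
max_cameras = archive_tb * 1e12 * 8 / (target_days * 86400 * camera_mbps * 1e6)
print(f"cameras supportable for {target_days} days: {max_cameras:.1f}")
```

The arithmetic yields about 1.35TB of growth per day, roughly 21.7 days of retention in 29.3TB, about 40.4TB needed for 30 days, and just under 35 cameras supportable in the existing volume, consistent with the figures quoted above.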

Figure 36) Storage configuration RACK-SVR-37.

The recording volume (LUN) is 1TB in size, and video recordings over 12 hours old are moved to the archive volume (LUN) hourly, at the top of the hour. The archive volume is configured for 30-day retention, but as shown previously, the video files will need to be deleted after approximately 22 days at the observed data rate.

Given that the SANtricity ES custom installation with the utilities has been installed on the virtual machine, the LUN numbers of the two volumes can be determined by executing the smdevices command as shown:

C:\Program Files (x86)\storagemanager\util>smdevices
SANtricity ES Storage Manager Devices, Version Built Tue Aug 28 04:07:49 CDT 2012
Copyright (C) NetApp, Inc. All Rights Reserved.
\\.\PHYSICALDRIVE1 [Storage Array stle _34, Volume VOL_ARCHIVE_6, LUN 6, Volume ID <60080e50002e d550644b55>, Preferred Path (Controller-A): Owning controller - Active/Optimized]
\\.\PHYSICALDRIVE2 [Storage Array stle _34, Volume VOL_LIVE_6, LUN 16, Volume ID <60080e50002e de50644e09>, Preferred Path (Controller-A): Owning controller - Active/Optimized]

As a best practice, the volume names represent the function of the volume (LUN). From the names, the recording (live) volume is LUN 16, and the archive volume is LUN 6.

Because the testing is done in a VMware ESXi environment, VMware performance analysis tools can be used. Use the VMware vSphere client to highlight the physical server, select the Performance tab, chart options, and storage path in real time. All HBA storage paths are selected, and the read and write rate parameters are checked. The write rate for the active path of LUNs 6 and 16 can be selected to highlight these lines in the graph. These values represent the write data rate for the two LUNs encompassing the archive function beginning at the top of the hour. This graph is shown in Figure 37.
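Where scripted checks are useful, the volume-to-LUN mapping can be extracted from an smdevices listing with a short parser. This is an illustrative sketch: the regex is keyed to the "Volume NAME, LUN n" fragments seen above, the sample text is a hypothetical excerpt, and other SANtricity releases may format the output differently:

```python
import re

# Hypothetical excerpt in the shape of the smdevices output shown above.
sample = r"""
\\.\PHYSICALDRIVE1 [Storage Array stle_34, Volume VOL_ARCHIVE_6, LUN 6, Preferred Path (Controller-A)]
\\.\PHYSICALDRIVE2 [Storage Array stle_34, Volume VOL_LIVE_6, LUN 16, Preferred Path (Controller-A)]
"""

def volume_to_lun(smdevices_output: str) -> dict:
    """Map each volume name to its LUN number."""
    return {match.group(1): int(match.group(2))
            for match in re.finditer(r"Volume (\S+), LUN (\d+)", smdevices_output)}

print(volume_to_lun(sample))   # {'VOL_ARCHIVE_6': 6, 'VOL_LIVE_6': 16}
```

A mapping like this makes it easy to confirm that the live and archive volumes landed on the expected LUNs before wiring up performance monitoring.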

Figure 37) Write rate during archive.

The write data rate to the recording (live) LUN 16 is constant, at a maximum write rate of 24,530KBps (196Mbps), whereas the archive LUN 6 is active for approximately 30 minutes and reaches a peak write rate of 63,723KBps (509Mbps). An additional observation from the graph is that the write rate varies more during the archive function, due to the increased I/O and workload on the recording server during the archive.

E-Series Array Performance Monitoring While Archiving

The performance characteristics of a single recording server have been examined; next, the overall system performance is examined at the top of the hour, while the archive function is active. Figure 38 is a SANtricity performance monitor table view of the total array, each controller, and their respective volumes.

Figure 38) Performance monitor while archiving.

This example represents 480 cameras recorded across ten recording servers. Each camera is 720p resolution, 30 frames per second, 30% compression H.264 in UDP/RTP transport. The performance monitor table illustrates data being read from and written to the recording LIVE volumes. The write workload represents the normal ingress video feeds, while the read workload is the process of moving files from the recording volume to the archive volume. Accordingly, the write percentage is almost 100% on the archive volumes, and the read percentage of the live volumes is in the 80% to 90% range. This period, when the archive process is active, represents the worst-case load on the storage system due to the tiered storage configuration of the video management application.

In this example, the maximum I/O per second is 4,158, with a maximum data rate of approximately 6.2Gbps. The actual maximum application throughput during the worst-case load is slightly over half of the storage array's theoretical maximum. Given the workload of an actual deployment using Milestone XProtect, the E-Series storage array exceeds the application performance requirements.

I/O Latency

In the same manner that throughput varies at different times of day due to workload changes when tiered storage is implemented, so does the observed latency for reads and writes to the respective volumes (LUNs). Milestone utilizes CONNEX International as a third-party testing agency for validating storage vendors with the XProtect product. The CONNEX test plan states a goal of less than 0.1% frame loss and write latency of less than 200ms between the recording server and the storage array. The recording server RACK-SVR-37 manages 48 simulated Axis M3204 cameras configured for 1280x720p at 30 frames per second, 30% compression, with RTP/UDP transport. The average data rate for each of the 48 cameras is approximately 2.6Mbps.

The VMware vSphere client is used to monitor the real-time data for the storage path for LUNs 6 (VOL_ARCHIVE_6) and 16 (VOL_LIVE_6) on this recording server. The one-hour span on the X-axis of the chart includes the entire bottom half of the hour, when only recording and viewing are active, as well as portions of the top half of two hours, when the archive function is active. This is shown in Figure 39.

Figure 39) I/O latency RACK-SVR-37.

One key concept derived from this chart is that average latency is not a useful metric when the range includes both the top half and the bottom half of the hour. The workloads are very different during those two time periods, so the average latency value is skewed. However, the chart shows the maximum latency, and for both LUNs 6 and 16 the reported write latency is less than 50ms. During this interval, no buffer overflows representing frame loss were reported in the XProtect manager system log. Given this validation, the E-Series latency performance is well within the partner specifications for latency under a real-time workload.

12 Other Performance Considerations

12.1 Performance Validation: Grooming

The workload changes when the archive function is active compared to the normal recording and playback workload. This archive function is specific to OnSSI Ocularis and Milestone XProtect because they share the same recording server code base. Other video management applications, such as Verint Nextiva and Genetec Omnicast, do not implement a multistage storage architecture. Video files are written to a volume (LUN) and then subsequently deleted at the expiration of the retention period or when the defined storage reaches a full condition. Although the tiered storage design is commonly deployed, it is not a requirement for OnSSI Ocularis ES and Milestone XProtect Corporate.

The Milestone XProtect System Migration Guide: Migration from XProtect Enterprise to XProtect Corporate states the following:

"Basically, archiving is not necessarily a must when using XProtect Corporate. In case the hard disks you have allocated for the live database are fast enough and able to contain the expected amount of data, the system can run without archiving. This is possible due to the automatic 1 hour segment division of the live database, which keeps a potential database repair after a failure as short as possible, as only the last (hour) segment of the database needs to be repaired."

The advantages of not implementing a separate live database and one or more archive locations are more efficient use of the storage array and simplicity of deployment.

A video surveillance configuration using the E2660 was tested with 900GB 10,000 RPM SAS drives for the recording volume (LUN) and 3TB 7,200 RPM NL-SAS drives for the archive volumes (LUNs). Another video surveillance configuration using the E5460 was tested using 3TB 7,200 RPM NL-SAS drives, with the recording and archive locations on the same volume (LUN) and with the 3TB drives on separate volumes (LUNs). Using the 900GB 10,000 RPM SAS drives for the recording volume (LUN) might be desired when deploying the solution in the gaming market, where there is a high degree of forensic analysis.

The physical-security integrator might, however, choose to implement the tiered storage approach to take advantage of the additional features of archiving: digital signing, encryption, and grooming of video. Grooming is the reduction in data rate through reductions in frame rate, compression, and other parameters, depending on the VMS features. To maximize storage efficiency, the E5460 may be deployed with all 3TB NL-SAS disks.
In addition to the existing validation tests of video surveillance solutions, a validation of tiered storage using a RAID 1 recording location with 3TB disks is shown in the following section.

Recording on 3TB NL-SAS Using RAID 1

This performance validation assumes the use of a recording volume (LUN) in a traditional RAID 1 volume group of two 3TB disks. Archive volume groups are also configured with 3TB NL-SAS disks. The VMS is configured to groom (lower) the frame rate during each archive process. This storage conservation technique allows the retention of video archives for a longer period of time, albeit at a lower frame rate.

This recording volume (LUN) has sufficient capacity to maintain 24 hours of video from 32 Axis P1346 cameras at 1920x1080 (full HDTV), 30 frames per second, using 30% compression and H.264 RTP/UDP transport. This configuration generates approximately 4.68Mbps per camera, with an aggregate load on the recording server of approximately 150Mbps. The recording server is a Milestone XProtect Corporate recording server (RACK-SVR-7) installed in an ESXi 5.1 virtual machine on a Cisco UCS C220-M2 (Intel Xeon E5504, 2GHz) server. The virtual machine is allocated 4GB of memory and 4 virtual CPUs.

The hourly archive function reduces (grooms) the frame rate from 30 frames per second to 18 frames per second when video moves to the first archive location, and subsequently grooms the video from 18 frames per second to 5 frames per second when it moves from the first archive location to the second archive location. This configuration is shown in Figure 39.
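The storage effect of this grooming schedule can be sketched by assuming that the H.264 data rate scales roughly linearly with frame rate; that linearity is an approximation (actual savings depend on GOP structure and scene content):

```python
# Per-tier data rates for the grooming schedule described above:
# 30 fps recording, groomed to 18 fps, then to 5 fps.
cameras = 32
mbps_at_30fps = 4.68            # Axis P1346 1080p rate from the text
tiers = {"recording (30 fps)": 30, "archive 1 (18 fps)": 18, "archive 2 (5 fps)": 5}

for name, fps in tiers.items():
    aggregate_mbps = mbps_at_30fps * fps / 30 * cameras
    tb_per_day = aggregate_mbps * 1e6 / 8 * 86400 / 1e12
    print(f"{name}: {aggregate_mbps:6.1f} Mbps, {tb_per_day:.2f} TB/day")
```

At 30 fps the 32-camera aggregate lands on the roughly 150Mbps figure quoted for RACK-SVR-7; under the linear-scaling assumption, each groomed tier then grows proportionally more slowly, which is what extends the effective retention period.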

Figure 39) Archiving with grooming.

Both DDP and traditional volume groups are configured for the archive volumes (LUNs). The graph in Figure 40 illustrates the write latency and write rate before, during, and after the scheduled archive with grooming.

Figure 40) Recording latency and rate for 3TB RAID 1.

During the periods before and after the archive function (at the top of the hour), the maximum write data rate per second is approximately 163Mbps. This is consistent with the average ingress video data rate to this server of approximately 150Mbps. The average write latency is 33ms, with a maximum of 122ms. No buffer overflows were logged for the charted time period of the archive function.

The first archive volume (LUN), the target for the groomed video files, is in a DDP. The pool is composed of 20 3TB drives; the volume in the pool is 28TB. To verify the archive process and observe the I/O activity, refer to Figure 41.

Figure 41) Recording latency and rate for DDP archive volume.

In this example, the maximum write data rate observed is approximately 80Mbps, and the write latency is in the 12ms range, with a maximum latency of 56ms. From these observed data rates, it can be concluded that the additional CPU workload of the grooming process reduces the data transfer rate between the recording and archive volumes (LUNs), when compared to a tiered storage configuration that does not implement grooming.

Performance Summary

In section 11, the performance of a video surveillance solution using the E2660 was compared to the marketing throughput estimate for the E2660 storage array. The tested configuration validated the product marketing throughput estimate. It was also validated that the expected workload for a typical deployment of video archiving and viewing, using a common retention period, was only a fraction of the product marketing throughput estimate. In the last two benchmarks, it was validated that the tiered storage function, which has an increased performance requirement, is well within the capabilities of the E-Series storage array.

13 Video Management System Partners

This chapter provides URLs linking to product marketing and test reports that describe the performance validation conducted both internally at NetApp and by the partner. A list of all partners and related testing can be found at this location.

13.1 Milestone XProtect Corporate

Milestone has validated the performance characteristics of the E5400 deployment at its corporate headquarters facility as part of the E-Series Video Management System Validation Program. The results of that validation and links to the Milestone and NetApp solution integration are available at:
- Video Surveillance Storage and Milestone XProtect: NetApp Video Surveillance Storage Solution
- Milestone XProtect Corporate on NetApp Video Surveillance Storage Solution Application Test Report

13.2 On-Net Surveillance Systems Inc. Ocularis ES (OnSSI)

The technical report highlighting the results of testing the OnSSI Ocularis ES application with the NetApp video surveillance storage solution is available at:
- OnSSI Ocularis ES on NetApp Video Surveillance Storage Solution Application Test Report

13.3 Verint Nextiva

The technical report detailing the performance characteristics of Verint Nextiva is available at:
- Video Surveillance Storage and Verint Nextiva

13.4 Genetec Omnicast

Infrastructure equipment was tested by NetApp with a Genetec Infrastructure self-certification package. Omnicast version 4.8 was validated. Data rates shown in Table 14 are from a single server measuring write throughput in Mbps.

Table 14) Genetec Omnicast version 4.8 validation.

NetApp                 Design Guidelines: Continuous Recording   Design Guidelines: Record on Motion
E2660 iSCSI (1Gbps)    300 cameras/350Mbps                       93Mbps
E5400 FC (8Gbps)       300 cameras/353Mbps                       185Mbps

Enabling server-side motion detection increases the workload of the recording server, reducing the effective throughput.

14 Software Releases

NetApp tested various VMS applications as previously described.
This chapter lists the software releases for the components used in the solution validation at the time of this testing and provides a link to all the caveats identified during the validation.

14.1 Solution Software Releases Validated

The software releases listed in Table 15 were used in scalability and performance validation testing.

Table 15) Software releases validated.

Component                                   Software Release Validated

E-Series
  E5400 controller firmware                 Current package version:
  E2600 controller firmware                 Current package version:
  SANtricity ES Storage Manager             Management station version G0.32

Network Related
  Cisco Nexus NX-OS                         System version: 5.0(3)U5(1a)
  Cisco Catalyst IOS                        cat4500-entservicesk9-m, Version 12.2(54)SG1

Video Surveillance Cameras and Software
  Axis virtual camera
  Axis M
  Axis P
  Axis Q
  OnSSI Ocularis                            Ocularis ES v3.5
  Milestone XProtect                        XProtect Corporate 5.0b
  Genetec Omnicast                          Omnicast 4.8
  Verint Nextiva                            n/a

Server Related
  Cisco Integrated Management Console       1.5(b)
  Cisco UCS C220-M3 BIOS                    C220M c
  Broadcom 5709 quad-port Ethernet adapter  A0906GT
  Intel I350 LOM Ethernet adapter
  UCSC 2008M-8e SAS mezzanine card LSI SAS E    Firmware: , BIOS:
  VMware ESXi and vSphere client
  Operating system (video management applications)  Windows Server 2008 R2 SP1 64-bit Standard ( )
  Operating system (client viewing station)         Windows 7 Professional ( SP1)

14.2 Solution Caveats

For a current list of solution caveats, contact NetApp Global Support or refer to the NetApp Global Support Wiki.

This chapter provides detailed steps on how to configure an example video surveillance storage solution, with NetApp E2660 storage and Cisco UCS servers/switches in a virtualized environment, using VMware

ESXi. NetApp tested the following configuration in our lab in RTP. This chapter is separated into the following principal sections:
- Site-specific parameters
- IP addressing examples
- Cisco Nexus 3048 switches
- E-Series storage array
- Cisco UCS servers and ESXi

14.3 Site-Specific Parameters

The example configurations illustrate a sample deployment in a lab-tested and verified environment. The following parameters need to be identified and substituted in the sample configurations:
- User name and password
- Host name
- Management IP address and netmask
- Telnet and/or SSH enabled, and key type and length
- MOTD banner
- NTP server IP addresses and VRF used
- Gateway address for the management interface
- Unused port VLAN number
- Native VLAN number
- Device management VLAN number, SVI IP addresses, netmask, and gateway (HSRP) address
- Addressing scheme (for servers, E2660 management ports)
- vPC keepalive VLAN number
- Video ingress VLAN number, SVI IP addresses, netmask, and gateway (HSRP) address
- Addressing scheme (for servers)
- vPC VRF name, domain number, IP addresses, and netmask for the vPC SVI
- IP address of the FTP server, user name, and password to save configuration files
- Optional: loopback IP addresses and netmask; L3 uplink IP addresses and netmask; EIGRP process tag, autonomous system (AS), hello interval, and hold time; license files

14.4 IP Addressing Examples

The IP addresses used in this document are special-use IPv4 addresses as defined in RFC 5735 and are not routable addresses on the Internet. These addresses are for illustration purposes only and must be replaced with IP addresses assigned by the customer for this deployment.

Sample IP Address Allocation for VIDEO_INGRESS Network

The following output represents a sample IP addressing scheme for the VIDEO_INGRESS network:

IP Address    Description                   Switch and Port Number

/24 mask
VSS VLAN
VSS VLAN
Default Gateway (HSRP address)    Both switches
SVR1 RACK-SVR-10 Base             port-channel1 - Members: Eth1/1, Eth1/
SVR1 RACK-SVR-11 Manager
SVR1 RACK-SVR-12 Failover
SVR1 RACK-SVR-13 Failover
SVR2 RACK-SVR-20 Recorder         port-channel2 - Members: Eth1/25, Eth1/
SVR2 RACK-SVR-21 Recorder
SVR2 RACK-SVR-22 Recorder
SVR2 RACK-SVR-23 Failover
SVR3 RACK-SVR-30 Recorder         port-channel3 - Members: Eth1/5, Eth1/
SVR3 RACK-SVR-31 Recorder
SVR3 RACK-SVR-32 Recorder
SVR3 RACK-SVR-33 Failover
SVR4 RACK-SVR-40 Recorder         port-channel4 - Members: Eth1/29, Eth1/
SVR4 RACK-SVR-41 Recorder
SVR4 RACK-SVR-42 Recorder
SVR4 RACK-SVR-43 Recorder

Device Management VLAN Sample IP Addressing and Port Assignment

IP Address    Description                   Switch and Port Number

/
VSS
VSS
HSRP address (default gateway)
E2660-A:DEVICE_MANAGEMENT         VSS Ethernet1/
E2660-B:DEVICE_MANAGEMENT         VSS Ethernet1/14
SVR1 vmnic1                       VSS Ethernet1/
SVR1 vmnic1 RACK-SVR
SVR1 vmnic1 RACK-SVR
SVR1 vmnic1 RACK-SVR
SVR1 vmnic1 RACK-SVR
SVR1 CIMC:DEVICE_MANAGEMENT       VSS Ethernet1/
SVR1 vmnic0 VMkernel port         VSS Ethernet1/18
SVR2 vmnic1                       VSS Ethernet1/
SVR2 vmnic1 RACK-SVR
SVR2 vmnic1 RACK-SVR
SVR2 vmnic1 RACK-SVR
SVR2 vmnic1 RACK-SVR
SVR2 CIMC:DEVICE_MANAGEMENT       VSS Ethernet1/
SVR2 vmnic0 VMkernel port         VSS Ethernet1/18
SVR3 vmnic1                       VSS Ethernet1/
SVR3 vmnic1 RACK-SVR
SVR3 vmnic1 RACK-SVR
SVR3 vmnic1 RACK-SVR
SVR3 vmnic1 RACK-SVR
SVR3 CIMC:DEVICE_MANAGEMENT       VSS Ethernet1/
SVR3 vmnic0 VMkernel port         VSS Ethernet1/24
SVR4 vmnic1                       VSS Ethernet1/
SVR4 vmnic1 RACK-SVR
SVR4 vmnic1 RACK-SVR
SVR4 vmnic1 RACK-SVR
SVR4 vmnic1 RACK-SVR
SVR4 CIMC:DEVICE_MANAGEMENT       VSS Ethernet1/
SVR4 vmnic0 VMkernel port         VSS Ethernet1/24

Available Ports

VSS    Eth1/28, Eth1/30, Eth1/32, Eth1/34, Eth1/36
VSS    Eth1/28, Eth1/30, Eth1/32, Eth1/34, Eth1/
Service Technician Laptop to Service Technician Laptop

14.5 Cisco Nexus 3048 Switches

This section provides steps to implement a sample configuration on a pair of Cisco Nexus 3048 switches to provide network access for recording server video ingress, as well as management network connectivity to the servers and E-Series controllers. Also provisioned are management ports available to a service technician for configuration and troubleshooting.

Console, Management, and Power-On of Cisco Nexus 3048 Switches

Figure 42 illustrates the console and management Ethernet interfaces of the Cisco Nexus 3048 switch.

Figure 42) Cisco Nexus 3048 switches console and management interfaces.

To set up Cisco Nexus 3048 switches 1 and 2, complete the following steps:
1. Power on the switch.
2. Use HyperTerm or another terminal emulator configured and attached to the console, using these settings: none 1 with flowcontrol hardware.
3. Run the initial setup of each switch, substituting the appropriate values for the switch-specific information as follows:

Abort Power On Auto Provisioning and continue with normal setup?(yes/no)[n]: yes
---- System Admin Account Setup ----
Do you want to enforce secure password standard (yes/no): no
Enter the password for "admin": <<var_admin_passwd>>
Confirm the password for "admin": <<var_admin_passwd>>
---- Basic System Configuration Dialog ----
Would you like to enter the basic configuration dialog (yes/no): yes
Create another login account (yes/no) [n]:
Configure read-only SNMP community string (yes/no) [n]:

Configure read-write SNMP community string (yes/no) [n]:
Enter the switch name : <<var_switch_hostname>>
Continue with Out-of-band (mgmt0) management configuration? (yes/no) [y]:
Mgmt0 IPv4 address : <<var_mgmt0_ip_address>>
Mgmt0 IPv4 netmask : <<var_mgmt0_netmask>>
Configure the default gateway for mgmt? (yes/no) [y]: y
enter default gateway address <<var_mgmt0_gateway>>
Enable the telnet service? (yes/no) [n]:
Enable the ssh service? (yes/no) [y]: y
Type of ssh key you would like to generate (dsa/rsa) : rsa
Number of key bits < > : 1024
Configure the ntp server? (yes/no) [n]: n
Configure CoPP System Policy Profile ( default / l2 / l3 ) [default]:

As an example, the setup dialog applies the following configuration commands:

switchname VSS
interface mgmt0
  ip address
  no shutdown
exit
vrf context management
  ip route /
exit
no telnet server enable
ssh key rsa 1024 force
ssh server enable
policy-map type control-plane copp-system-policy ( default )

Note: It is assumed that the management ports of both switches are connected to the customer management network.

Verify and Upgrade NX-OS

The video surveillance storage solution was tested with system version 5.0(3)U5(1a). Verify the version that was shipped with the switch and upgrade accordingly:

VSS# show version
Cisco Nexus Operating System (NX-OS) Software
TAC support:
Copyright (c) , Cisco Systems, Inc. All rights reserved.
The copyrights to certain works contained herein are owned by other third parties and are used and distributed under license. Some parts of this software are covered under the GNU Public License.
A copy of the license is available at Software BIOS: version loader: version N/A kickstart: version 5.0(3)U5(1a) system: version 5.0(3)U5(1a) power-seq: Module 1: version v4.4 BIOS compile time: 08/25/2011 kickstart image file is: bootflash:/n3000-uk9-kickstart u5.1a.bin kickstart compile time: 11/21/2012 1:00:00 [11/21/ :17:19] system image file is: bootflash:/n3000-uk u5.1a.bin system compile time: 11/21/2012 1:00:00 [11/21/ :03:31] The kickstart and system images are stored on an FTP server at the IP address in the DEVICE_MANAGEMENT VLAN (default VRF), in the home directory of the user download, and the password is Netapp123. Edit and issue these commands and then respond to the prompts appropriately. copy ftp://downloads:netapp123@ /n3000-uk u5.1a.bin bootflash: copy ftp://downloads:netapp123@ /n3000-uk9-kickstart u5.1a.bin bootflash: 86 Video Surveillance Solutions Using NetApp E-Series Storage
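The verify-and-upgrade decision above can be sketched as a small check against the validated release. The parsing below is an assumption about the shape of the `show version` output excerpted above; it is an illustrative helper, not a NetApp or Cisco tool.

```python
# Sketch: decide whether a switch needs the validated NX-OS release by
# parsing "show version" output. VALIDATED comes from the text; the
# line format ("system: version ...") is assumed from the excerpt above.
VALIDATED = "5.0(3)U5(1a)"

def needs_upgrade(show_version_output, validated=VALIDATED):
    for line in show_version_output.splitlines():
        line = line.strip()
        if line.startswith("system:"):
            # e.g. "system: version 5.0(3)U5(1a)"
            running = line.split("version", 1)[1].strip()
            return running != validated
    raise ValueError("no 'system: version' line found")
```

Running this against the output of each switch before issuing the copy and install commands avoids an unnecessary image transfer on switches already at the validated release.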

87 Install the images specifying the file names for the kickstart and system images. install all kickstart bootflash:n3000-uk9-kickstart u5.1a.bin system n3000- uk u5.1a.bin Enable Features From the configuration mode (config t), run the following commands on each switch: cfs eth distribute feature eigrp feature interface-vlan feature hsrp feature lacp feature vpc Note: The EIGRP feature is only applicable if configuring a routed access layer using EIGRP as the routing protocol. For more information, see High Availability Campus Network Design Routed Access Layer using EIGRP or OSPF. Install Licenses (Optional) If features such as EIGRP are required for the deployment, install the appropriate license files. For example, both LAN base and LAN enterprise services licenses are required for full layer 3 support in the Cisco Nexus No license files are required if deploying a video surveillance solution in a layer 2 environment. stl3048-f5-1# show license usage Feature Ins Lic Status Expiry Date Comments Count LAN_BASE_SERVICES_PKG Yes - Unused Never - LAN_ENTERPRISE_SERVICES_PKG Yes - Unused Never - Configure Loopback Interfaces and EIGRP (Optional) From the configuration mode (config t), run the following commands on switch1: interface loopback0 ip address /31 ip router eigrp 64 router eigrp 64 autonomous-system 64 address-family ipv4 unicast From the configuration mode (config t), run the following commands on switch2: interface loopback0 ip address /31 ip router eigrp 64 router eigrp 64 autonomous-system 64 address-family ipv4 unicast Configure Network Time Protocol (NTP) and Time Zone A reliable and accurate time source is critical for video surveillance deployments. Identify and configure both switches with one or more NTP servers and configure the time zone of the switches. From the configuration mode (config t), edit as appropriate, then run the following commands on each switch: 87 Video Surveillance Solutions Using NetApp E-Series Storage

88 ntp server use-vrf management clock timezone est -5 0 clock summer-time edt Configure MOTD Banner Configure the appropriate message of the day (MOTD) banner as per the customer security policies. An example of an MOTD banner is shown below. From the configuration mode (config t), edit as appropriate, then run the following commands on each switch: banner motd # UNAUTHORIZED ACCESS TO THIS NETWORK DEVICE IS PROHIBITED. You must have explicit permission to access or configure this device. All activities performed on this device are logged and violations of this policy may result in disciplinary action. Define VLANs To define and describe the layer 2 VLANs for Cisco Nexus 3048 switches 1 and 2, complete the following step: From the configuration mode (config t), run the following commands on each switch: vlan 2 name UNUSED_PORTS vlan 3 name NATIVE_VLAN vlan 7 name DEVICE_MANAGEMENT vlan 58 name vpc_keepalive vlan 2020 name VIDEO_INGRESS Apply Default Port Configuration Configure all ports for VLAN 2 (UNUSED_PORTS) and, optionally, shut down all ports. Active ports are reconfigured and can be enabled (no shutdown) as required. This prevents a rogue user from connecting a device to a port and gaining access to the network. From the configuration mode (config t), run the following commands on each switch: int ethernet 1/1-52 switchport access vlan 2 shutdown Note: At this point all ports are disabled; port groups are enabled (no shutdown) later in this procedure. 88 Video Surveillance Solutions Using NetApp E-Series Storage
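The VLAN definitions and unused-port lockdown above can be expressed as a small config generator. The VLAN IDs/names and the 52-port count come from the text; the helper itself is illustrative, not part of the solution tooling.

```python
# Sketch: emit the VLAN definitions and the default port lockdown shown
# above for a 52-port Cisco Nexus 3048.
VLANS = {2: "UNUSED_PORTS", 3: "NATIVE_VLAN", 7: "DEVICE_MANAGEMENT",
         58: "vpc_keepalive", 2020: "VIDEO_INGRESS"}

def vlan_definitions(vlans=VLANS):
    lines = []
    for vid in sorted(vlans):
        lines += [f"vlan {vid}", f" name {vlans[vid]}"]
    return lines

def port_lockdown(ports=52, unused_vlan=2):
    # Park every port in the unused VLAN and shut it down; active port
    # groups are re-enabled (no shutdown) later in the procedure.
    return [f"int ethernet 1/1-{ports}",
            f" switchport access vlan {unused_vlan}",
            " shutdown"]

config = vlan_definitions() + port_lockdown()
```

Generating the lockdown block rather than typing it per port keeps the "everything disabled until explicitly enabled" posture consistent across both switches.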

89 Cisco Nexus 3048 Cabling Schematic Refer to Figure for help in completing the cabling of the Cisco Nexus 3048 switches to the Cisco UCS servers and E-Series housed in the video surveillance solution. Figure 43) Cisco Nexus 3048 switch cabling schematic diagram. Note: These switches connect to the customer distribution/core layer by either a layer 2 (switched) or layer 3 (routed) uplink. The vpc keepalive links are on ports Ethernet 1/2 and Ethernet 1/48, and the vpc peer links are Twinax SFP+ 10Gbit connections between Ethernet 1/49 and E1/50. The green circles on ports Ethernet 1/28 to Ethernet 1/36 are configured for the DEVICE_MANAGEMENT VLAN, but are not cabled. They are for on-site service use. Configure Virtual PortChannel (vpc) From the configuration mode (config t), run the following commands on switch1: vrf context vpc_peer-keepalive interface Vlan58 no shutdown description vpc_peer-keepalive vrf member vpc_peer-keepalive ip address /30 vpc domain 58 role priority 11 peer-keepalive destination source vrf vpc_peer-keepalive interface port-channel58 description vpc_peer-keepalive switchport access vlan 58 no negotiate auto interface port-channel59 description vpc peer link switchport mode trunk vpc peer-link switchport access vlan 3 89 Video Surveillance Solutions Using NetApp E-Series Storage

90 switchport trunk native vlan 3 switchport trunk allowed vlan 3,7,2020 spanning-tree port type network no negotiate auto interface Ethernet1/2 description vpc_peer-keepalive switchport access vlan 58 channel-group 58 mode active interface Ethernet1/48 description vpc_peer-keepalive switchport access vlan 58 channel-group 58 mode active interface Ethernet1/49 description vpc peer link switchport mode trunk switchport access vlan 3 switchport trunk native vlan 3 switchport trunk allowed vlan 3,7,2020 spanning-tree port type network channel-group 59 mode active interface Ethernet1/50 description vpc peer link switchport mode trunk switchport access vlan 3 switchport trunk native vlan 3 switchport trunk allowed vlan 3,7,2020 spanning-tree port type network channel-group 59 mode active From the configuration mode (config t), run the following commands on switch2: vrf context vpc_peer-keepalive interface Vlan58 no shutdown description vpc_peer-keepalive vrf member vpc_peer-keepalive ip address /30 vpc domain 58 role priority 12 peer-keepalive destination source vrf vpc_peer-keepalive interface port-channel58 description vpc_peer-keepalive switchport access vlan 58 no negotiate auto interface port-channel59 description vpc peer link switchport mode trunk vpc peer-link switchport access vlan 3 switchport trunk native vlan 3 switchport trunk allowed vlan 3,7,2020 spanning-tree port type network no negotiate auto interface Ethernet1/2 description vpc_peer-keepalive switchport access vlan 58 channel-group 58 mode active interface Ethernet1/48 description vpc_peer-keepalive 90 Video Surveillance Solutions Using NetApp E-Series Storage

91 switchport access vlan 58 channel-group 58 mode active interface Ethernet1/49 description vpc peer link switchport mode trunk switchport access vlan 3 switchport trunk native vlan 3 switchport trunk allowed vlan 3,7,2020 channel-group 59 mode active interface Ethernet1/50 description vpc peer link switchport mode trunk switchport access vlan 3 switchport trunk native vlan 3 switchport trunk allowed vlan 3,7,2020 channel-group 59 mode active Configure Server Management Ports Each Cisco UCS C220-M3 server has seven network connections. There are three management network ports on the motherboard and a quad-port PCI network adapter. Figure illustrates the position of these network ports. Figure 44) Cisco UCS C220-M3 chassis. The slot labeled vmnic contains a quad-port Broadcom Corporation Broadcom NetXtreme II BCM Base-T Ethernet controller. The slot labeled SAS contains the LSI SAS E HBA. The port labeled 1 is a 10/100/1000 Ethernet dedicated management port, for CIMC. The ports labeled 2 and 3 are dual 1-Gb Ethernet ports. Port 2 is vmnic0 (vswitch0, VMkernel port), and Port 3 is vmnic1 (vswitch2, SERVER_MGMT) to the ESXi host. The three management ports are connected to switch ports in the management VLAN. They are identified in the switch interface description using Server 1 as an example: Server 1 - CIMC:DEVICE_MANAGEMENT Server 1 - vmnic0:device_management Server 1 - vmnic1:device_management Note: Servers 1 and 3 are connected on switch 1, and Servers 2 and 4 are connected to switch Video Surveillance Solutions Using NetApp E-Series Storage
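The management cabling plan above (odd-numbered servers on switch 1, even-numbered servers on switch 2, three ports per server) can be expressed as a mapping. The port numbering follows the switch configurations in this section; the helper itself is an illustrative sketch, not part of the solution.

```python
# Sketch of the management-port cabling plan: servers 1 and 3 land on
# switch 1, servers 2 and 4 on switch 2, each using three even-numbered
# ports for CIMC, vmnic0, and vmnic1.
PORT_ROLES = ["CIMC", "vmnic0", "vmnic1"]

def mgmt_port_map(servers=(1, 2, 3, 4)):
    plan = {}  # (switch number, interface) -> port description
    for server in servers:
        switch = 1 if server % 2 else 2    # odd servers -> switch 1, even -> switch 2
        base = 16 if server <= 2 else 22   # first server on a switch starts at Eth1/16
        for i, role in enumerate(PORT_ROLES):
            iface = f"Ethernet1/{base + 2 * i}"
            plan[(switch, iface)] = f"SERVER {server} - {role}:DEVICE_MANAGEMENT"
    return plan

plan = mgmt_port_map()
```

A mapping like this doubles as documentation: it can be diffed against the interface descriptions configured on the switches to catch cabling mistakes.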

92 From the configuration mode (config t), run the following commands on switch 1: interface Ethernet1/16 description SERVER 1 - CIMC:DEVICE_MANAGEMENT switchport access vlan 7 spanning-tree port type edge spanning-tree bpduguard enable interface Ethernet1/18 description SERVER 1 - vmnic0:device_management switchport access vlan 7 spanning-tree port type edge spanning-tree bpduguard enable interface Ethernet1/20 description SERVER 1 - vmnic1:device_management switchport access vlan 7 spanning-tree port type edge spanning-tree bpduguard enable interface Ethernet1/22 description SERVER 3 - CIMC:DEVICE_MANAGEMENT switchport access vlan 7 spanning-tree port type edge spanning-tree bpduguard enable interface Ethernet1/24 description SERVER 3 - vmnic0:device_management switchport access vlan 7 spanning-tree port type edge spanning-tree bpduguard enable interface Ethernet1/26 description SERVER 3 - vmnic1:device_management switchport access vlan 7 spanning-tree port type edge spanning-tree bpduguard enable Configure available service ports on the switch for the service technicians laptops: int e 1/28, e 1/30, e 1/32, e1/34, e1/36 description DEVICE_MANAGEMENT switchport access vlan 7 spanning-tree port type edge spanning-tree bpduguard enable From the configuration mode (config t), run the following commands on switch 2: interface Ethernet1/16 description SERVER 2 - CIMC:DEVICE_MANAGEMENT switchport access vlan 7 spanning-tree port type edge spanning-tree bpduguard enable interface Ethernet1/18 description SERVER 2 - vmnic0:device_management switchport access vlan 7 spanning-tree port type edge spanning-tree bpduguard enable interface Ethernet1/20 description SERVER 2 - vmnic1:device_management switchport access vlan 7 spanning-tree port type edge spanning-tree bpduguard enable interface Ethernet1/22 description SERVER 4 - CIMC:DEVICE_MANAGEMENT switchport access vlan 7 92 Video Surveillance Solutions Using NetApp E-Series Storage

93 spanning-tree port type edge spanning-tree bpduguard enable interface Ethernet1/24 description SERVER 4 - vmnic0:device_management switchport access vlan 7 spanning-tree port type edge spanning-tree bpduguard enable interface Ethernet1/26 description SERVER 4 - vmnic1:device_management switchport access vlan 7 spanning-tree port type edge spanning-tree bpduguard enable Configure available service ports on the switch for the service technicians laptops: int e 1/28, e 1/30, e 1/32, e1/34, e1/36 description DEVICE_MANAGEMENT switchport access vlan 7 spanning-tree port type edge spanning-tree bpduguard enable Configure E-Series Management Ports From the configuration mode (config t), run the following commands on switch 1: interface Ethernet1/14 description E2660-A:DEVICE_MANAGEMENT switchport access vlan 7 spanning-tree port type edge spanning-tree bpduguard enable From the configuration mode (config t), run the following commands on switch 2: interface Ethernet1/14 description E2660-B:DEVICE_MANAGEMENT switchport access vlan 7 spanning-tree port type edge spanning-tree bpduguard enable Configure Device Management Switch Virtual Interfaces On each switch, configure an Ethernet SVI on the management VLAN. From the configuration mode (config t), run the following commands on switch 1. If using the LAN_ENTERPRISE_SERVICES_PKG license and routed (layer 3) access layer: interface Vlan7 no shutdown description DEVICE_MANAGEMENT ip address /24 ip router eigrp 64 hsrp 1 preempt delay reload 120 priority 110 ip If using a switched (layer 2) access layer: interface Vlan7 no shutdown description DEVICE_MANAGEMENT ip address /24 From the configuration mode (config t), run the following commands on switch Video Surveillance Solutions Using NetApp E-Series Storage

94 If using the LAN_ENTERPRISE_SERVICES_PKG license and routed server access layer: interface Vlan7 no shutdown description DEVICE_MANAGEMENT ip address /24 ip router eigrp 64 hsrp 1 preempt delay reload 120 priority 120 ip If using a layer 2 switched access layer: interface Vlan7 no shutdown description DEVICE_MANAGEMENT ip address /24 Configure Video Ingress Switch Virtual Interfaces On each switch, configure an Ethernet SVI on the video ingress VLAN. From the configuration mode (config t), run the following commands on switch 1. If using the LAN_ENTERPRISE_SERVICES_PKG license and routed (layer 3) access layer: interface Vlan2020 no shutdown description VIDEO_INGRESS ip address /24 ip router eigrp 64 hsrp 1 preempt delay reload 120 priority 110 ip If using a switched (layer 2) access layer: interface Vlan2020 no shutdown description VIDEO_INGRESS ip address /24 From the configuration mode (config t), run the following commands on switch 2. If using the LAN_ENTERPRISE_SERVICES_PKG license and routed (layer 3) access layer: interface Vlan2020 no shutdown description VIDEO_INGRESS ip address /24 ip router eigrp 64 hsrp 1 preempt delay reload 120 priority 120 ip If using a switched (layer 2) access layer: interface Vlan2020 no shutdown description VIDEO_INGRESS ip address /24 Configure Server Video Ingress Ports The quad-port Broadcom Corporation Broadcom NetXtreme II BCM Base-T is used for video ingress. Two ports from each server are attached to Cisco 3048 switch 1, and two ports are attached to Cisco 3048 switch 2. These four ports are configured as aggregated links in a PortChannel 94 Video Surveillance Solutions Using NetApp E-Series Storage

95 (EtherChannel) from the ESXi host perspective. The PortChannel configuration on the Cisco Nexus 3048 switches is a vpc. The four ports are identified from left to right as vmnic5, vmnic4, vmnic3, and vmnic2 from the ESXi host perspective. Refer to Figure. From the configuration mode (config t), run the following commands on switch 1: interface port-channel1 description Server 1 vpc 1 switchport access vlan 2020 spanning-tree port type edge spanning-tree bpduguard enable no negotiate auto interface port-channel2 description Server 2 vpc 2 switchport access vlan 2020 spanning-tree port type edge spanning-tree bpduguard enable no negotiate auto interface port-channel3 description Server 3 vpc 3 switchport access vlan 2020 spanning-tree port type edge spanning-tree bpduguard enable no negotiate auto interface port-channel4 description Server 4 vpc 4 switchport access vlan 2020 spanning-tree port type edge spanning-tree bpduguard enable no negotiate auto interface Ethernet1/1 description Server 1 - vmnic5 switchport access vlan 2020 spanning-tree port type edge spanning-tree bpduguard enable channel-group 1 interface Ethernet1/5 description Server 3 - vmnic5 switchport access vlan 2020 spanning-tree port type edge spanning-tree bpduguard enable channel-group 3 interface Ethernet1/13 description Server 1 - vmnic3 switchport access vlan 2020 spanning-tree port type edge spanning-tree bpduguard enable channel-group 1 interface Ethernet1/17 description Server 3 - vmnic3 switchport access vlan 2020 spanning-tree port type edge spanning-tree bpduguard enable channel-group 3 interface Ethernet1/25 description Server 2 - vmnic5 95 Video Surveillance Solutions Using NetApp E-Series Storage

96 switchport access vlan 2020 spanning-tree port type edge spanning-tree bpduguard enable channel-group 2 interface Ethernet1/29 description Server 4 - vmnic5 switchport access vlan 2020 spanning-tree port type edge spanning-tree bpduguard enable channel-group 4 interface Ethernet1/37 description Server 2 - vmnic3 switchport access vlan 2020 spanning-tree port type edge spanning-tree bpduguard enable channel-group 2 interface Ethernet1/41 description Server 4 - vmnic3 switchport access vlan 2020 spanning-tree port type edge spanning-tree bpduguard enable channel-group 4 From the configuration mode (config t), run the following commands on switch 2: interface port-channel1 description Server 1 vpc 1 switchport access vlan 2020 spanning-tree port type edge spanning-tree bpduguard enable no negotiate auto interface port-channel2 description Server 2 vpc 2 switchport access vlan 2020 spanning-tree port type edge spanning-tree bpduguard enable no negotiate auto interface port-channel3 description Server 3 vpc 3 switchport access vlan 2020 spanning-tree port type edge spanning-tree bpduguard enable no negotiate auto interface port-channel4 description Server 4 vpc 4 switchport access vlan 2020 spanning-tree port type edge spanning-tree bpduguard enable no negotiate auto interface Ethernet1/1 description Server 1 - vmnic4 switchport access vlan 2020 spanning-tree port type edge spanning-tree bpduguard enable channel-group 1 interface Ethernet1/5 description Server 3 - vmnic4 96 Video Surveillance Solutions Using NetApp E-Series Storage

97 switchport access vlan 2020 spanning-tree port type edge spanning-tree bpduguard enable channel-group 3 interface Ethernet1/13 description Server 1 - vmnic2 switchport access vlan 2020 spanning-tree port type edge spanning-tree bpduguard enable channel-group 1 interface Ethernet1/17 description Server 3 - vmnic2 switchport access vlan 2020 spanning-tree port type edge spanning-tree bpduguard enable channel-group 3 interface Ethernet1/25 description Server 2 - vmnic4 switchport access vlan 2020 spanning-tree port type edge spanning-tree bpduguard enable channel-group 2 interface Ethernet1/29 description Server 4 - vmnic4 switchport access vlan 2020 spanning-tree port type edge spanning-tree bpduguard enable channel-group 4 interface Ethernet1/37 description Server 2 - vmnic2 switchport access vlan 2020 spanning-tree port type edge spanning-tree bpduguard enable channel-group 2 interface Ethernet1/41 description Server 4 - vmnic2 switchport access vlan 2020 spanning-tree port type edge spanning-tree bpduguard enable channel-group 4 Enable Interfaces From the configuration mode (config t), run the following commands on both switches: VIDEO_INGRESS: interface Eth1/1,Eth1/5,Eth1/13,Eth1/17,Eth1/25,Eth1/29,Eth1/37,Eth1/41,Eth1/49,Eth1/50 no shut interface po1, po2, po3, po4 no shut interface vlan 2020 no shut DEVICE_MANAGEMENT: interface Eth1/14, Eth1/16, Eth1/18, Eth1/20, Eth1/22, Eth1/24, Eth1/26, Eth1/28, Eth1/30, Eth1/32, Eth1/34, Eth1/36,Eth1/49, Eth1/50 no shut interface vlan 7 no shut vpc keepalive and vpc peer links: 97 Video Surveillance Solutions Using NetApp E-Series Storage

98 interface Eth1/2, Eth1/48 no shut interface po58,po59 no shut interface Ethernet1/49,Ethernet1/50 no shut Configure Layer 3 Uplinks (Optional) Use this procedure to configure layer 3 uplinks. 1. From the configuration mode (config t), run the following commands on switch 1: interface Ethernet1/51 description L3 UPLINK stl3048-f5-1 e1/51 no switchport ip address /30 ip router eigrp 64 ip hold-time eigrp 64 3 ip hello-interval eigrp From the configuration mode (config t), run the following commands on switch 2: interface Ethernet1/51 description L3 UPLINK stl3048-f5-2 e1/51 no switchport ip address /30 ip router eigrp 64 ip hold-time eigrp 64 3 ip hello-interval eigrp 64 1 Configure Layer 2 Uplinks Use this procedure to configure layer 2 uplinks. 1. From the configuration mode (config t), run the following commands on switch 1: interface port-channel10 description L2 Portchannel to CORE switchport mode trunk vpc 10 switchport access vlan 3 switchport trunk native vlan 3 switchport trunk allowed vlan 3,7,2020 spanning-tree port type network no negotiate auto interface Ethernet1/52 description L2 UPLINK stl3048 Eth1/49 switchport mode trunk switchport access vlan 3 switchport trunk native vlan 3 switchport trunk allowed vlan 3,7,2020 spanning-tree port type network channel-group 10 mode active 2. From the configuration mode (config t), run the following commands on switch 2: interface port-channel10 description L2 Portchannel to CORE switchport mode trunk vpc 10 switchport access vlan 3 switchport trunk native vlan 3 switchport trunk allowed vlan 3,7,2020 spanning-tree port type network no negotiate auto 98 Video Surveillance Solutions Using NetApp E-Series Storage

interface Ethernet1/52
 description L2 UPLINK stl3048 Eth1/50
 switchport mode trunk
 switchport access vlan 3
 switchport trunk native vlan 3
 switchport trunk allowed vlan 3,7,2020
 spanning-tree port type network
 channel-group 10 mode active

IP Multicast Configuration

Milestone XProtect Corporate and OnSSI Ocularis ES support IP multicast transport of video to client users. The Cisco Nexus 3048 switches have been validated to integrate with an IP multicast-enabled campus network. Because this configuration uses IP PIM sparse mode, two routers (for availability) in the network core should be configured as rendezvous points (RPs). RPs are used by senders to an IP multicast (IPmc) group to announce their existence and by receivers of IPmc packets to learn about new senders. The core routers are configured with IP PIM Auto-RP and an RP-mapping agent to arbitrate conflicts between the two RPs. The RP-mapping agent provides consistent group-to-RP mappings to all other routers in the IP PIM network.

The solution was validated to transport IP multicast packets sourced from recording servers on the VIDEO_INGRESS VLAN to clients configured on an access-layer switch also connected to the campus core network. Cisco NX-OS does not support PIM dense mode. In Cisco NX-OS, multicast is enabled only after the PIM feature is enabled on each router. PIM sparse mode is then enabled on the appropriate interfaces. On the video surveillance solution Cisco Nexus 3048 switches, PIM sparse mode should be enabled on the loopback interface, the VIDEO_INGRESS switch virtual interface (SVI), and the layer 3 uplinks. Also, the Auto-RP function should be enabled to listen and forward.
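How a router chooses among overlapping group-to-RP mappings can be sketched as follows. The most-specific-range rule below is a simplification of the full NX-OS selection logic, and the RP addresses and group ranges are placeholders, not the values from the validated deployment.

```python
import ipaddress

# Sketch: pick the rendezvous point for a multicast group from learned
# group-to-RP mappings -- the most specific matching range wins.
MAPPINGS = [
    ("10.0.0.1", "239.1.1.0/24"),  # hypothetical Auto-RP announcement
    ("10.0.0.2", "224.0.0.0/4"),   # hypothetical catch-all multicast range
]

def rp_for_group(group, mappings=MAPPINGS):
    best = None
    for rp, cidr in mappings:
        net = ipaddress.ip_network(cidr)
        if ipaddress.ip_address(group) in net:
            if best is None or net.prefixlen > best[1]:
                best = (rp, net.prefixlen)
    return best[0] if best else None
```

This mirrors what `show ip pim rp` reports: each RP is listed with its group ranges, and a group that matches a /24 range is mapped to that RP rather than to the RP advertising the broader range.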
From the configuration mode (config t), run the following commands on both switches: feature pim ip pim auto-rp listen forward ip pim log-neighbor-changes interface loopback0 ip pim sparse-mode interface Vlan2020 ip pim sparse-mode interface Ethernet1/51 ip pim sparse-mode After enabling IP multicast routing, the switches should form PIM neighbor relationships with the uplink switches and learn the RPs. Verify the PIM neighbors and PIM RP status as follows: VSS # show ip pim neighbor PIM Neighbor Status for VRF "default" Neighbor Interface Uptime Expires DR Bidir- BFD Priority Capable State Vlan :19:35 00:01:29 1 no n/a Ethernet1/51 06:21:06 00:01:33 1 no n/a VSS # show ip pim rp PIM RP Status Information for VRF "default" BSR disabled Auto-RP RPA: , uptime: 06:38:39, expires: 00:02:48 BSR RP Candidate policy: None BSR RP policy: None Auto-RP Announce policy: None 99 Video Surveillance Solutions Using NetApp E-Series Storage

Auto-RP Discovery policy: None
RP: , (0), uptime: 06:38:39, expires: 00:02:48, priority: 0, RP-source: (A), group ranges: /24
RP: , (0), uptime: 06:38:39, expires: 00:02:48, priority: 0, RP-source: (A), group ranges: /8

Save Configuration

Verify that the configuration has been saved to NVRAM and to an FTP server for each switch:

VSS # copy running-config startup-config
[########################################] 100%
VSS # copy running
Enter vrf (If no input, current vrf 'default' is considered): management
Password:
***** Transfer of file Completed Successfully *****

14.6 E-Series Storage Array

Configure E-Series Management Ports

Each E-Series controller has two Ethernet ports: Port 1 for management (left port) and Port 2 for service (right port). A few minutes after the storage array is powered on, the ports will default to: Controller A: and Controller B: and

Figure 45 depicts the location of the management ports on the E-Series controllers. The A controller is at the top, and the B controller is at the bottom. The green lines point to the management port (Port 1). For initial configuration, connect a laptop to one of the service ports (Port 2).

Figure 45) E-Series controllers and management ports.
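The initial-configuration step requires the laptop to share a subnet with the service port it connects to, which can be checked with a short sketch. The addresses and the /24 prefix length below are illustrative assumptions, not the controllers' factory defaults.

```python
import ipaddress

# Sketch: verify the laptop sits on the same subnet as the controller
# service port before attempting the SANtricity ES connection.
def same_subnet(laptop_ip, port_ip, prefixlen=24):
    net = ipaddress.ip_network(f"{port_ip}/{prefixlen}", strict=False)
    return ipaddress.ip_address(laptop_ip) in net
```

The same check applies later in the procedure, when the laptop moves to a switch port and takes a static address in the service technician range.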

Best Practice

A best practice is to leave Port 2 with the default values for a service technician to use locally. When the initial configuration is done, connect each Port 1 on both controllers to a network switch in the test environment topology.

The IP address of the first port can be changed using SANtricity ES from a laptop on the same subnet as the default IP address on Port 2. Run the SANtricity ES application. The Enterprise Management window (EMW) appears. For initial configuration, do the following:

1. Select Edit > Add Storage Array and select Out of Band Management. Enter the IP address for one management IP port (for example, ).
2. When you have connected to the controller, click the SANtricity ES Setup tab and select Configure Ethernet Management Ports to change the IP address of Port 1 on each controller.
3. After configuring the IP addresses of management Port 1 on both controllers, detach the laptop from Port 2 (the service technician port) and remove the storage array from the SANtricity EMW.
4. Connect the laptop to an available port on the appropriate network switch and create a static IP address on the laptop using an available service technician IP address (for example, ).
5. Use the SANtricity ES EMW to manually add the storage array, using the IP addresses assigned to Port 1 of controllers A and B.

Provision and Configure E-Series Volume Groups and Volumes

This section provides the steps to implement a sample configuration on an E-Series storage system to provide volumes as logical unit numbers (LUNs) for use by a typical video management software application. The example uses a system from actual testing done by NetApp using OnSSI Ocularis ES as the VMS. The volume groups and volumes required will vary depending on the VMS chosen and the design of the specific customer's deployment.
These volume groups, volumes, and LUN descriptions are for illustration purposes only and must be replaced with those required for the customer's deployment. The sample utilizes a pair of E2600 controllers and three 60-drive E-Series DE6600 chassis for a total of 180 drives: 150 are 3TB 7,200 RPM NL-SAS drives, and 30 are 900GB 10,000 RPM SAS drives.

E-Series Storage Array Configuration

1. Open the SANtricity ES Array Management Window (AMW). From the menu bar, select Storage Array > Change > Cache Settings. Make changes as shown in the following screenshot.

102 2. E-Series arrays are delivered from the factory with two provisioned LUNs. The first LUN (LUN 0) is an unnamed LUN that is mapped to the default group. This LUN mapping and the associated volume must be removed and deleted before creating customer-specific volumes and hosts. The second LUN is the access LUN (LUN 7), used to enable in-band management of the E-Series array. Out-of-band management is recommended for NetApp E-Series solutions. As a result, the access volume is not required, and the associated mapping to the default group should be removed. 3. First create volume groups, then volumes. In this sample configuration for video recording, three RAID 10 volume groups are created, each with up to four volumes. For archives, ten RAID 6 volume groups are configured, each with one or two volumes, as shown in Table 16. Each recording server will later be presented with two volumes as LUNs (one for recording and one for archive). Figure shows a visual representation of the sample configuration; each rectangle represents a single volume group, and the sections within each rectangle represent volumes. 102 Video Surveillance Solutions Using NetApp E-Series Storage
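As a rough capacity check for these layouts, usable space per volume group is the data-drive count times the per-drive capacity. The sketch below converts decimal drive TB to binary TiB and ignores controller metadata and formatting overhead, so the actual usable figures will be somewhat lower.

```python
# Rough sketch: usable capacity per volume group = data drives x drive
# capacity, with decimal TB converted to binary TiB. Parity/mirror drives
# contribute no usable space.
def usable_tib(drive_tb, data_drives):
    return data_drives * drive_tb * 1000**4 / 1024**4

raid1_bookmarks = usable_tib(3.0, 1)    # RAID 1 (1+1): 1 data drive   -> ~2.73 TiB
raid6_archive = usable_tib(3.0, 12)     # RAID 6 (12+2): 12 data drives -> ~32.7 TiB
raid10_recording = usable_tib(0.9, 5)   # RAID 10 (5+5): 5 data drives  -> ~4.1 TiB
```

Comparing these ceilings with the volume sizes chosen in Table 16 shows how much headroom each archive group leaves for its failover volume.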

Figure 46) Volume group and volume layout used for sample storage configuration.

Table 16 illustrates the details for the sample configuration. Use these details as a guide to create the volume groups and volumes.

Table 16) Details for sample E-Series storage configuration.

Volume Group      | Volume           | RAID Layout   | Volume Size | Drives Used | Preferred Controller
VG_BOOKMARKS      | VOL_BOOKMARKS    | RAID 1 (1+1)  | 2.794TB     | 3TB         | A
VG_ARCHIVE_1      | VOL_ARCHIVE_1    | RAID 6 (12+2) | 28TB        | 3TB         | B
                  | VOL_FAILOVER_    |               | TB          |             | B
VG_ARCHIVE_2      | VOL_ARCHIVE_2    | RAID 6 (12+2) | 28TB        | 3TB         | A
                  | VOL_FAILOVER_    |               | TB          |             | A
VG_ARCHIVE_3      | VOL_ARCHIVE_3    | RAID 6 (12+2) | 28TB        | 3TB         | B
                  | VOL_FAILOVER_    |               | TB          |             | B
VG_ARCHIVE_4      | VOL_ARCHIVE_4    | RAID 6 (12+2) | 28TB        | 3TB         | A
VG_ARCHIVE_4      | VOL_FAILOVER_4   | RAID 6 (12+2) | 4.856TB     |             | A
VG_ARCHIVE_5      | VOL_ARCHIVE_5    | RAID 6 (12+2) | TB          | 3TB         | B
VG_ARCHIVE_6      | VOL_ARCHIVE_6    | RAID 6 (12+2) | TB          | 3TB         | A
VG_ARCHIVE_7      | VOL_ARCHIVE_7    | RAID 6 (12+2) | TB          | 3TB         | B
VG_ARCHIVE_8      | VOL_ARCHIVE_8    | RAID 6 (12+2) | TB          | 3TB         | A
VG_ARCHIVE_9      | VOL_ARCHIVE_9    | RAID 6 (12+2) | TB          | 3TB         | B
VG_ARCHIVE_10     | VOL_ARCHIVE_10   | RAID 6 (12+2) | TB          | 3TB         | A
VG_RECORDING_1_2  | VOL_RECORDING_1  | RAID 10 (5+5) | 1TB         | 900GB       | B
                  | VOL_RECORDING_2  |               | 1TB         |             | A
VG_RECORDING_3_6  | VOL_RECORDING_3  | RAID 10 (5+5) | 1TB         | 900GB       | B
                  | VOL_RECORDING_4  |               | 1TB         |             | A
                  | VOL_RECORDING_5  |               | 1TB         |             | B
                  | VOL_RECORDING_6  |               | 1TB         |             | A
VG_RECORDING_7_10 | VOL_RECORDING_7  | RAID 10 (5+5) | 1TB         | 900GB       | B
                  | VOL_RECORDING_8  |               | 1TB         |             | A
                  | VOL_RECORDING_9  |               | 1TB         |             | B
                  | VOL_RECORDING_10 |               | 1TB         |             | A

When volume creation is completed, the controllers will initialize the volumes. Additional configuration steps can be done while the volumes initialize; there is no need to wait until initialization has completed. The initialization process can take several days (depending on the size and number of volumes created). Although configuration tasks can still be done on the array or the servers connected to the array, application performance testing should not be done until the initialization process has completed. Check the status of the volume initialization process by viewing Operations in Progress in SANtricity ES, as shown below.
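The layout in Table 16 can be sanity-checked against the 180-drive sample (150 NL-SAS, 30 SAS) by simple drive-count accounting. The spare count below is an inference from these totals, not a figure stated in the text; NL-SAS drives not consumed by a volume group remain available for the hot spares assigned later.

```python
# Sanity check: drives consumed by the volume groups in Table 16.
NLSAS_TOTAL, SAS_TOTAL = 150, 30

nlsas_used = 1 * (1 + 1) + 10 * (12 + 2)  # RAID 1 bookmarks + 10 RAID 6 archive groups
sas_used = 3 * (5 + 5)                    # 3 RAID 10 recording groups
nlsas_spares = NLSAS_TOTAL - nlsas_used   # NL-SAS drives left for hot spares
```

This kind of accounting catches a layout that silently over-commits a shelf before any volume group is actually created.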

4. Set the E-Series global parameters to the recommended settings specified in the section titled E-Series Performance Checklist.
5. Set the E-Series volume and volume group specific parameters to the recommended settings specified in the section titled E-Series Performance Checklist.
6. Create hot spares using the SANtricity ES AMW. Click the Hardware tab. Right-click a desired available drive and select Hot Spare Coverage. Select the option to manually assign individual drives.

14.7 Cisco UCS Servers and ESXi

This section outlines the procedure to implement a server environment for use by a typical VMS application. The sample configuration uses OnSSI Ocularis ES as the VMS; the physical or virtual machines required will vary depending on the VMS chosen and the design of the deployment. These server descriptions are for illustration purposes only and must be replaced with those required for the customer's deployment. The configuration steps illustrate a video surveillance solution using an E2660 storage array, two Cisco Nexus 3048 switches, four Cisco UCS C220-M3 servers, and VMware ESXi v5.1 as the hypervisor environment for virtual machines.

Configure Cisco Integrated Management Controller IP Addressing

The following steps can be used to configure the CIMC ports on Cisco UCS servers. For more information about configuring CIMC, reference the following link:

1. Connect a monitor, keyboard, and mouse directly to the console connections on the server, or use a KVM or terminal-server type connection to the physical server if available.
2. Power on the server (press the power button on the front of the server).
3. During the power-on self-test (POST), press F8 to view the CIMC setup screen when prompted.
4. Use the on-screen instructions to change values as needed. If a static IP address is to be used, unselect DHCP. Enter the IP address in the CIMC setup screen.
5. Change the NIC to dedicated mode by placing an X in the appropriate box in the CIMC setup screen.
6. Change the NIC redundancy to none by checking the None box.

106

7. Press the F10 key to save this configuration.
8. Press the ESC key to exit the CIMC utility. The machine will reboot.
9. After the reboot, a web browser may be used to connect to the newly configured IP address of the CIMC port on the server.

Configure Cisco UCS Server Power Policy

From the main CIMC logon screen, select the Power Policies option and configure the power restore policy and power delay type. The power restore policy should be set to Power On; the power delay type can be either a fixed delay in seconds or a random delay. In solution testing, a random delay was selected. This is shown in Figure 47.

106 Video Surveillance Solutions Using NetApp E-Series Storage

107

Figure 47) CIMC power policies.

Save the configuration changes by clicking the Save Changes button in the bottom-right corner of the screen.

Configure Cisco UCS Internal Drives Using LSI 8i Mezzanine Card

In the sample configuration, each Cisco UCS server has two internal hard drives, connected internally to an LSI 8i mezzanine SAS card. Configure the two drives into a RAID 1 virtual drive using the LSI WebBIOS utility, available during the boot sequence, by pressing Control-H at the appropriate time during the POST process.

1. Open a web browser on a workstation that can connect to the subnet on which the CIMC for the host server resides. In the browser's URL field, enter the IP address of the CIMC console for the host (physical) server. Ignore any web browser certificate errors that might appear.
2. Enter the user name and password to log in. The Cisco defaults are admin/password.
3. On the CIMC main screen, click the Admin tab (upper left of the screen).
4. In the list on the left side of the screen, click Network.
5. Under the Actions tab, click Launch KVM Console.
6. If necessary, power on or power cycle the server to allow it to start its preboot POST process.
7. During the POST sequence, watch for a line showing the two internal drives.
8. Immediately after seeing this, watch for a line with the following instruction:
Press <CTRL><H> for WebBIOS or press <Ctrl><Y> for Preboot CLI
9. Quickly press Control-H. A line states that the web GUI will start after POST is complete. Wait for POST to complete.
10. Use the LSI utility to create a single virtual drive as a RAID 1 device utilizing both internal drives.
11. Initialize (format) the drive (any data on the drives is lost).

The desired configuration is as follows.

107 Video Surveillance Solutions Using NetApp E-Series Storage

108

Prepare to Install ESXi to Cisco UCS Internal Drives

The sample configuration includes VMware ESXi version 5.1 as the hypervisor installed on each Cisco server. Using a valid ESXi version 5.1 .ISO installation file, install ESXi on the internal drives of each server. The process involves mapping a local folder to the server using CIMC device mapping. A reboot is required.

1. Download the appropriate VMware installer .iso file to your workstation/laptop. Be sure that you use the correct version (for example, VMware VMvisor Installer-5.1).
2. Log in to the CIMC on the host server using a web browser.
3. Under the Server tab, click Summary.
4. Under Actions, click Launch KVM Console.
5. In the KVM Console window, click the Virtual Media tab.
6. Click the Add Image button.
7. Navigate to the VMware installer image file (.ISO file) that is located on your workstation/laptop. Select the specific .iso file to be used for this installation.
8. Click Open.
9. Under Client View, click the Mapped checkbox so that a checkmark appears in this box. It should appear similar to the following screenshot (the directory and file name will not be exactly as shown).

108 Video Surveillance Solutions Using NetApp E-Series Storage

109

10. Click the KVM tab; leave the KVM open.
11. In the CIMC web user interface, under Actions, click Power Cycle Server and click OK. Observe the server power cycle on the KVM screen to make sure that the server reboots. After the server is rebooted, the ESXi installation screen is displayed.

Complete ESXi Installation

After the ESXi installation has begun, follow the on-screen instructions to install VMware ESXi version 5.1. Select to install it on the server's local drives (the RAID 1 virtual drive that was created previously).

1. When the Installation Complete dialog box appears, do NOT press Enter at this time. The .ISO file must be unmapped so that a normal reboot to start ESXi can occur.
2. Click the Virtual Media tab in the KVM console.
3. Uncheck the Mapped box next to the .iso file that was used to install ESXi.
4. Click Yes to confirm the Unmap request.
5. Click Remove Image to remove the .iso image file from the Client View list.
6. Return to the KVM tab.
7. On the Installation Complete dialog box, press Enter to reboot.
8. The server will reboot and load/run ESXi for the first time. The final appearance of the screen or KVM window is shown as follows.

109 Video Surveillance Solutions Using NetApp E-Series Storage

110 Configure ESXi Management Network IP On each Cisco UCS server that is running VMware ESXi, set the IP address for the management port. The settings screen offers an option to test the management network to make sure that the IP information entered is valid. 1. Log in to the CIMC for the server and open the KVM Console window. 2. On the gray/yellow ESXi screen, press F2 to customize the system. 3. Log in with user name root and the appropriate password (these were set during the installation of ESXi). 4. Select Configure Management Network and press Enter. 5. Select IP Configuration and press Enter. 6. In the IP Configuration dialog box, select Static IP Address and press the spacebar to mark this selection. 7. Enter the desired IP address, subnet mask, and default gateway. 110 Video Surveillance Solutions Using NetApp E-Series Storage

111

8. Press Enter to accept these changes.
9. Press Escape to go back to the configuration.
10. In the Configure Management Network Confirm dialog box, press Y to confirm the network configuration changes.
11. Select Test Management Network on the left and press Enter.
12. The Test Management Network dialog box appears; press Enter to begin the test.
13. Confirm the results of the test and press Enter when done.
14. Press ESC to exit the configuration screen.
15. Verify on the gray/yellow screen that the new IP settings are shown correctly.

Configure NTP for ESXi

Accurate and consistent time across all systems is very important in a video surveillance installation. Follow these steps to configure NTP for each ESXi host server.

1. Using the vSphere client application (downloadable from the ESXi server just installed; use a web browser and open an HTTP session to the ESXi host IP address), connect to the Cisco UCS server using the IP address configured in the previous procedure (Configure ESXi Management Network IP). Enter the user name and password to log in to the server.
2. In the vSphere client, select the physical server in the upper left section of the window.
3. Select the Configuration tab.
4. Under the Software tab, click Time Configuration.
5. Click Properties.
6. Check the NTP Client Enabled box.

111 Video Surveillance Solutions Using NetApp E-Series Storage

112

7. Click Options.
8. Select NTP Settings.
9. Click Add.
10. Enter the IP addresses of one or more NTP servers. Each should be an NTP time server that is network reachable from the ESXi host. Click OK. A sample screenshot is shown as follows:
11. Check Restart NTP service to apply changes; click OK.
12. Go to the General tab. Make sure that the Start and Stop with Host options are checked.
13. In the Time Configuration window, click OK. If the time change is large, it might take a while (minutes to hours) for the change to take place.

Note: There is no time zone configured on the ESXi host. Time is displayed in Universal Time (UT), and the offset of the vSphere client is used to adjust for the local time zone.

112 Video Surveillance Solutions Using NetApp E-Series Storage

113 Add VMware License Before beginning this step, have a valid VMware ESXi license key available for each Cisco UCS server. Configure the VMware license for each server using the vsphere client application. 1. Click the physical machine icon (upper left). 2. Click the Configuration tab. 3. Under the Software tab, click Licensed Features. 4. At the upper right of the vsphere application screen, click Edit. 5. Click the Assign a new license key to the host button. 6. Click the Enter Key tab. 7. Enter the new key exactly as it appears in the VMware license information (copy and paste or enter it directly). 8. Click OK to accept the newly entered key. 9. On the Assign License window, click OK. Configure Video Ingress Virtual Switch Use the vsphere client application to configure a vswitch with NIC teaming for video ingest for each server. The vswitch will contain all ports in a PortChannel for the 4-port NIC on which video data is sent from cameras to the server. The NIC teaming load-balancing is configured as route based on IP hash. 1. Click the physical machine in the left pane of the vsphere client application. 2. Click the Configuration tab. 3. Click Networking. 4. Click Add Networking. 113 Video Surveillance Solutions Using NetApp E-Series Storage

114 5. Select the Virtual Machine button and click Next. 6. Click Create a vsphere standard switch; the vswitch will be created for the Broadcom 4-port NIC installed in the host server. 7. Select all 4 ports that belong to this PortChannel. Click Next. 8. Type a network label, such as VLAN_2020, the name of the VLAN for the camera network. 9. Leave the VLAN ID at its default setting. 10. Click Next. 11. Click Finish on the Ready to Complete summary screen. The vswitch should appear similar to the following screenshot. 12. Select Properties next to the vswitch that was just added. 13. Select the vswitch and click Edit. 14. Select the NIC Teaming tab and click the checkbox for Load Balancing. 15. In the drop-down next to Load Balancing, select Route based on IP Hash. 16. The results should appear similar to the following screenshot. Click OK at the bottom of the dialog box. 17. Click Close in the vswitch Properties dialog window. 114 Video Surveillance Solutions Using NetApp E-Series Storage
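With route based on IP hash, each camera-to-server conversation is pinned to one uplink while the camera population as a whole spreads across the 4-port team. The sketch below illustrates the commonly described approximation of this hash (XOR of the last octets of source and destination IP, modulo the number of uplinks); both the algorithm detail and the addresses are illustrative assumptions, not taken from this report.

```python
def ip_hash_uplink(src_ip: str, dst_ip: str, n_uplinks: int) -> int:
    """Approximate 'route based on IP hash' uplink selection:
    XOR the last octets of source and destination, modulo team size."""
    src_last = int(src_ip.rsplit(".", 1)[1])
    dst_last = int(dst_ip.rsplit(".", 1)[1])
    return (src_last ^ dst_last) % n_uplinks

# A given camera/server pair always hashes to the same team member,
# while different cameras can land on different uplinks.
server = "192.0.2.5"  # hypothetical recording-server address
for camera in ("192.0.2.11", "192.0.2.12", "192.0.2.13", "192.0.2.14"):
    print(camera, "-> uplink", ip_hash_uplink(camera, server, 4))
```

One consequence worth noting: because the hash is per-flow rather than per-packet, a single camera stream never exceeds the bandwidth of one physical NIC in the team.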

115

Configure Server Management Virtual Switch

Create a vswitch for the server management network on each ESXi host server.

1. Click the physical machine icon in the left pane in the vSphere client application.
2. Click the Configuration tab.
3. Click Networking.
4. Click Add Networking.
5. Select Virtual Machine. Click Next.
6. Click the Create a vSphere standard switch button. This vswitch is for the management network. See the following screenshot.
7. Select the vmnic1 port (the only one not in a vswitch). Click Next.
8. Type a network label (such as SERVER_MGMT).
9. Leave the VLAN ID at the default setting.
10. Click Next.
11. On the Ready to Complete summary screen, verify and select Finish.

115 Video Surveillance Solutions Using NetApp E-Series Storage

116

Configure ESXi Host Name

Optionally, use the vSphere client application to enter an ESXi host name that matches the server naming system in use. In the following example, the ESXi host servers in the sample deployment are named SVR-1, SVR-2, and so on.

1. In the vSphere client application, click the physical machine icon.
2. Click the Configuration tab.
3. Under the Software tab, click DNS and Routing.
4. In the upper right pane, click Properties.
5. In the DNS and Routing Configuration dialog box, under Host Identification, change the name as desired. Change the domain name and IP address as necessary for the actual deployment. For example:
6. Change other parameters as needed for your network/DNS environment; click OK.
7. A warning might appear about IPv6; click Yes to continue.
8. Verify that the new host identification name is shown as desired. See the following sample configuration.

116 Video Surveillance Solutions Using NetApp E-Series Storage

117

Enable SSH for ESXi

Some management functions for VMware ESXi hosts are easily done through ESXi shell access. To enable this, enable SSH on each server.

1. In the vSphere client, click the physical machine icon.
2. Click the Configuration tab.
3. In the Software box, click Security Profile.
4. In the Services section on the right, select Properties.
5. In the Services Properties dialog box, select SSH and click Options.
6. Under Startup Policy, select the Start and stop with host button.
7. Click Start.
8. Because the startup policy setting reverts, click the Start and stop with host button again.
9. On the Services Properties screen, SSH should show its daemon as Running.
10. To save the setting change, click OK.

Create Datastore for ESXi Host

A VMware datastore is a virtual designation for a storage location. Create a datastore using the internal drive previously configured as a RAID 1 virtual disk. This datastore will be used as the system/boot drive for the guest operating system of each virtual machine, as well as for other file storage needs when configuring the VMware environment.

1. In the vSphere client, click the physical machine's icon.
2. Click the Configuration tab.
3. Under the Hardware tab, click Storage.
4. In the upper right pane, click Add Storage.
5. In the Add Storage dialog box, select Disk/LUN and click Next.
6. In the next window, select the name of the local (internal) disk; its name will likely be Local LSI Disk. Click Next.

117 Video Surveillance Solutions Using NetApp E-Series Storage

118

7. Under File System Version, select VMFS-5 and click Next.
8. Enter a name for the datastore, such as datastore1. Click Next.
9. Under Capacity, click the Maximum available space button. Click Next.
10. On the Ready to Complete page, review the information shown and, if correct, click Finish. The datastore should appear similar to this screenshot.

Create and Configure Virtual Machines

The basic configuration of each server running VMware ESXi 5.1 should now be complete. The next procedure is to create and configure virtual machines on the physical ESXi hosts. The following steps describe a sample configuration method that can be employed. The goal is to create the required number of virtual machines on each host server. Each virtual machine will be configured with the appropriate characteristics, based on the requirements of the VMS and limited by the total physical resources of the host server.

The following configuration steps are for a sample installation previously described in this document. Each physical server contains four virtual machines, each of which will run the Windows Server 2008 R2 operating system. Table 17 lists the server and virtual machine naming scheme used for this sample implementation, which utilizes Ocularis ES as the VMS.

Table 17) Server and VM naming.

Function | Physical Server | Virtual Machine Name
Ocularis base | SVR-1 | SVR-10
Ocularis manager | SVR-1 | SVR-11
Failover 1 | SVR-1 | SVR-12
Failover 2 | SVR-1 | SVR-13
Recording 1 | SVR-2 | SVR-20
Recording 2 | SVR-2 | SVR-21

118 Video Surveillance Solutions Using NetApp E-Series Storage

119

Function | Physical Server | Virtual Machine Name
Recording 3 | SVR-2 | SVR-22
Failover 3 | SVR-2 | SVR-23
Recording 4 | SVR-3 | SVR-30
Recording 5 | SVR-3 | SVR-31
Recording 6 | SVR-3 | SVR-32
Failover 4 | SVR-3 | SVR-33
Recording 7 | SVR-4 | SVR-40
Recording 8 | SVR-4 | SVR-41
Recording 9 | SVR-4 | SVR-42
Recording 10 | SVR-4 | SVR-43

1. Log in to the host machine using the vSphere client.
2. Right-click the physical machine on the left and select New Virtual Machine.
3. In the Create New Virtual Machine dialog window, select Typical and click Next.
4. Enter a name for the new virtual machine, for example, SVR-10. Click Next.
5. Select the datastore to be used. Click Next.
6. In the next dialog box, select Windows as the guest operating system.
7. Under the Version tab, select Microsoft Windows Server 2008 R2 (64-bit). Click Next.
8. In the next window, select 2 in the How many NICs drop-down box.
9. For NIC 1, select the teaming NIC created earlier (Interface to Video Camera Network, VLAN_2020).
10. For NIC 2, select the SERVER_MGMT network. Click Next.
11. Under Create a Disk > Virtual Disk Size, select the size of the OS disk to be created for this virtual machine, for example, 50GB. This allocates the specified amount of space to be used as the virtual machine's system (boot) disk. Leave all other items at their default values and click Next.
12. Verify the settings shown. If all appear correct, click Finish. The following screenshot shows a sample virtual machine just before Finish is clicked.

119 Video Surveillance Solutions Using NetApp E-Series Storage

120

13. On the left pane, right-click the virtual machine just created.
14. Select Edit Settings.
15. Under the Hardware tab, select Memory.
16. Enter the desired memory size in the right pane, for example, 8GB.
17. Under the Hardware tab, select CPUs.
18. Enter the desired number of cores (number of cores per socket) in the right pane.
19. Click OK. The settings should appear similar to the ones in the following screenshot.
20. Repeat as many times as necessary to create additional virtual machines on each host machine.

Upload ISO Image to Datastore

The guest operating system to be installed onto the virtual machines in this example is Windows Server 2008 R2 64-bit. One method for installing the OS is to have the .iso (image) file for the installer available on the datastore of the ESXi host. When the appropriate .iso file has been downloaded to a workstation or laptop, use the vSphere client application as follows to load the .iso file onto the datastore created earlier.

120 Video Surveillance Solutions Using NetApp E-Series Storage

121

1. Log in to the host using the vSphere client; click the host server in the left pane.
2. Click the Configuration tab.
3. Under the Hardware tab, click Storage.
4. Right-click Datastore.
5. Select Browse Datastore.
6. In the next dialog box, click the Create a new folder icon. Name and create a folder. For example, name the folder ISO_IMAGES and click OK.
7. Double-click the new folder on the left (the folder just created) so that it becomes the current folder.
8. Click the Upload files to this datastore icon (image of a disk with a green up arrow).
9. Select Upload file.
10. Navigate to the location of the .iso file on your laptop or workstation.
11. Select the files to upload. Click Open.
12. If an Upload/Download Operation Warning dialog box appears, click Yes.
13. Observe as the file copies to the folder on the datastore. It will appear similar to this screenshot:

Install Guest OS to Virtual Machines

The next step is to install the Windows Server 2008 R2 operating system onto the virtual machine. One method is to configure the virtual machine so that it boots from the .iso image (OS installer file) placed onto the datastore in the previous procedure. After the VM is configured to boot from the .iso image, the Windows Server 2008 R2 installation process will begin when the virtual machine is powered on.

1. In the vSphere client, click the + sign next to the host server name to see the list of virtual machines running on the host.
2. Right-click the virtual machine and select Edit Settings.
3. In the Virtual Machine Properties window, under Hardware, select CD/DVD drive.
4. In the same window, under Device Type, select Datastore ISO File.
5. Check the Connect at Power On box under Device Status.
6. Click Browse and navigate to the Windows installer .iso file to be used. Select the file to be used. Click OK. The window should appear similar to this screenshot:

121 Video Surveillance Solutions Using NetApp E-Series Storage

122

7. In the Virtual Machine Properties window, click OK.
8. In the vSphere client, make sure the virtual machine is selected in the left pane.
9. Click the Summary tab.
10. Under the Commands tab, click Power On.
11. Click the Console tab to view the boot-up process.
12. Follow the on-screen instructions to perform the Microsoft Windows Server 2008 R2 installation.
13. Repeat this procedure for all virtual machines on the ESXi host machine.

Enable Remote Access to Windows 2008 R2 Virtual Machines

Using the Windows Remote Desktop Protocol (RDP) utility Remote Desktop Connection makes it easier to connect to the virtual machine for configuration and management tasks. The following procedure contains steps to enable RDP access for machines running the Windows Server 2008 R2 operating system.

1. Log in to the vSphere client on the host machine, select the virtual machine to be managed, and then click the Console tab.
2. If necessary, press Control-Alt-Insert and log in to Windows.
3. In Windows, click Start; right-click Computer, and select Properties. The System window appears.
4. Click Remote Settings. The System Properties dialog box appears with the Remote tab selected.
5. Select Allow connections from computers running any version of Remote Desktop (less secure), so that this server can be managed using RDP.

122 Video Surveillance Solutions Using NetApp E-Series Storage

123

6. A dialog box appears stating that an exception will be added to the firewall settings. Click OK.
7. Adjust Windows firewall settings if necessary for the environment into which the system is to be installed.
8. Click OK in the main System Settings window.
9. Close the System window.

Show ESXi Network Names and MAC Addresses

For documentation and troubleshooting purposes, determine the Ethernet MAC address to NIC name relationship. Log in to the ESXi shell and issue the esxcli network nic list command as shown:

~ # esxcli network nic list
Name    PCI Device     Driver  Link  Speed  Duplex  MAC Address       Description
vmnic0  0000:001:00.0  igb     Up    1000   Full    a4:4c:11:2a:00:f  Intel I350
vmnic1  0000:001:00.1  igb     Up    1000   Full    a4:4c:11:2a:00:f  Intel I350
vmnic2  0000:084:00.0  bnx2    Up    1000   Full    00:10:18:eb:b0:   Broadcom BCM5709
vmnic3  0000:084:00.1  bnx2    Up    1000   Full    00:10:18:eb:b0:   Broadcom BCM5709
vmnic4  0000:085:00.0  bnx2    Up    1000   Full    00:10:18:eb:b0:   Broadcom BCM5709
vmnic5  0000:085:00.1  bnx2    Up    1000   Full    00:10:18:eb:b0:   Broadcom BCM5709

Note: When a single CAT 5 cable is removed and the command is reissued, the Link column can be observed to show a down state. This is useful for troubleshooting cabling issues.

Configure Cisco Discovery Protocol (CDP) in ESXi

NetApp recommends configuring the ESXi host to advertise CDP packets to the switches to assist with troubleshooting any cable issues. Refer to Configuring the Cisco Discovery Protocol (CDP) with ESX/ESXi for more information.

1. Log in to each ESXi server and list the virtual switches defined on the host with the esxcfg-vswitch command.

# esxcfg-vswitch --list
Switch Name  Num Ports  Used Ports  Configured Ports  MTU  Uplinks
vswitch0                                                   vmnic0
PortGroup Name  VLAN ID  Used Ports  Uplinks
VM Network      0        0           vmnic0

123 Video Surveillance Solutions Using NetApp E-Series Storage

124

Management Network  0  1  vmnic0

Switch Name  Num Ports  Used Ports  Configured Ports  MTU  Uplinks
vswitch1                                                   vmnic2,vmnic3,vmnic4,vmnic5
PortGroup Name  VLAN ID  Used Ports  Uplinks
VLAN_2020                            vmnic2,vmnic3,vmnic4,vmnic5

Switch Name  Num Ports  Used Ports  Configured Ports  MTU  Uplinks
vswitch2                                                   vmnic1
PortGroup Name  VLAN ID  Used Ports  Uplinks
SERVER_MGMT     0        4           vmnic1

2. Use the esxcfg-vswitch command to enable advertisement of CDP on all virtual switches configured on the ESXi host.

~ # esxcfg-vswitch -B both vswitch0
~ # esxcfg-vswitch -B both vswitch1
~ # esxcfg-vswitch -B both vswitch2

3. Verify the configuration with the esxcfg-vswitch command.

~ # esxcfg-vswitch -b vswitch2
both
~ # esxcfg-vswitch -b vswitch1
both
~ # esxcfg-vswitch -b vswitch0
Both

4. Following this change, log in to the Cisco Nexus 3048 switches and verify the ESXi host device IP and port connected to individual interfaces. For example, the CDP neighbor for Ethernet 1/1 is shown.

VSS# show cdp neighbors interface e1/1
Capability Codes: R - Router, T - Trans-Bridge, B - Source-Route-Bridge
                  S - Switch, H - Host, I - IGMP, r - Repeater,
                  V - VoIP-Phone, D - Remotely-Managed-Device,
                  s - Supports-STP-Dispute
Device-ID                  Local Intrfce  Hldtme  Capability  Platform    Port ID
RACK-SVR-1.stl.netapp.com  Eth1/1         166     S           VMware ESX  vmnic5

Configure Virtual Machine Management and Video Ingress Network Adapters

Each virtual machine defined on the physical machine has two Ethernet adapters: Local Area Connection and Local Area Connection 2. When the VMs are configured and powered on, use the vSphere client and log in to the console of each VM. Open a command prompt and issue the ipconfig command with the specified parameters shown as follows.
C:\Users\Administrator>ipconfig /all | findstr "Physical Ethernet"
Ethernet adapter Local Area Connection 2:
Physical Address : 00-0C-29-A2-62-D9
Ethernet adapter Local Area Connection:
Physical Address : 00-0C-29-A2-62-CF
Physical Address : E0
Physical Address : E0
Physical Address : E0
Physical Address : E0

2. Using the output from the ipconfig command, Local Area Connection 2 has an Ethernet MAC address of 00-0C-29-A2-62-D9, and Local Area Connection has an Ethernet MAC address of 00-0C-29-A2-62-CF. Ignore the physical address lines that are not preceded by an Ethernet adapter line.

124 Video Surveillance Solutions Using NetApp E-Series Storage
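The adapter-to-MAC pairing read manually from the output above can also be extracted programmatically when many virtual machines must be documented. This Python sketch (an illustration, not part of the report's procedure) parses output shaped like the listing above, skipping physical-address lines that have no preceding Ethernet adapter line:

```python
import re

def adapter_macs(ipconfig_output: str) -> dict:
    """Map Windows adapter names to MAC addresses from the filtered
    output of: ipconfig /all | findstr "Physical Ethernet"."""
    macs = {}
    adapter = None
    for line in ipconfig_output.splitlines():
        heading = re.match(r"Ethernet adapter (.+):", line.strip())
        if heading:
            adapter = heading.group(1)
            continue
        mac = re.search(r"Physical Address[ .]*:\s*([0-9A-F]{2}(?:-[0-9A-F]{2}){5})", line)
        if mac and adapter:
            macs[adapter] = mac.group(1)
            adapter = None  # ignore later lines until the next adapter heading
    return macs

sample = """\
Ethernet adapter Local Area Connection 2:
Physical Address : 00-0C-29-A2-62-D9
Ethernet adapter Local Area Connection:
Physical Address : 00-0C-29-A2-62-CF
"""
print(adapter_macs(sample))
```

The same mapping can then be recorded in the per-VM text file suggested later in this procedure.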

125

Note and retain the MAC address and adapter name relationship for the next step.

3. In the vSphere client, select the virtual machine in the left pane, highlight it, and right-click Edit Settings. Next, click one of the network adapters to highlight the devices.

From the preceding screenshot, note that the Ethernet MAC address associated with Network adapter 1 (VLAN_2020) is 00:0c:29:a2:62:cf. Given the MAC address values shown in the last two steps, it can be determined that the Windows Ethernet adapter named Local Area Connection is associated with the video ingress interface named VLAN_2020.

The purpose of this procedure is to verify which Windows Ethernet adapter is associated with the network adapter defined to the virtual machine using the vSphere client. This eliminates the common error of assigning the wrong IP address to the wrong adapter. This relationship can be documented for each virtual machine (in a text file) as follows:

Virtual Machine  Ethernet adapter          MAC Address        vswitch      IP Address
===============  ========================  =================  ===========  ==========
RACK-SVR-43      Local Area Connection 2:  00-0C-29-A2-62-D9  SERVER_MGMT
                 Local Area Connection :   00-0C-29-A2-62-CF  VLAN_2020

4. From the command window on the virtual machine, assign the appropriate IP addresses to the interfaces. The following example assigns an IP address of /24.

netsh interface ip add address name="Local Area Connection" addr= mask=

If the interface requires a DNS server, it can also be specified with the netsh command, as shown here:

netsh interface ip add dns name="Local Area Connection" addr=

If the IP addresses must be changed, the existing address may be deleted and a new IP address added. To delete an IP address on the interface Local Area Connection, use the command format shown here:

netsh interface ip delete address name="Local Area Connection" addr=

125 Video Surveillance Solutions Using NetApp E-Series Storage
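The netsh commands above take dotted decimal masks, while the camera network in this section is described by prefix length. When converting between the two forms, Python's standard ipaddress module can serve as a cross-check (a general illustration; /24 and /15 are the prefix lengths that appear in this section):

```python
import ipaddress

def prefix_to_mask(prefix_len: int) -> str:
    """Dotted decimal netmask for an IPv4 prefix length."""
    return str(ipaddress.ip_network(f"0.0.0.0/{prefix_len}").netmask)

print(prefix_to_mask(24))  # mask form of the /24 used in the netsh example
print(prefix_to_mask(15))  # mask form of the /15 camera network
```

The reverse conversion is available as `ipaddress.ip_network("0.0.0.0/255.254.0.0").prefixlen`.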

126

The netsh command reference is available from Microsoft. Issue the appropriate netsh commands for each Windows Ethernet adapter to assign IP addresses for the SERVER_MGMT network and the video ingress network (VLAN_2020).

Configure Static IP Routes on Virtual Machines

Because the video recording servers are dual-homed machines, the appropriate static routes must be configured on the machines to determine which interface is to be used to reach the destination networks. Assume the deployment requires the virtual machines to reach the Internet (a default route is required) for service updates, and that the network video cameras are installed on a /15 network. A prefix length of /15 corresponds to the dotted decimal mask 255.254.0.0. The gateway for the video ingress network interface and the default gateway are specified in the respective route add commands that follow. The -p option is used to identify the route as persistent, meaning the route is stored in the registry and preserved between reboots. The following commands implement these respective routes.

route add mask metric 5 -p
route add mask metric 5 -p

To verify the configured IPv4 routes, use the route print command to review the section labeled Persistent Routes.
C:\Users\Administrator>route print -4
===========================================================================
Interface List
...00 0c 29 a2 62 d9 ......Intel(R) PRO/1000 MT Network Connection #2
...00 0c 29 a2 62 cf ......Intel(R) PRO/1000 MT Network Connection
1 .........................Software Loopback Interface 1
...e0 Microsoft ISATAP Adapter
...e0 Microsoft 6to4 Adapter
...e0 Microsoft ISATAP Adapter #2
...e0 Teredo Tunneling Pseudo-Interface
===========================================================================

IPv4 Route Table
===========================================================================
Active Routes:
Network Destination        Netmask          Gateway       Interface  Metric
...                        ...              On-link       ...        ...
===========================================================================
Persistent Routes:
Network Address            Netmask          Gateway Address  Metric
...
===========================================================================

Note: The metric value specified on the command line is used to prioritize routes with the same network and mask; a lower value has a higher preference.

To verify the routing, both tracert and ping may be used. In this example, ping is used to verify connectivity to a switch virtual interface (SVI) supporting the network video cameras, and tracert is used to verify the path to a camera.

126 Video Surveillance Solutions Using NetApp E-Series Storage

127

C:\Users\Administrator>ping
Pinging with 32 bytes of data:
Reply from : bytes=32 time<1ms TTL=252
Reply from : bytes=32 time=6ms TTL=252
Reply from : bytes=32 time=1ms TTL=252
Reply from : bytes=32 time<1ms TTL=252

Ping statistics for :
    Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
    Minimum = 0ms, Maximum = 6ms, Average = 1ms

C:\Users\Administrator>tracert -d
Tracing route to over a maximum of 30 hops
  1  <1 ms  <1 ms  <1 ms
  2  <1 ms  <1 ms  <1 ms
  3   ms    <1 ms  <1 ms
  4   ms    <1 ms  <1 ms
  5  <1 ms  <1 ms  <1 ms
Trace complete.

C:\Users\Administrator>

Configure NTP and Host Name of Virtual Machines

Several additional steps are recommended to complete the basic Windows configuration. Setting computer names and descriptions to meaningful names is useful for further application configuration and troubleshooting procedures. For the sample deployment scenario used in these instructions, see Sample IP Address Allocation for VIDEO_INGRESS Network for sample virtual machine names.

1. From a laptop or workstation used to manage the environment, use Remote Desktop Connection to connect to the virtual machine by its management IP address configured earlier. The laptop or workstation must be able to connect to the subnet on which the virtual machines are located.
2. Log in to the administrator account in Windows using the password configured during the Windows Server 2008 R2 installation.
3. Click Start, right-click Computer, and select Properties.
4. Under Computer Name, Domain, and Workgroup Settings, click Change.
5. In the Computer description field, enter an appropriate server name or description text.
6. Click the Change button.
7. Enter an appropriate server name (use the same name as used in the description field); click OK.
8. If the server is to join a domain, select that domain in the window.
9. Click OK in the Computer Name/Domain Changes dialog box.
The computer name information will look similar to this: 127 Video Surveillance Solutions Using NetApp E-Series Storage

128

10. A warning will appear stating that a reboot is required. Click OK.
11. Close any other programs that might be running.
12. If the main System Properties window is still visible, click Apply and Close.
13. A small window appears; click Restart Now. The VM will reboot.

Configure Windows Time Service (w32tm)

Use the following steps to configure the Windows Time Service.

1. Connect to each virtual machine using Remote Desktop Protocol (RDP) and log in to Windows.
2. In Windows, click the time in the lower right portion of the task bar.
3. In the Date and Time window, set the time zone as required.
4. Open a Windows command line prompt (Start > Command Prompt).
5. Enter the command w32tm /tz and verify that the time zone shown is correct.
6. Enter a command, using the IP addresses of NTP servers appropriate for the network environment in place, as follows:

w32tm /config /manualpeerlist:

128 Video Surveillance Solutions Using NetApp E-Series Storage

7. Start the Server Manager program by clicking the icon in the task bar.
8. In the left pane of Server Manager, open Configuration and then click Services.
9. Scroll down until the service Windows Time is visible.
10. Right-click the Windows Time service and select Properties.
11. In the Windows Time Properties dialog box, change Startup type to Automatic (Delayed Start).
12. Also in the Windows Time Properties dialog box, if the Service Status shows Stopped, click Start. Allow the service to start.
13. Observe that the server's time is correct. If it is not, stop and then restart the Windows Time service.
14. Click OK in the Windows Time Properties dialog box.
15. Click OK to close all date-and-time dialog boxes.
16. Repeat as necessary for all virtual machines.

LSI 9200-8e Serial Attached SCSI (SAS) HBA Update Procedure

This procedure is used to update the firmware, BIOS, and drivers on the LSI 9200-8e HBA.

Introduction

The LSI 9200-8e is a 6-Gb/s SAS host bus adapter (HBA) used to provide host connectivity for the Cisco UCS C220-M3 servers to the SAS host interfaces of the E-Series storage array. The driver, firmware, and BIOS upgrade procedure using VMware ESXi is illustrated in this section. More information and detailed instructions (including instructions for other operating systems) can be found at LSI SAS Host Adapter.

During solution testing, two issues were encountered that are addressed by this update:

Servers running ESXi 5.1 might crash to a purple screen of death (PSOD) page fault (PF14).
The SAS HBA might present only one SAS address to attached storage arrays. Without the update applied, the host properties screen in SANtricity ES might report only one SAS address per dual-port host.
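Before applying the update, it can be useful to record the currently installed mpt2sas driver version so that it can be compared after the upgrade. The sketch below parses captured esxcli software vib list output; the version strings and output shown are hypothetical placeholders for illustration, not values from this solution.

```shell
# Hypothetical captured output of "esxcli software vib list" (placeholder
# version strings); on a live ESXi host, capture the real command output.
vib_list='Name              Version                          Vendor
----------------  -------------------------------  ------
scsi-mpt2sas      10.00.00.00-5vmw.510.0.0.613838  VMware
net-e1000e        1.1.2-3vmw.510.0.0.613838        VMware'

# Print the installed mpt2sas driver version for comparison with the
# version bundled in the LSI update package.
printf '%s\n' "$vib_list" | awk '/mpt2sas/ {print "installed mpt2sas driver: " $2}'
```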

The following steps illustrate how to apply the updates to address these two known issues.

Installation of the LSI 9200-8e Driver, Firmware, and BIOS Updates

To download the installer, driver, firmware, and BIOS updates, refer to the LSI SAS Host Adapter webpage. It might be necessary to search the LSI website for the latest files and instructions, because these often change. Select as follows:

1. Component type: Storage Product
2. Family: Host Bus Adapters
3. Product: LSI SAS 9200-8e
4. Asset type: All

The following files are used in this procedure:

Note: The installer is listed under the Firmware tab. The installer download contains the sas2flash command line utility program. For VMware ESXi, this has a .vib file extension. The file used in this procedure is Installer_P15_for_VMware_ESX50, dated November 6, 2012, with a description of VMware ESX 5.0 Installer.

The driver for VMware ESXi has a .vib file extension. The Driver tab on the LSI webpage provides a link to a VMware webpage to download the driver. Downloading the driver requires a user name and password for the VMware site. The product name is VMware ESXi 5.x Driver CD for mpt2sas controllers, version vmw, with a release date of December 17. In the ZIP file, there is an embedded ZIP file whose file name includes the text offline_bundle. Use the .vib from this ZIP file.

The firmware download ZIP file contains the firmware and BIOS. The firmware file has a .bin file extension, and the BIOS has a .rom file extension. The version applied is dated November 7, 2012, with a description of Package_P15_Firmware_BIOS_for_MSDOS_Windows. Although the file name contains the description Firmware_BIOS_for_MSDOS_Windows, it is the correct package for ESXi.

Upload Files to Datastore

Upload these files to the datastore on each physical server using the vSphere client. For example, create a folder on the datastore called other and upload the files to this folder.
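As an alternative to uploading through the vSphere client datastore browser, the files can be copied with scp when SSH is enabled on the ESXi hosts. The host names below are placeholders for illustration; substitute actual host names, datastore path, and file names for your environment.

```shell
# Placeholder values: adjust the host list, datastore path, and file
# names to match the actual environment.
hosts="esxi-host-1 esxi-host-2"
dest="/vmfs/volumes/datastore1/other"
files="vmware-esx-sas2flash.vib mpt2sas-driver.vib 9200-8e.bin mptsas2.rom"

# Print one scp command per host/file pair; review the list, then run
# the commands (SSH must be enabled on each ESXi host).
for h in $hosts; do
  for f in $files; do
    echo "scp $f root@$h:$dest/"
  done
done
```

Printing the commands first allows the list to be reviewed before anything is copied.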
This screenshot shows an example of these four files on the ESXi host:

Install the sas2flash Utility Program

After the files are uploaded, use SSH to log in to the ESXi host and run the command below. Substitute the datastore name for <datastore_name> in the command.

esxcli software vib install -f -v /vmfs/volumes/<datastore_name>/other/vmware-esx-sas2flash.vib

This command installs the sas2flash utility, as shown here:

~ # ls -l /opt/lsi/bin
-r-xr-xr-x    1 root     root          Nov  5 20:53 sas2flash
~ #

Install the Driver

If a driver update exists, install it as follows. At the ESXi command prompt, issue the following command, replacing the file name shown in this example with the file name of the specific driver file downloaded previously.

esxcli software vib install -f -v /vmfs/volumes/datastore1/other/filename.vib

Use the sas2flash Utility Program to Update the Firmware and BIOS

Follow the instructions from LSI SAS Host Adapter and the LSI website to flash the HBA with new firmware and BIOS code. The procedure involves erasing all code from the LSI 9200-8e HBA card, flashing the firmware and BIOS, and finally resetting the SAS address of the card. A brief summary of the steps follows; details can be found at LSI SAS Host Adapter and the LSI website.

Note: When the SAS address of the HBA is displayed using the list option, save the SAS address for later use. Copy and paste the SAS address into a text document.

Issue the commands in this order, substituting actual file names where appropriate:

cd /opt/lsi/bin
./sas2flash -list
./sas2flash -o -e 7
./sas2flash -o -f <path_and_name_of_firmware_file>
./sas2flash -o -b <path_and_name_of_bios_file>
./sas2flash -o -sasadd <SAS_Address>

A sample output of executing the steps on a server running VMware ESXi 5.1 is shown as follows. Some output has been removed for brevity.
/opt/lsi/bin # ./sas2flash -list
LSI Corporation SAS2 Flash Utility

Adapter Selected is a LSI SAS: SAS2008(B2)

Controller Number            : 0
Controller                   : SAS2008(B2)
PCI Address                  : 00:03:00:00
SAS Address                  : b
NVDATA Version (Default)     : 0e
NVDATA Version (Persistent)  : 0e
Firmware Product ID          : 0x2213
Firmware Version             :
NVDATA Vendor                : LSI
NVDATA Product ID            : SAS9200-8e
BIOS Version                 :

Finished Processing Commands Successfully.
Exiting SAS2Flash.

Save the output of this command; the SAS address will be needed later.

The next command erases all code on the LSI SAS HBA.

/opt/lsi/bin # ./sas2flash -o -e 7

LSI Corporation SAS2 Flash Utility

Executing Operation: Erase Flash
Erasing Entire Flash Region (including MPB)...
Resetting Adapter...
Reset Successful!
Finished Processing Commands Successfully.
Exiting SAS2Flash.

/opt/lsi/bin # ./sas2flash -o -f /vmfs/volumes/datastore1/other/9200-8e.bin
LSI Corporation SAS2 Flash Utility

Executing Operation: Flash Firmware Image
Firmware Image has a Valid Checksum.
Firmware Image compatible with Controller.
Valid NVDATA Image found.
NVDATA Device ID and Chip Revision match verified.
NVDATA Versions Compatible.
Valid Initialization Image verified.
Valid BootLoader Image verified.
Beginning Firmware Download...
Firmware Download Successful.
Verifying Download...
Firmware Flash Successful!
Resetting Adapter...
Adapter Successfully Reset.
Finished Processing Commands Successfully.
Exiting SAS2Flash.

/opt/lsi/bin # ./sas2flash -o -b /vmfs/volumes/datastore1/other/mptsas2.rom
LSI Corporation SAS2 Flash Utility

Executing Operation: Flash BIOS Image
Validating BIOS Image...
BIOS Header Signature is Valid
BIOS Image has a Valid Checksum.
BIOS PCI Structure Signature Valid.
BIOS Image Compatible with the SAS Controller.
Attempting to Flash BIOS Image...
Flash BIOS Image Successful.
Finished Processing Commands Successfully.
Exiting SAS2Flash.

/opt/lsi/bin # ./sas2flash -o -sasadd b    (change to your actual SAS address)

/opt/lsi/bin # ./sas2flash -list
LSI Corporation SAS2 Flash Utility

PCI Address                  : 00:03:00:00
SAS Address                  : b
NVDATA Version (Default)     : 0e
NVDATA Version (Persistent)  : 0e
Firmware Product ID          : 0x2213
Firmware Version             :
NVDATA Vendor                : LSI
NVDATA Product ID            : SAS9200-8e
BIOS Version                 :

Finished Processing Commands Successfully.
Exiting SAS2Flash.

The next section illustrates how to use SANtricity ES to create host mappings on the E-Series array. The host map should show two SAS addresses, rather than one.

Configure Raw Device Mapping

The ESXi host servers must be mapped to the E-Series storage system. The volumes created on the E-Series storage array must then be mapped to specific host servers. The sample configuration documented here uses LSI 9200-8e SAS HBAs as the connection from host servers to storage controllers. To map a host server (ESXi host) to the storage array, the SAS addresses must be known for each host. There are several ways to locate this information; two such methods are shown here.

1. Boot the server and invoke the LSI configuration utility for the 9200-8e SAS HBA.
   a. During an ESXi host server reboot, press Ctrl+C to invoke the LSI configuration utility at the appropriate time.
   b. Select the adapter (there will be only one).
   c. Observe and document the SAS address shown on the main Adapter Properties screen. This address will be needed later.

2. Use the LSI sas2flash command line utility.
   a. This procedure relies on having the LSI sas2flash utility installed. This utility is available from the LSI Support Page. The example shown here is for an LSI 9200-8e SAS HBA (PCI card).
   b. Connect to the ESXi host server shell and log in as root.
   c. The sas2flash utility (as installed using LSI's documented procedure) is usually located in the /opt/lsi/bin directory on the server. Issue the following commands to find the SAS address for the LSI 9200-8e SAS HBA. Document the SAS address listed in the output; it will be needed later.

login as: root
Using keyboard-interactive authentication.
Password:
~ # cd /opt/lsi/bin
/opt/lsi/bin # ./sas2flash -list
LSI Corporation SAS2 Flash Utility Version ( )
Copyright (c) LSI Corporation. All rights reserved

Adapter Selected is a LSI SAS: SAS2008(B2)

Controller Number            : 0
Controller                   : SAS2008(B2)
PCI Address                  : 00:03:00:00
SAS Address                  : b-0-04f
NVDATA Version (Default)     : 0f
NVDATA Version (Persistent)  : 0f
Firmware Product ID          : 0x2213 (IT)
Firmware Version             :
NVDATA Vendor                : LSI
NVDATA Product ID            : SAS9200-8e
BIOS Version                 :
UEFI BSD Version             : N/A

FCODE Version                : N/A
Board Name                   : SAS9200-8e
Board Assembly               : H C
Board Tracer Number          : SP

The SAS address found using either of these methods is the address for the first SAS port. The second port has an address that is the same except for the last digit. Use this information to map hosts to the E-Series storage later.

Define Hosts to E-Series Storage Array

1. Define hosts to the E-Series storage array using SANtricity ES. Use the standard procedure for mapping a host to an E-Series storage array: Host Mappings > Define > Host dialog.
2. Define one host entry per ESXi host; map both SAS ports for that host server to the host entry.
3. Select VMware as the host type before completing the define host procedure. The following screenshot illustrates two hosts mapped to an E-Series array.
4. Right-click to see the details for a single-mapping entry. You should see information similar to the following, with both SAS addresses from the host LSI 9200-8e SAS HBA.

5. Map volumes to the ESXi hosts using Table 18. The table lists the sample configuration volumes discussed previously. The LUN that must be specified is also shown, along with information that will be used later to map the drives to virtual machines and to the Windows operating system.

Table 18) Logical view of volumes and LUNs for mapping hosts.

Volume Name       Mapped to Server (ESXi Host)   Associated Virtual Machine Name   LUN   Virtual Machine Windows Server Drive Letter
VOL_BOOKMARKS     SVR-1                          SVR-10                            0     B
VOL_ARCHIVE_1     SVR-2                          SVR-20                            1     E
VOL_ARCHIVE_2     SVR-2                          SVR-21                            2     E
VOL_ARCHIVE_3     SVR-2                          SVR-22                            3     E
VOL_ARCHIVE_4     SVR-3                          SVR-30                            4     E
VOL_ARCHIVE_5     SVR-3                          SVR-31                            5     E
VOL_ARCHIVE_6     SVR-3                          SVR-32                            6     E
VOL_ARCHIVE_7     SVR-4                          SVR-40                            7     E
VOL_ARCHIVE_8     SVR-4                          SVR-41                            8     E
VOL_ARCHIVE_9     SVR-4                          SVR-42                            9     E
VOL_ARCHIVE_10    SVR-4                          SVR-43                            10    E
VOL_LIVE_1        SVR-2                          SVR-20                            11    L
VOL_LIVE_2        SVR-2                          SVR-21                            12    L
VOL_LIVE_3        SVR-2                          SVR-22                            13    L
VOL_LIVE_4        SVR-3                          SVR-30                            14    L
VOL_LIVE_5        SVR-3                          SVR-31                            15    L
VOL_LIVE_6        SVR-3                          SVR-32                            16    L
VOL_LIVE_7        SVR-4                          SVR-40                            17    L
VOL_LIVE_8        SVR-4                          SVR-41                            18    L
VOL_LIVE_9        SVR-4                          SVR-42                            19    L
VOL_LIVE_10       SVR-4                          SVR-43                            20    L
VOL_FAILOVER_1    SVR-1                          SVR-12                            21    F
VOL_FAILOVER_2    SVR-1                          SVR-13                            22    F
VOL_FAILOVER_3    SVR-2                          SVR-23                            23    F
VOL_FAILOVER_4    SVR-3                          SVR-33                            24    F

The following example illustrates a physical host with two volumes mapped to the host. All volumes for the guest virtual machines are mapped to the physical (ESXi) host. If there are four virtual machines on the physical host with two volumes per virtual machine, there will be eight volumes mapped to the physical host. The previous example illustrated only the first two volumes mapped to the physical machine. After all the volumes are mapped to the physical machine, they are assigned to the appropriate guest machine by the RDM function.

Configure Raw Device Mapping on ESXi Hosts

RDM is used to present E-Series LUNs directly to ESXi hosts and virtual machines. To configure RDM for volumes, perform the following steps. Both the vSphere client and a command line session are needed to configure RDM. Several vmkfstools commands are issued to each physical ESXi host to map all E-Series LUNs that will be used by virtual machines on the host servers. The goal is to create a series of commands to be executed in the ESXi shell to create the RDM entries for all LUNs used by each ESXi host. The commands, when built, will look similar to the following example.

The sample commands listed here cannot be used directly; they are only a sample (template) showing the syntax of the commands to be built, using the following procedure. This is the syntax of the commands to be created and executed:

vmkfstools -z /vmfs/devices/disks/<your naa-id here> /vmfs/volumes/<your vmfs-volume here>/<your vm-name here>/<diskname here>.vmdk

Here is an actual complete vmkfstools command showing the final syntax. The actual command is issued to ESXi as a single-line command with a space character after the network address authority (NAA) ID:

vmkfstools -z /vmfs/devices/disks/naa.60080e50002e a3d506d9132 /vmfs/volumes/datastore1/svr-10/rec-svr-10-rdmlun0.vmdk

Following are templates for the commands that will be created for each of the four ESXi host servers in the sample configuration. Create four text files with content similar to the following commands. The actual commands can be created from these templates. The naa.x in each command is replaced by the appropriate actual NAA ID value. The LUN numbers are taken from the sample volume configuration in Table 18.
# SVR-1: for Bookmarks LUN, 2 Failover LUNs:
vmkfstools -z /vmfs/devices/disks/naa.x /vmfs/volumes/datastore1/svr-10/rec-svr-10-rdmlun0.vmdk
vmkfstools -z /vmfs/devices/disks/naa.x /vmfs/volumes/datastore1/svr-12/rec-svr-12-rdmlun21.vmdk
vmkfstools -z /vmfs/devices/disks/naa.x /vmfs/volumes/datastore1/svr-13/rec-svr-13-rdmlun22.vmdk

# SVR-2: for 3 Archive LUNs, 3 Recording LUNs, 1 Failover LUN
vmkfstools -z /vmfs/devices/disks/naa.x /vmfs/volumes/datastore1/svr-20/rec-svr-20-rdmlun1.vmdk
vmkfstools -z /vmfs/devices/disks/naa.x /vmfs/volumes/datastore1/svr-21/rec-svr-21-rdmlun2.vmdk
vmkfstools -z /vmfs/devices/disks/naa.x /vmfs/volumes/datastore1/svr-22/rec-svr-22-rdmlun3.vmdk
vmkfstools -z /vmfs/devices/disks/naa.x /vmfs/volumes/datastore1/svr-20/rec-svr-20-rdmlun11.vmdk
vmkfstools -z /vmfs/devices/disks/naa.x /vmfs/volumes/datastore1/svr-21/rec-svr-21-rdmlun12.vmdk
vmkfstools -z /vmfs/devices/disks/naa.x /vmfs/volumes/datastore1/svr-22/rec-svr-22-rdmlun13.vmdk
vmkfstools -z /vmfs/devices/disks/naa.x /vmfs/volumes/datastore1/svr-23/rec-svr-23-rdmlun23.vmdk

# SVR-3: for 3 Archive LUNs, 3 Recording LUNs, 1 Failover LUN
vmkfstools -z /vmfs/devices/disks/naa.x /vmfs/volumes/datastore1/svr-30/rec-svr-30-rdmlun4.vmdk
vmkfstools -z /vmfs/devices/disks/naa.x /vmfs/volumes/datastore1/svr-31/rec-svr-31-rdmlun5.vmdk
vmkfstools -z /vmfs/devices/disks/naa.x /vmfs/volumes/datastore1/svr-32/rec-svr-32-rdmlun6.vmdk
vmkfstools -z /vmfs/devices/disks/naa.x /vmfs/volumes/datastore1/svr-30/rec-svr-30-rdmlun14.vmdk
vmkfstools -z /vmfs/devices/disks/naa.x /vmfs/volumes/datastore1/svr-31/rec-svr-31-rdmlun15.vmdk
vmkfstools -z /vmfs/devices/disks/naa.x /vmfs/volumes/datastore1/svr-32/rec-svr-32-rdmlun16.vmdk
vmkfstools -z /vmfs/devices/disks/naa.x /vmfs/volumes/datastore1/svr-33/rec-svr-33-rdmlun24.vmdk

# SVR-4: for 4 Archive LUNs, 4 Recording LUNs
vmkfstools -z /vmfs/devices/disks/naa.x /vmfs/volumes/datastore1/svr-40/rec-svr-40-rdmlun7.vmdk
vmkfstools -z /vmfs/devices/disks/naa.x /vmfs/volumes/datastore1/svr-41/rec-svr-41-rdmlun8.vmdk
vmkfstools -z /vmfs/devices/disks/naa.x /vmfs/volumes/datastore1/svr-42/rec-svr-42-rdmlun9.vmdk
vmkfstools -z /vmfs/devices/disks/naa.x /vmfs/volumes/datastore1/svr-43/rec-svr-43-rdmlun10.vmdk
vmkfstools -z /vmfs/devices/disks/naa.x /vmfs/volumes/datastore1/svr-40/rec-svr-40-rdmlun17.vmdk
vmkfstools -z /vmfs/devices/disks/naa.x /vmfs/volumes/datastore1/svr-41/rec-svr-41-rdmlun18.vmdk
vmkfstools -z /vmfs/devices/disks/naa.x /vmfs/volumes/datastore1/svr-42/rec-svr-42-rdmlun19.vmdk
vmkfstools -z /vmfs/devices/disks/naa.x /vmfs/volumes/datastore1/svr-43/rec-svr-43-rdmlun20.vmdk

To locate the NAA ID identifier for each LUN, follow these steps.

1. Log in to the host using the vSphere client and click the host server name in the left pane.
2. Click the Configuration tab.

3. Under Hardware, select Storage Adapters.
4. Under Storage Adapters, click the name of the external SAS adapter LSI2008, listed as a block SCSI device. The device is named vmhba0 or similar.
5. Under Details, select the Devices tab.
6. Under Devices, locate the first LUN to be used. Be sure that the LUN number shown in the LUN column matches the LUN of the E-Series volume you are preparing to set up for RDM. Also verify the expected LUN size using the Capacity column in the vSphere client. Scroll down to see all of these columns in vSphere.
7. If the expected LUNs do not show up or appear to be an old list (erroneous or from a previous setup), click Rescan All in the upper right of the vSphere client interface.
8. Right-click the data line for the LUN.
9. Select Copy identifier to clipboard. This naa value is the NAA identifier for the LUN.

10. Paste the value from the clipboard in the appropriate place of the command file you are building. In the sample files shown earlier, this replaces the naa.x in the command.
11. Repeat this procedure to build the command lines for all LUNs on each ESXi host. The goal is to have a set of vmkfstools commands to execute for the LUNs on each ESXi host: that is, four groups of commands, one for each of the four hosts in the sample configuration used in this document.
12. Carefully verify the syntax of the commands using the preceding examples. The commands must be in the correct format to be executed properly.
13. After the commands have been built, execute them at the command line on each ESXi host. Use an SSH session to the server for this purpose. If there are syntax problems with the commands, copy, paste, and execute one command line at a time.
14. Repeat for all LUNs on the host, and then repeat the procedure for the other hosts.

Configure Raw Device Mapping on Virtual Machines

Next, map the RDM drives to virtual machines using the following steps. This allows the Windows operating system running on the virtual machine to access the drives.

1. Log in to the host using the vSphere client application.
2. Right-click the virtual machine and select Edit Settings.
3. With the Hardware tab active, click the Add button at the top of the window.
4. In the next window, in the center column, click Hard Disk and then click Next.
5. Select Use an existing virtual disk and click Next.
6. For Disk File Path, click the Browse button.
7. Double-click the datastore that this virtual machine is using.
8. Double-click the folder representing this virtual machine's name.
9. Select the VMDK name corresponding to the desired LUN, then click OK. This is the same name as created in the ESXi shell command entered using an SSH session to create the RDM in a previous step. The following screenshot shows an example of a vmdk file name chosen for a LUN for one of the SVR virtual machines.
10. Click OK.
11. Select defaults for the Advanced Options details and click Next.
12. The window says Ready to Complete. Verify the information and click Finish. It should appear similar to the following screenshot:

13. Click OK in the Virtual Machine Settings window.
14. Repeat for each LUN on a given ESXi host.
15. Repeat for the other ESXi hosts.

Configure Raw Device Mapping for Fibre Channel

Configuring RDM for Fibre Channel host interfaces is a process similar to the previous SAS example. The host mapping to the physical machine must be done on the E-Series. The worldwide port name is used to identify the host.

Note: There is no need to use the vmkfstools commands as shown in the SAS example. Fibre Channel devices export a global serial number that ESXi uses to uniquely identify the device, whereas SAS-attached devices do not.

When the host and the storage array are attached to the fabric and mapped on the E-Series, the RDM can be configured for the host. Following is an example from the storage host configuration showing the ESXi host with a dual-port HBA.

Host: stlc200m2-7
Host type: VMware
Interface type: Fibre Channel
Host port identifier: 21:00:00:24:ff:3d:e3:82   Alias: stlc200m2-7-1
Host port identifier: 21:00:00:24:ff:3d:e3:83   Alias: stlc200m2-7-2
Data Assurance (DA) capable: Yes

All LUNs mapped to the host should appear under the storage adapters after you attach the server to the fabric, as shown in this example.

In the previous example, LUNs 9, 10, 19, 20, 21, and 91 were mapped to this physical host. On selecting vmhba2 or vmhba3, the active and standby paths to each LUN are displayed.

1. Log in to the vSphere client for each host machine to be modified.
2. Click the host machine name on the left.
3. Right-click the selected host and select Edit Settings.
4. Click Add > Hard Disk and then select the RDM button.

5. Select the target LUN.
6. Select Datastore, store with a virtual machine.
7. Select physical compatibility mode.

8. Use the highlighted virtual device node.
9. Click OK.

Complete the RDM mapping for all configured LUNs and virtual machines. When the virtual machines are powered on, the Windows disk management tool can be used to initialize, format, and map the disks in Windows.

Configure Virtual Machine Startup

VMware ESXi should be configured so that each virtual machine automatically starts in the event of a host reboot due to manual intervention or power failure.

1. Log in to the vSphere client for each host machine to be modified.
2. Click the host machine name on the left.
3. Click the Configuration tab.
4. Under Software, click Virtual Machine Startup/Shutdown.
5. On the upper right, click Properties.
6. Check the Allow virtual machines to start and stop automatically box.
7. Adjust the startup delay time as appropriate (a setting of 180 seconds has been tested).
8. Select and move up any servers that you want to start automatically, so that they appear under the Automatic Startup section.
9. Click OK. Observe that they now show up as desired on the Virtual Machine Startup/Shutdown page under Software on the Configuration tab.

The final configuration should appear similar to the following screenshot.

Note: Server names shown in this example are different from those described in the sample deployment design used in this document.

Install SANtricity ES Utilities on Virtual Machines

The native multipathing software in VMware ESXi handles multipath I/O (MPIO). It is not necessary to install an E-Series device-specific module (DSM) for MPIO on the Windows operating system when it is running on virtual machines under ESXi. It might be useful, however, to install the utilities and management components of the SANtricity ES program on virtual machines. NetApp recommends installing the SANtricity ES management component on at least one virtual machine in the configuration to serve as a management point in the video surveillance storage solution implementation.

It is also useful to have the SANtricity ES utilities installed on all virtual machines, because they provide useful tools for configuration and troubleshooting tasks. The SMdevices tool is particularly useful. To install the SANtricity ES utilities on virtual machines, follow these steps.

1. Use existing documentation for general information about how to install and use the NetApp E-Series SANtricity ES storage management tool.
2. When using the SANtricity ES installer, use the Custom install option and select only the Utilities option.
3. Verify the installation in Windows on each virtual machine by using a command line window (running cmd.exe). Navigate to:
C:\Program Files (x86)\Storage Manager\util
4. Run the command SMdevices; it should list all E-Series volumes mapped to that virtual machine and display various information, including the current and preferred controller for each volume mapped to the virtual machine.

A sample output of SMdevices is shown in the section Sample Configuration Cisco UCS-C220-M2 ESXi Fibre Channel.

Map Drives in Windows Operating System on All Virtual Machines

Use the information reported by SMdevices to aid in mapping E-Series LUNs to drive letters in Windows. The Windows Server operating system requires mapping a drive to a drive letter and performing other configuration steps before the drive is usable. The procedure for mapping a drive to a drive letter is a common Windows system management process that is standard and well documented in Microsoft's documentation and help files.

The Windows disk management tool identifies an E-Series logical unit number (LUN) as a drive that must be initialized, formatted, and mapped to a drive letter in Windows Server 2008 before the capacity can be utilized. The disk management tool is used to view and set details, such as the configuration of drive type, volume name, and allocation unit size. For video surveillance implementations, an allocation unit size of 64K is recommended. For more information about how to use the disk management tool, refer to Microsoft's documentation.

Install VMware Tools on Each Virtual Machine

VMware Tools is a set of features that enhances graphics and mouse performance and the experience of using the vSphere console tab.
The installation of VMware Tools is done in the vSphere client application. The virtual machine must be powered on, and the guest operating system must be running.

1. Log in to the ESXi host machine with the vSphere client application.
2. Click the + sign next to the host name to see the list of VMs running on the host.
3. Right-click a virtual machine and select Guest > Install/Upgrade VMware Tools. This mounts a virtual CD, containing the installer program, onto the virtual machine.
4. Use RDP or the vSphere client console tab to connect to the Windows virtual machine.
5. Open Windows Explorer and click Computer.
6. Double-click the icon for the VMware Tools installer. The installer prepares for product installation.

Note: The task of preparing VMware Tools for installation might take a long time to complete. It might appear that nothing is happening. Wait for the installer initialization to complete and for the actual VMware Tools installation window to open.

7. The VMware Tools installer window should open shortly.
8. When the computing space requirements task has completed, click the Next button and follow the prompts to perform a typical installation of VMware Tools. Click Finish.
9. Click Yes when prompted to perform a required reboot of the virtual machine.
10. After the VM has rebooted, verify that VMware Tools is installed. To do so, use the vSphere client application and examine the Summary tab for the virtual machine. It should appear similar to the following screenshot:

11. Double-click the VMware Tools icon in the system tray to display the About VMware Tools menu.

Back Up ESXi Configuration

After all ESXi hosts have been configured, create a backup of the configuration in case the physical server must be replaced. Log in to each ESXi host shell and issue these commands. The process flushes any configuration changes and creates an archive file of the host configuration:

~ # vim-cmd hostsvc/firmware/sync_config
~ # vim-cmd hostsvc/firmware/backup_config
Bundle can be downloaded at :
~ #

The file created can be saved by opening a web browser and substituting the IP address of the ESXi host for the sample IP address in the URL shown in the previous output. To restore the configuration, the archive file must be uploaded to /tmp/configbundle.tgz, and the host must be placed in maintenance mode before restoring the configuration.

~ # vim-cmd hostsvc/maintenance_mode_enter
~ # vim-cmd hostsvc/firmware/restore_config /tmp/configbundle.tgz

For a detailed example of this procedure, refer to How To Backup & Restore Free ESXi Host Configuration.

15 Verification and Troubleshooting

This section provides general guidelines to verify network connectivity between the switches and servers in the video surveillance storage solution, as well as cameras and viewing workstations in the customer network. In addition to the test procedures, the following solution component log files can be monitored for errors or warning messages.

Microsoft Event Viewer: The Microsoft Event Viewer utility may be used to query system log file entries for MPIO- and DSM-related events.

VMware event log: The event log for VMware hypervisors can be queried from the vSphere client by selecting the Events tab and highlighting the physical hardware.

Major event log (MEL): The SANtricity ES Array Management GUI can be used to view the MEL on the storage array. Use the SUPPORT tab and select View Event Log. The dialog box has drop-down boxes that allow the administrator to filter events, optionally view details, and save the log files to the local disk of the workstation.

VMS event log: The VMS management client provides a means to review the software system event logs. The recording servers in Milestone and OnSSI have log files that can provide state and debugging details. They are hidden files on each recording server starting at the C:\ProgramData\ directory:
C:\ProgramData\OnSSI\RC-E-Recorder\Logs
C:\ProgramData\Milestone\XProtect Corporate Recording Server\Logs

Network switch logging buffer: The logging buffer of the network switches and routers supporting the deployment (for example, Cisco Nexus 3048 switches) should be examined for any warnings or errors. The relevant interface counters should be monitored for packet loss or errors.

CIMC faults and log file: The physical server CIMC main screen has an option to view hardware-related faults and logs. Verify the system event log, the fault summary, and the CIMC log for operational errors.

Sample Network Topology

The sample network topology shown in Figure 49 provides a reference for the validation steps described in this section.

Figure 49) Sample network topology.

The video surveillance storage solution does not provide Network Time Protocol (NTP) servers or Domain Name System (DNS) services. An accurate clock source through NTP is required for proper functioning of the system. DNS is a recommended, but optional, service.

A client viewing workstation is used to allow the physical-security manager to view live or recorded video clips. One or more client viewing workstations may be deployed at various locations in the network topology, including extranet locations such as police substations or VPN/teleworker connections. By default, the rack Cisco Nexus 3048 switches do not have ports enabled for use by client viewing workstations.

15.2 Verify Time and Reachability to Network Time Protocol Servers

Accurate time on all components of a video surveillance storage solution is critical to the functionality of the system. Without a consistent and accurate timestamp, it cannot be proven when the events depicted in the video archive happened. This procedure demonstrates how to verify the time and the reachability to the configured NTP servers. As a prerequisite, it is assumed that all components have been configured to use two or more NTP servers. The current time can be determined by referring to the USNO Master Clock webpage.

This section verifies the time on:

Windows 2008 R2 servers
Cisco Nexus 3048 switches
VMware ESXi hypervisor
E-Series controller clocks

Windows 2008 R2 Server

Use the w32tm command from a Windows command prompt to verify the correct time and time zone. An accurate time value should be listed in Last Successful Sync Time, and the peer state should be Active.

C:\Users\Administrator>w32tm /query /status /verbose
Leap Indicator: 0(no warning)
Stratum: 6 (secondary reference - syncd by (S)NTP)
Precision: -6 (15.625ms per tick)
Root Delay: s

Root Dispersion: s
ReferenceId: 0xC (source IP: )
Last Successful Sync Time: 3/7/ :47:54 AM
Source:
Poll Interval: 10 (1024s)
Phase Offset: s
ClockRate: s
State Machine: 1 (Hold)
Time Source Flags: 0 (None)
Server Role: 0 (None)
Last Sync Error: 0 (The command completed successfully.)
Time since Last Good Sync Time: s

C:\Users\Administrator>w32tm /query /peers
#Peers: 2
Peer:
State: Active
Time Remaining: s
Mode: 3 (Client)
Stratum: 5 (secondary reference - syncd by (S)NTP)
PeerPoll Interval: 17 (out of valid range)
HostPoll Interval: 10 (1024s)
Peer:
State: Active
Time Remaining: s
Mode: 3 (Client)
Stratum: 5 (secondary reference - syncd by (S)NTP)
PeerPoll Interval: 17 (out of valid range)
HostPoll Interval: 10 (1024s)

Cisco Nexus 3048 Switches

Log in to each switch and verify the correct time and time zone on the Cisco Nexus 3048 switches by using the show clock detail command and the show ntp peer-status command. The output of show clock detail should represent an accurate time, and the peer status should have a value from 1 through 377 in the reach column.

VSS # show clock detail
10:57: est Thu Mar
summer-time configuration:
timezone name: edt
starts : 2 Sun Mar at 2:00 hours
Ends : 1 Sun Nov at 2:00 hours
Minute offset: 60

VSS # show ntp peer-status
Total peers : 1
* - selected for sync, + - peer mode(active), - - peer mode(passive), = - polled in client mode
remote local st poll reach delay
*

VMware ESXi Hypervisor

Log in to the ESXi host (SSH) and issue the esxcli system time get command to verify the correct time. There is no time zone offset; the time is shown as Zulu (GMT) time.

~ # esxcli system time get
T16:00:12Z

E-Series Controller Clocks

The time on the storage array is set by synchronizing the controller clocks with the time of the host managing the storage array with SANtricity. For this reason, the SANtricity host must be configured with an accurate time source through Simple Network Time Protocol (SNTP)/Network Time Protocol (NTP).

Note: The clocks on the controllers will drift and must be manually synchronized periodically.

The clocks on the storage array can be manually synchronized to the SANtricity host from the Enterprise Management window by highlighting the array and right-clicking to select Execute Script. The command to execute is set storageArray time. Alternately, the Array Management window prompts the administrator to synchronize the controller clocks if they are out of synchronization by more than a few minutes when compared to the storage management station. An example is shown in the following screenshot.
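The w32tm checks shown earlier in this section lend themselves to automation across many recording servers. The following sketch parses w32tm /query /status output and applies a loose sanity check (a stratum between 1 and 15 and a recorded sync time); the thresholds and the sample date in the demo are illustrative assumptions, not values mandated by the solution.

```python
import re

def parse_w32tm_status(text):
    """Parse `w32tm /query /status` output into a {field: value} dict.
    Splits on the first colon only, so time-of-day values stay intact."""
    info = {}
    for line in text.splitlines():
        if ":" not in line:
            continue
        key, _, value = line.partition(":")
        info[key.strip()] = value.strip()
    return info

def looks_synchronized(info):
    """Loose sanity check: a plausible stratum and a recorded sync time.
    The 1..15 stratum range is a heuristic, not a Windows requirement."""
    match = re.search(r"\d+", info.get("Stratum", ""))
    if not match or not (1 <= int(match.group()) <= 15):
        return False
    return bool(info.get("Last Successful Sync Time"))

# Demo on sample output; the date below is illustrative.
SAMPLE = """Leap Indicator: 0(no warning)
Stratum: 6 (secondary reference - syncd by (S)NTP)
Last Successful Sync Time: 3/7/2013 10:47:54 AM"""

info = parse_w32tm_status(SAMPLE)
print(info["Stratum"], looks_synchronized(info))
```

In practice, the w32tm output would be collected remotely (for example, over WinRM) and fed to parse_w32tm_status for each server.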

15.3 Verify Reachability to Gateway Addresses

All recording and management servers must be configured with a gateway (next-hop router) address, and the SVIs of the routers should be configured with a virtual address (HSRP, for example) for high availability. This section verifies that the video recording and management servers can reach the gateway address. As a prerequisite, it is assumed that all virtual or physical recording servers have been powered up and properly configured. It is also assumed that the supporting router/switch configuration has been completed.

This section verifies:

Reachability to the video ingress network virtual IP address
Reachability to the server management virtual IP address

Note: Do not be concerned if the first request fails. If all requests fail, verify the server addressing, routing configuration, and SVI configuration.

Server Management Virtual IP Address

Log in to each virtual machine and issue the ping (ICMP ECHO request) command for the gateway virtual IP address from the command prompt.

C:\Users\Administrator>ping
Pinging with 32 bytes of data:
Reply from : bytes=32 time<1ms TTL=255
Reply from : bytes=32 time<1ms TTL=255
Reply from : bytes=32 time<1ms TTL=255
Reply from : bytes=32 time<1ms TTL=255
Ping statistics for :
Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:

Minimum = 0ms, Maximum = 0ms, Average = 0ms

Video Ingress Network Virtual IP Address

Log in to each virtual machine and issue the ping (ICMP ECHO request) command for the gateway virtual IP address from the command prompt.

C:\Users\Administrator>ping
Pinging with 32 bytes of data:
Reply from : bytes=32 time<1ms TTL=255
Reply from : bytes=32 time<1ms TTL=255
Reply from : bytes=32 time<1ms TTL=255
Reply from : bytes=32 time<1ms TTL=255
Ping statistics for :
Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
Minimum = 0ms, Maximum = 0ms, Average = 0ms

15.4 Verify Connectivity to Network Video Cameras

This section assumes that the network switches supporting the deployment have been properly integrated into the customer network. The expected outcome is that the recording servers can reach the network video cameras. If this test fails, video archiving will not function properly.

This section verifies:

Connectivity from the top-of-rack switches
Connectivity from all recording servers

Verify Connectivity from Top-of-Rack Cisco Nexus 3048 Switches

Assuming that the gateway address configured in the network video cameras and the address of at least one camera are known, verify connectivity to both the gateway and one or more cameras on the network. It is also assumed that the virtual routing and forwarding (VRF) instance in use is named default.

VSS # traceroute vrf default
traceroute to ( ), 30 hops max, 40 byte packets
( ) 1.05 ms ms ms
( ) ms ms ms
( ) ms * ms

VSS # traceroute vrf default
traceroute to ( ), 30 hops max, 40 byte packets
( ) ms ms ms
( ) 0.91 ms ms ms
( ) ms ms ms
( ) ms ms ms

Verify Connectivity from All Recording Servers

This step is used to verify connectivity from all virtual machines to one or more network video camera IP addresses prior to the installation of the VMS software.
C:\Users\Administrator>tracert -d
Tracing route to over a maximum of 30 hops
1 <1 ms <1 ms <1 ms
2 <1 ms <1 ms <1 ms
3 <1 ms <1 ms <1 ms
4 <1 ms <1 ms <1 ms

5 <1 ms <1 ms <1 ms
Trace complete.

15.5 Show Interface Command

This section assumes that the Cisco Nexus 3048 switches are configured and that all server and E-Series interfaces are connected and configured.

Show Interface Brief

Log in to each switch and issue the show interface brief command. It is used to verify the interface, VLAN assignment, status, and PortChannel number.

VSS # show interface brief
Ethernet VLAN Type Mode Status Reason Speed Port
Interface Ch #
Eth1/ eth access up none 1000(D) 1
Eth1/2 58 eth access up none 1000(D) 58
Eth1/3 2 eth access down Link not connected auto(d) --
Eth1/4 2 eth access down Link not connected auto(d) --
Eth1/ eth access up none 1000(D) 3
Eth1/6 2 eth access down Link not connected auto(d) --
Eth1/7 2 eth access down Link not connected auto(d) --
Eth1/8 2 eth access down Link not connected auto(d) --
Eth1/9 2 eth access up none 1000(D) --
Eth1/10 2 eth access down Link not connected auto(d) --
Eth1/11 2 eth access down Link not connected auto(d) --
Eth1/12 2 eth access down Link not connected auto(d) --
Eth1/ eth access up none 1000(D) 1
Eth1/14 7 eth access up none 1000(D) --
Eth1/15 2 eth access down Link not connected auto(d) --
Eth1/16 7 eth access down Link not connected auto(d) --
Eth1/ eth access up none 1000(D) 3
Eth1/18 7 eth access down Link not connected auto(d) --
Eth1/19 2 eth access down Link not connected auto(d) --
Eth1/20 7 eth access down Link not connected auto(d) --
Eth1/21 2 eth access up none 1000(D) --
Eth1/22 7 eth access down Link not connected auto(d) --
Eth1/23 2 eth access down Link not connected auto(d) --
Eth1/24 7 eth access down Link not connected auto(d) --
Eth1/ eth access up none 1000(D) 2
Eth1/26 7 eth access down Link not connected auto(d) --
Eth1/27 2 eth access down Link not connected auto(d) --
Eth1/28 7 eth access down Link not connected auto(d) --
Eth1/ eth access up none 1000(D) 4
Eth1/30 7 eth access down Link not connected auto(d) --
Eth1/31 2 eth access down Link not connected auto(d)
Eth1/32 7 eth access down Link not connected auto(d) --
Eth1/ eth access up none 1000(D) --
Eth1/34 7 eth access down Link not connected auto(d) --
Eth1/35 2 eth access down Link not connected auto(d) --
Eth1/36 7 eth access down Link not connected auto(d) --
Eth1/ eth access up none 1000(D) 2
Eth1/38 2 eth access down Link not connected auto(d) --
Eth1/39 2 eth access down Link not connected auto(d) --
Eth1/40 2 eth access down Link not connected auto(d) --
Eth1/ eth access up none 1000(D) 4
Eth1/42 2 eth access down Link not connected auto(d) --
Eth1/43 2 eth access down Link not connected auto(d) --
Eth1/44 2 eth access down Link not connected auto(d) --
Eth1/45 2 eth access up none 1000(D) --
Eth1/46 2 eth access down Link not connected auto(d) --
Eth1/47 2 eth access down Link not connected auto(d) --
Eth1/48 58 eth access up none 1000(D)

Eth1/49 3 eth trunk up none 10G(D) 59
Eth1/50 3 eth trunk up none 10G(D) 59
Eth1/51 -- eth routed up none 10G(D) --
Eth1/52 3 eth trunk up none 10G(D) --

Port-channel VLAN Type Mode Status Reason Speed Protocol
Interface
Po eth access up none a-1000(d) none
Po eth access up none a-1000(d) none
Po eth access up none a-1000(d) none
Po eth access up none a-1000(d) none
Po10 3 eth trunk up none a-10g(d) lacp
Po58 58 eth access up none a-1000(d) lacp
Po59 3 eth trunk up none a-10g(d) lacp

Port VRF Status IP Address Speed MTU
mgmt0 -- up

Interface Secondary VLAN(Type) Status Reason
Vlan1 -- down Administratively down
Vlan7 -- up --
Vlan58 -- up --
Vlan up

Interface Status Description
Lo0 up

15.6 Verify Virtual PortChannel

This section assumes that the Cisco Nexus 3048 switches are cabled and configured properly. Issue these commands on both switches. The expected outcome is that the vPC configuration is functional and all links are up and operational.

Note: If the status column indicates notconnec, sfpabsent, or noopermem, verify the switch configuration and cabling.

Procedures in this section include:

Show the interface status for the vPC-related interfaces
Show the vPC configuration
Show orphan ports
Show spanning tree

Show Interface Status

Log in to the Cisco Nexus 3048 switches and issue the show interface status command with the following parameters. The pipe to inc is a UNIX grep-like filter for eliminating extraneous information.

VSS # show interface status | inc vpc
Port Name Status Vlan Duplex Speed Type
Eth1/2 vpc_peer-keepalive connected 58 full /100/1000BaseT
Eth1/48 vpc_peer-keepalive connected 58 full /100/1000BaseT
Eth1/49 vpc peer link connected trunk full 10G SFP-H10GB-CU1M
Eth1/50 vpc peer link connected trunk full 10G SFP-H10GB-CU1M
Po10 L2 Portchannel to noopermem trunk full auto

Po58 vpc_peer-keepalive connected 58 full
Po59 vpc peer link connected trunk full 10G --

In the preceding example, all vPC-related interfaces except the Po10 interface show that they are connected. The Po10 interface is not properly connected to the customer network in this example.

Show vPC

Log in to the Cisco Nexus 3048 switches and issue the show vpc command.

VSS # show vpc
Legend:
(*) - local vpc is down, forwarding via vpc peer-link

vpc domain id : 58
Peer status : peer adjacency formed ok
vpc keep-alive status : peer is alive
Configuration consistency status: success
Per-vlan consistency status : success
Type-2 consistency status : success
vpc role : primary
Number of vpcs configured : 7
Peer Gateway : Disabled
Dual-active excluded VLANs : -
Graceful Consistency Check : Enabled

vpc Peer-link status
id Port Status Active vlans
Po59 up 3,7,2020

vpc status
id Port Status Consistency Reason Active vlans
Po1 up success success
Po2 up success success
Po3 up success success
Po4 up success success
Po10 down* success success -

In the preceding example, the peer status should report that the peer is alive and adjacency formed ok. The consistency status should report success. The vPC peer-link status should show an up status. Under the vpc status, each port (Po1, Po2, and so on) represents the PortChannel to the respective server (Server1, Server2, and so on) and should report an up status.

Show Orphan Ports

The show vpc orphan-ports command is used to verify that there are no orphan ports in the video ingress VLAN. This command should be issued from each switch in the vPC domain. Ports connected to the management network are shown as orphan ports because they are not part of a PortChannel configuration. The expected outcome is that there are no orphan ports in the video ingress VLAN, but the ports of the device management VLAN are listed as orphan ports.

VSS # show vpc orphan-ports
Note:
::Going through port database.
Please be patient.::

VLAN Orphan Ports
Eth1/14

VSS # show vpc orphan-ports
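The show vpc fields checked above (peer status, keep-alive status, and configuration consistency) can be screened programmatically when many switch pairs are involved. The following is a sketch that assumes the NX-OS field labels shown in the sample output; the failing values used in the test are hypothetical.

```python
def vpc_health(show_vpc_output):
    """Screen `show vpc` output for the fields checked in this section.
    Returns a list of (field, observed value) pairs that deviate from
    the healthy values; an empty list means the checks passed."""
    expected = {
        "Peer status": "peer adjacency formed ok",
        "vpc keep-alive status": "peer is alive",
        "Configuration consistency status": "success",
    }
    problems = []
    for raw in show_vpc_output.splitlines():
        line = raw.strip()
        for field, healthy in expected.items():
            if line.startswith(field) and ":" in line:
                value = line.split(":", 1)[1].strip()
                if healthy not in value:
                    problems.append((field, value))
    return problems

# Healthy sample, abbreviated from the output shown above.
GOOD = """vpc domain id : 58
Peer status : peer adjacency formed ok
vpc keep-alive status : peer is alive
Configuration consistency status: success"""

print(vpc_health(GOOD))  # → []
```

Any nonempty result points directly to the field that needs investigation, in the same wording as the CLI output.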

Show Spanning Tree

The show spanning-tree command is used to verify the type and state of the spanning tree protocol running on the PortChannel interfaces that connect the switches and servers. After logging in to each switch, issue the show spanning-tree vlan command with the video ingress VLAN number.

VSS # show spanning-tree vlan 2020
VLAN2020
Spanning tree enabled protocol rstp
Root ID Priority
Address 547f.ee78.51fc
Cost 3
Port 4154 (port-channel59)
Hello Time 2 sec Max Age 20 sec Forward Delay 15 sec
Bridge ID Priority (priority sys-id-ext 2020)
Address fc f101
Hello Time 2 sec Max Age 20 sec Forward Delay 15 sec
Interface Role Sts Cost Prio.Nbr Type
Po1 Desg FWD (vpc) Edge P2p
Po2 Desg FWD (vpc) Edge P2p
Po3 Desg FWD (vpc) Edge P2p
Po4 Desg FWD (vpc) Edge P2p
Po10 Root FWD (vpc) Network P2p
Po59 Root FWD (vpc peer-link) Network P2p

Note: As a best practice, the root (and secondary root) bridge in the topology should be explicitly configured on switches in the network core. Use the spanning-tree vlan vlan-id root [primary | secondary] command to configure the root and secondary root.

15.7 Verify Server Video Ingress Ports

This section verifies that the switch ports associated with the video ingress network ports of the physical servers are cabled and configured. The expected outcome is that all ports to the servers show as connected.

Note: The include filter assumes that the interface descriptions have been entered as shown in the implementation and configuration section.

Show Interface Status

Issue the show interface status command as follows on both Cisco Nexus 3048 switches and verify that the status is reported as connected.
VSS # show interface status | inc Server|Status
Port Name Status Vlan Duplex Speed Type
Eth1/1 Server 1 - vmnic5 connected 2020 full /100/1000BaseT
Eth1/5 Server 3 - vmnic5 connected 2020 full /100/1000BaseT
Eth1/13 Server 1 - vmnic3 connected 2020 full /100/1000BaseT
Eth1/17 Server 3 - vmnic3 connected 2020 full /100/1000BaseT
Eth1/25 Server 2 - vmnic5 connected 2020 full /100/1000BaseT
Eth1/29 Server 4 - vmnic5 connected 2020 full /100/1000BaseT
Eth1/37 Server 2 - vmnic3 connected 2020 full /100/1000BaseT
Eth1/41 Server 4 - vmnic3 connected 2020 full /100/1000BaseT
Po1 Server 1 connected 2020 full
Po2 Server 2 connected 2020 full
Po3 Server 3 connected 2020 full
Po4 Server 4 connected 2020 full
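When the port count is large, the show interface status output above can be screened for ports that are not in the connected state. This sketch assumes the column layout shown in the example output and treats notconnec, sfpabsent, noopermem, disabled, and down as states needing attention; the sample rows are abbreviated and the speed values in them are illustrative.

```python
# States that indicate a port needing attention; "connected" is healthy.
BAD_STATES = {"notconnec", "sfpabsent", "noopermem", "disabled", "down"}

def flag_ports(status_output):
    """Return (port, state) pairs for `show interface status` lines whose
    status is not 'connected'.  The Name column may contain spaces, so the
    status is located by scanning tokens rather than by fixed position."""
    flagged = []
    for line in status_output.splitlines():
        tokens = line.split()
        # Skip headers/separators; data rows start with EthX/Y or PoN.
        if not tokens or not tokens[0].startswith(("Eth", "Po")):
            continue
        state = next((t for t in tokens[1:] if t in BAD_STATES or t == "connected"), None)
        if state and state != "connected":
            flagged.append((tokens[0], state))
    return flagged

# Abbreviated sample based on the output above (speeds are illustrative).
SAMPLE = """Port Name Status Vlan Duplex Speed Type
Eth1/1 Server 1 - vmnic5 connected 2020 full 1000 10/100/1000BaseT
Eth1/16 SERVER 1 - CIMC notconnec 7 unknown auto 10/100/1000BaseT
Po1 Server 1 connected 2020 full"""

print(flag_ports(SAMPLE))  # → [('Eth1/16', 'notconnec')]
```

Feeding the captured output of both switches through flag_ports gives a quick list of server-facing ports still needing cabling or configuration work.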

15.8 Verify Device Management Ports

This section verifies that the servers and E-Series controllers are cabled and configured on the DEVICE_MANAGEMENT VLAN. The expected outcome is that the E2660 controller management ports are connected to the DEVICE_MANAGEMENT VLAN and that the three management interfaces for each Cisco UCS server are also connected.

Note: Servers 1 and 3 are connected to switch 1, and Servers 2 and 4 are connected to switch 2. The E2660 controllers A and B are connected to port 1/14 on both switches.

Verify the Management Port Status

Issue the show interface status command with the include parameters shown on both Cisco Nexus 3048 switches.

VSS # show interface status | inc CIMC|vmnic0|vmnic1|DEVI|Status
Port Name Status Vlan Duplex Speed Type
Eth1/14 E2660-A:DEVICE_MAN connected 7 full /100/1000BaseT
Eth1/16 SERVER 1 - CIMC:DE notconnec 7 unknown auto 10/100/1000BaseT
Eth1/18 SERVER 1 - vmnic0: notconnec 7 unknown auto 10/100/1000BaseT
Eth1/20 SERVER 1 - vmnic1: notconnec 7 unknown auto 10/100/1000BaseT
Eth1/22 SERVER 3 - CIMC:DE notconnec 7 unknown auto 10/100/1000BaseT
Eth1/24 SERVER 3 - vmnic0: notconnec 7 unknown auto 10/100/1000BaseT
Eth1/26 SERVER 3 - vmnic1: notconnec 7 unknown auto 10/100/1000BaseT
Eth1/28 DEVICE_MANAGEMENT notconnec 7 unknown auto 10/100/1000BaseT
Eth1/30 DEVICE_MANAGEMENT notconnec 7 unknown auto 10/100/1000BaseT
Eth1/32 DEVICE_MANAGEMENT notconnec 7 unknown auto 10/100/1000BaseT
Eth1/34 DEVICE_MANAGEMENT notconnec 7 unknown auto 10/100/1000BaseT
Eth1/36 DEVICE_MANAGEMENT notconnec 7 unknown auto 10/100/1000BaseT

Note: Ethernet ports 1/16 to 1/26 show notconnec for illustrative purposes only. When properly configured, their status should report connected, as is the case with port Ethernet 1/14.

15.9 Verify Uplinks

In this section, it is assumed that the top-of-rack Cisco Nexus 3048 switches are configured and integrated into the customer network.
Each switch should be configured with at least one uplink. Given that the deployment is streaming video from the networked video cameras, the combined input rate from both switches should equal the aggregate load to all recording servers. The link should report an up status. If the link status is down (link not connected) or administratively down, determine the root cause of the issue.

Show Interface

Issue the show interface command on both switches for all configured uplinks.

VSS # show interface ethernet 1/51
Ethernet1/51 is up
Hardware: 1000/10000 Ethernet, address: e4d3.f162.88bc (bia e4d3.f a)
Description: L3 UPLINK stl3048-f5-1 e1/51
Internet Address is /30
MTU 1500 bytes, BW Kbit, DLY 10 usec
reliability 255/255, txload 1/255, rxload 1/255
Encapsulation ARPA
full-duplex, 10 Gb/s, media type is 10G
Beacon is turned off
Input flow-control is off, output flow-control is off
Rate mode is dedicated
Switchport monitor is off
EtherType is 0x8100
Last link flapped 4week(s) 5day(s)
Last clearing of "show interface" counters never
30 seconds input rate bits/sec, bytes/sec, packets/sec
30 seconds output rate bits/sec, bytes/sec, 223 packets/sec

Load-Interval #2: 5 minute (300 seconds)
input rate Mbps, Kpps; output rate Kbps, 146 pps
RX
unicast packets multicast packets 1544 broadcast packets
input packets bytes
0 jumbo packets 0 storm suppression packets
0 giants 0 input error 0 short frame 0 overrun 0 underrun 0 watchdog
0 if down drop 0 input with dribble 0 input discard(includes ACL drops)
0 Rx pause
TX
unicast packets multicast packets 1346 broadcast packets
output packets bytes
0 jumbo packets 0 output errors 0 collision 0 deferred 0 late collision
0 lost carrier 0 no carrier 0 babble 0 Tx pause
1 interface resets

Note: In the preceding example, the input rate of this uplink is approximately 927Mbps. This data rate, when combined with the reported rate from the second Cisco Nexus 3048 switch, should approximately equal the average data rate per camera multiplied by the number of cameras.

15.10 Verify Configured Domain Name System (DNS) Servers

Assuming that the deployment uses DNS servers, they should be configured on each virtual machine and be able to resolve host names to IP addresses. The expected outcome is that the configured DNS servers can resolve host names to IP addresses. DNS is recommended and may be configured, but it is not required by the VMS.

This section includes the following procedures:

Use the ipconfig command to identify the configured DNS servers.
Resolve a host name to verify the function of, and connectivity to, the configured DNS servers.

Use ipconfig to Identify DNS Servers

Issue the ipconfig command from a Windows command prompt on each recording server to verify the configured DNS hosts.

C:\Users\Administrator>ipconfig /all
Windows IP Configuration
Host Name : RACK-SVR-45
Primary Dns Suffix :
Node Type : Hybrid
IP Routing Enabled : No
WINS Proxy Enabled : No
DNS Suffix Search List : stl.netapp.com

Ethernet adapter Local Area Connection 2:
Connection-specific DNS Suffix .
: stl.netapp.com
Description : Intel(R) PRO/1000MT Network Connection #2
Physical Address : 00-0C-29-A2-62-D9
DHCP Enabled : Yes
Autoconfiguration Enabled : Yes
Link-local IPv6 Address : fe80::b09c:3a43:e1fd:5c6a%13(Preferred)
IPv4 Address : (Preferred)
Subnet Mask :
Lease Obtained : Monday, February 04, :52:11 AM
Lease Expires : Saturday, March 09, :55:02 AM
Default Gateway :
DHCP Server :
DHCPv6 IAID :

DHCPv6 Client DUID : E-FC-AF-00-0C-29-A2-62-CF
DNS Servers :
NetBIOS over Tcpip : Enabled
[snip]

Resolve a Host Name

Use nslookup to resolve a host name to an IP address on each virtual machine.

C:\Users\Administrator>nslookup support.netapp.com
Server: acast-cns4.rtp.eng.netapp.com
Address:
Non-authoritative answer:
Name: support.netapp.com
Address:

15.11 Verify Connectivity Between VMS Components

This procedure assumes that the VMS (OnSSI Ocularis or Milestone XProtect) has been installed and that cameras have been configured on the recording servers. The expected outcome is that network connectivity has been established between the virtual machines in the deployment and the network video cameras. These procedures rely on the known port numbers in use by these two VMS software implementations.

Note: Although pinging (ICMP ECHO REQUEST) can be used to provide a rudimentary validation of network connectivity, this section uses netstat to reduce the number of test iterations and to validate connectivity at the transport layer (OSI Layer 4).

This section includes these procedures:

Verify that the base virtual machine has connectivity to each recording server virtual machine.
Verify that the base virtual machine has connectivity to the manager and failover recording servers.
Verify that each recording server can reach the configured cameras.

This section assumes that the user can log in to the respective virtual machine through either the vSphere client console connection or Windows Remote Desktop Services (Terminal Services), using the Microsoft Terminal Services client mstsc.exe, from a client workstation on the management network. To determine the host name and IP address of the machine into which you are logged, issue the command ipconfig /all | findstr "Host IPv4".
A sample output is shown as follows:

C:\Users\Administrator>ipconfig /all | findstr "Host IPv4"
Host Name : RACK-SVR-45
IPv4 Address : (Preferred)
IPv4 Address : (Preferred)

Verify Base Connectivity to Recording Servers

For an OnSSI Ocularis or Milestone XProtect implementation, log in to the base machine and verify that there is connectivity between the base machine and each recording server on port 9993. TCP port 9993 is used for communication between the recording servers and management servers. The netstat command is issued from the base machine. There should be an established session to each recording server.

C:\Users\Administrator>netstat -n | findstr "9993"
TCP : :58395 ESTABLISHED
TCP : :59924 ESTABLISHED

TCP : :60491 ESTABLISHED
TCP : :59570 ESTABLISHED
TCP : :56765 ESTABLISHED
TCP : :59496 ESTABLISHED
TCP : :61666 ESTABLISHED
TCP : :56940 ESTABLISHED
TCP : :56243 ESTABLISHED
TCP : :56383 ESTABLISHED

Verify Base Machine Connectivity to Manager and Failover Servers

The manager virtual machine and each failover recording server should have a TCP connection on port 80 to the base machine. Log in to the base machine and issue a netstat command to verify that there is at least one established TCP connection from the manager and from each failover recording server.

C:\Users\Administrator>netstat -n | findstr " :80"
TCP : :52113 ESTABLISHED
TCP : :52116 ESTABLISHED
TCP : :52117 ESTABLISHED
TCP : :49158 ESTABLISHED
TCP : :49160 ESTABLISHED
TCP : :49159 ESTABLISHED
TCP : :49161 ESTABLISHED
TCP : :49159 ESTABLISHED
TCP : :49160 ESTABLISHED
TCP : :49159 ESTABLISHED
TCP : :49160 ESTABLISHED

Note: In the previous example, the manager and the four failover recording servers each hold established connections to the base machine.

Verify That Each Recording Server Can Reach the Configured Network Video Cameras

After logging in to a recording server, connectivity between the recording server and a configured network video camera can be verified by using ping or tracert. Additionally, the recording server holds a control connection to each IP camera. The following examples illustrate how to validate connectivity to an IP camera on TCP port 8000.

C:\Users\Administrator>tracert -d
Tracing route to over a maximum of 30 hops
1 <1 ms <1 ms <1 ms
2 <1 ms <1 ms <1 ms
3 <1 ms <1 ms <1 ms
Trace complete.

C:\Users\Administrator>netstat -n | findstr " :8000"
TCP : :8000 ESTABLISHED

15.12 Verify Connectivity of Client Viewing Workstations

This section assumes that the VMS software is installed on all virtual machines, including the client workstations.
The client (viewing) workstation must have connectivity to the base virtual machine and to the recording server virtual machines. This section uses these utilities to verify connectivity:

netstat
tracert
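The transport-layer checks in this section can also be evaluated programmatically by parsing netstat -n output for established sessions to a given remote port. This is a sketch; the addresses in the sample are hypothetical documentation addresses (RFC 5737), and the real base machine and recording server addresses are deployment specific.

```python
def established_peers(netstat_output, remote_port):
    """Return remote IP addresses that hold an ESTABLISHED TCP session to
    the given remote port, parsed from Windows `netstat -n` output."""
    peers = set()
    for line in netstat_output.splitlines():
        tokens = line.split()
        if len(tokens) >= 4 and tokens[0] == "TCP" and tokens[3] == "ESTABLISHED":
            remote_ip, _, port = tokens[2].rpartition(":")
            if port == str(remote_port):
                peers.add(remote_ip)
    return peers

# Hypothetical sample using documentation addresses (RFC 5737).
SAMPLE = """Active Connections
  Proto  Local Address      Foreign Address    State
  TCP    192.0.2.50:51000   192.0.2.10:80      ESTABLISHED
  TCP    192.0.2.50:51001   192.0.2.20:7563    ESTABLISHED
  TCP    192.0.2.50:51002   192.0.2.30:445     TIME_WAIT"""

print(established_peers(SAMPLE, 80))  # → {'192.0.2.10'}
```

Checking port 80 identifies the base machine session, and checking port 7563 identifies the recording server sessions, mirroring the manual netstat filters used in this section.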

Netstat

From the client workstation, use netstat and tracert to verify connectivity to the base virtual machine and to a recording server.

C:\Users\NETAPP>netstat -n | findstr " "
TCP : :80 ESTABLISHED
TCP : :7563 ESTABLISHED
TCP : :7563 ESTABLISHED

Tracert

C:\Users\NETAPP>tracert -d
Tracing route to over a maximum of 30 hops
1 3 ms 9 ms 10 ms
2 <1 ms <1 ms <1 ms
3 <1 ms <1 ms <1 ms
4 <1 ms <1 ms <1 ms
5 <1 ms <1 ms <1 ms
Trace complete.

15.13 Performance Monitoring of ESXi

This section assumes that ESXi has been installed and that the VMS software is installed and operational. The example output in this section illustrates how to monitor individual interfaces to observe the data rates and the load sharing across physical links. Esxtop also reports the traffic from the virtual switch to each virtual machine. This provides an easy reference for how equally the video traffic load is distributed, on a recording server by recording server basis.

Esxtop

Esxtop is started from the ESXi shell prompt. Open an SSH client and log in to the ESXi hypervisor.

~ # esxtop

Enter h for the interactive help display. In the tested configuration, memory and CPU utilization should not be a limiting factor for this solution. From a performance verification standpoint, the network traffic should be distributed over at least two physical interfaces. This can be verified with the esxtop n (network) display. Additionally, this display can be used to verify the IP packets received by each of the virtual machine instances. The offered load should be distributed as equally as practical. This is illustrated in Figure 48.
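The esxtop rates observed in this section can be sanity-checked against the rule of thumb used throughout this document: the aggregate ingress load should approximately equal the average per-camera data rate multiplied by the number of cameras. A minimal sketch, with hypothetical camera counts, rates, and tolerance:

```python
def expected_aggregate_mbps(camera_count, avg_camera_mbps):
    """Aggregate ingress load: average per-camera bit rate times camera count."""
    return camera_count * avg_camera_mbps

def within_tolerance(measured_mbps, expected_mbps, tolerance=0.15):
    """True when the measured rate is within +/- tolerance of the expectation.
    The 15% default is an arbitrary starting point, not a solution requirement."""
    return abs(measured_mbps - expected_mbps) <= tolerance * expected_mbps

# Hypothetical deployment: 300 cameras averaging 6 Mbps each.
expected = expected_aggregate_mbps(300, 6.0)
print(expected)                            # → 1800.0
print(within_tolerance(1750.0, expected))  # sum of both vmnic MbRX/s rates
```

The measured value would be the sum of the MbRX/s columns for the video ingress vmnics in esxtop; a large deviation suggests cameras that are offline, misdirected streams, or an inaccurate per-camera rate estimate.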

Figure 48) esxtop network statistics.

In the previous screenshot, vmnic3 and vmnic5 have a relatively equal distribution of the ingress video traffic (MbRX/s), and together they account for the total ingress video traffic to the physical machine. Additionally, this display reports the respective data rate of ingress video traffic to each virtual machine. The four virtual machines, RACK-SVR-45, RACK-SVR-46, RACK-SVR-47, and RACK-SVR-48, receive ingress video traffic ranging upward from 43.08 MbRX/s. Because observed data rates normally fluctuate between iterations, this display can be used to verify the distribution of load across the recording servers in the implementation. The physical-security integrator can use this information to balance the offered load across all recording servers by moving cameras between servers.

15.14 Verify Cisco Nexus 3048 Switch Load-Balance Configuration

This section assumes that the supporting hardware and software have been implemented and that the VMS software is installed and operational. This section illustrates the following:

Show the PortChannel load-balance configuration
Configure the PortChannel load-balance algorithm
Verify the PortChannel load-balance distribution
Virtual PortChannel caveats

Show PortChannel Load-Balance

Log in to each Cisco Nexus 3048 switch. The load-balancing configuration can be verified by using the show port-channel load-balance command.

VSS # show port-channel load-balance

Port Channel Load-Balancing Configuration:
System: source-dest-ip
Port Channel Load-Balancing Addresses Used Per-Protocol:
Non-IP: source-dest-mac
IP: source-dest-ip

The default value, source-dest-ip, is a recommended initial value. This default should provide a reasonable degree of load sharing, because hundreds of networked video cameras (each with a unique IP address) will be streaming video to up to four video recording servers, each with its own IP address.

Configure PortChannel Load-Balance

The port-channel load-balance ethernet global configuration command can be used to change the default configured value on the switch.

VSS (config)# port-channel load-balance ethernet ?
destination-ip Destination IP address
destination-mac Destination MAC address
destination-port Destination TCP/UDP port
source-dest-ip Source & Destination IP address
source-dest-mac Source & Destination MAC address
source-dest-port Source & Destination TCP/UDP port
source-ip Source IP address
source-mac Source MAC address
source-port Source TCP/UDP port

Verify PortChannel Load-Balance

Verify that the configured value provides an acceptable degree of load sharing over all member links in the PortChannel on each switch.
To verify the degree of load sharing on the Cisco Nexus 3048 switch, log in to the switch and issue the show port-channel traffic command for the PortChannel interface on both switches:

VSS # show port-channel traffic interface port-channel 1
ChanId Port Rx-Ucst Tx-Ucst Rx-Mcst Tx-Mcst Rx-Bcst Tx-Bcst
1 Eth1/ % 74.38% 39.39% 71.88% 22.35% 55.33%
1 Eth1/ % 25.61% 60.60% 28.11% 77.64% 44.66%

VSS # show port-channel traffic interface port-channel 2
ChanId Port Rx-Ucst Tx-Ucst Rx-Mcst Tx-Mcst Rx-Bcst Tx-Bcst
2 Eth1/ % 0.85% 40.21% 22.54% 72.01% 32.30%
2 Eth1/ % 99.14% 59.78% 77.45% 27.98% 67.69%

VSS # show port-channel traffic interface port-channel 3
ChanId Port Rx-Ucst Tx-Ucst Rx-Mcst Tx-Mcst Rx-Bcst Tx-Bcst
3 Eth1/ % 57.50% 73.22% 40.13% 83.07% 69.25%
3 Eth1/ % 42.49% 26.77% 59.86% 16.92% 30.74%

VSS # show port-channel traffic interface port-channel 4
ChanId Port Rx-Ucst Tx-Ucst Rx-Mcst Tx-Mcst Rx-Bcst Tx-Bcst
4 Eth1/ % 41.83% 50.92% 22.38% 3.34% 55.41%
4 Eth1/ % 58.16% 49.07% 77.61% 96.65% 44.58%

Note: This command reports the percentage of traffic, not the observed data rate, on the member links.

Virtual PortChannel Caveats

The vPC feature connects two of the four member links on switch 1 and the remaining two member links on switch 2. A caveat of vPC is that ingress video traffic might not be equally distributed over the four member links. If the video traffic is transmitted from the network core/distribution layer to switch 1, the two

member links for each PortChannel will be used on switch 1 for video traffic to the respective server. The vPC peer link is not used for load balancing across the two member links on the second switch. This behavior is illustrated by showing the data rate on both switches for an individual PortChannel. In the following example, the show interface port-channel command is issued for PortChannel 4, and the aggregated link to Server 4 is shown as follows.

VSS # show interface port-channel 4 | include Members|rate
Members in this channel: Eth1/29, Eth1/41
30 seconds input rate bits/sec, bytes/sec, 913 packets/sec
30 seconds output rate bits/sec, bytes/sec, packets/sec
input rate 9.59 Mbps, 837 pps; output rate Mbps, Kpps

VSS # show interface port-channel 4 | include Members|rate
Members in this channel: Eth1/29, Eth1/41
30 seconds input rate bits/sec, bytes/sec, 414 packets/sec
30 seconds output rate bits/sec, 8743 bytes/sec, 110 packets/sec
input rate 3.52 Mbps, 372 pps; output rate Kbps, 99 pps

Note: The majority of the traffic for Server 4 uses the two member links of PortChannel 4 on switch 1: 19,373 packets/sec versus 110 packets/sec on switch 2.

Verify NTFS Cluster Size

This section assumes that the E-Series volume is created and mapped to the physical host, Windows 2008 R2 is installed, and the volumes (LUNs) are online and formatted. Refer to the Microsoft document Optimizing NTFS for more information.

Fsutil

To verify the cluster size of a volume (LUN) on E-Series (assuming drive letter E:\), log in to each recording server, open a command window, and issue the fsutil command.

fsutil fsinfo ntfsinfo e:

The Bytes Per Cluster value reported should be 65,536 (64 kilobytes).

16 Network and System Topology and Configuration Files

This section contains the configuration files that were used during the video surveillance storage solution performance and verification evaluation. The topology is shown in Figure 49.
There are sample configuration excerpts for the following devices:

E-Series storage array
Cisco Nexus and Cisco Catalyst switches
Axis virtual camera simulator
Windows Server

The solution network and system topology consists of a core/distribution layer 2/layer 3 network using Cisco Nexus 3048 and Cisco Catalyst 4948 switches. The layer 3 routing protocol is Enhanced IGRP (EIGRP), and the layer 2 spanning tree protocol is Rapid Spanning Tree Protocol (802.1w), or RSTP. The topology shown here implements a routed access layer to both the top-of-rack Cisco Nexus 3048 switches and the Catalyst 3560 access-layer switches. The Axis virtual camera simulator servers are attached to VLAN 2012. The Axis cameras generating the live video that feeds the simulators are attached to an access-layer Catalyst 3560 switch.

Figure 49) Solution network and system topology.

16.1 E-Series Storage Array

This section provides sample configuration files and storage array profiles for the E-Series arrays used in validation testing.

E5460

This is a sample configuration from an E5460 used in solution testing. The host interface to this array is Fibre Channel. This configuration includes traditional volume groups and DDPs. Following is an excerpt summary of the configuration showing three volumes configured for use by the physical host stlc200m2-7. This host supports one virtual machine running a Milestone XProtect recording server.

DISK POOLS
Name Status Usable Capacity Used Capacity Free Capacity Preservation Capacity
DP_ARCHIVE_90 Optimal TB TB 8, GB 5, GB (2 Drives)

VOLUME GROUPS
Name Status Usable Capacity Used Capacity Free Capacity RAID Level
VG_ARCHIVE_91 Optimal TB TB MB 6
VG_LIVE_90 Optimal 2, GB 2, GB MB 1

STANDARD VOLUMES
Name Status Capacity Accessible by Source
VOL_ARCHIVE_90 Optimal TB Host stlc200m2-7 Disk Pool DP_ARCHIVE_90
VOL_ARCHIVE_91 Optimal TB Host stlc200m2-7 Volume Group VG_ARCHIVE_91
VOL_LIVE_90 Optimal 2, GB Host stlc200m2-7 Volume Group VG_LIVE_90

The configuration was created from the Array Management window by selecting the Storage Array -> Configuration -> Save option.

// Logical configuration information from Storage Array stle5460-7_8.
// Saved on March 27, 2013
// Firmware package version for Storage Array stle5460-7_8 =
// NVSRAM package version for Storage Array stle5460-7_8 = N DB2
//on error stop;
// Uncomment the two lines below to delete the existing configuration.
//show "Deleting the existing configuration.";
//clear storagearray configuration;
// Storage Array global logical configuration script commands
show "Setting the Storage Array user label to stle5460-7_8.";
set storagearray userlabel="stle5460-7_8";
show "Setting the Storage Array media scan rate to 30.";
set storagearray mediascanrate=30;
// Uncomment the three lines below to remove the default volume (if exists). NOTE: Default volume name is always = "Unnamed".
//on error continue;
//show "Deleting the default volume created during the removal of the existing configuration.";
//delete volume["unnamed"] removevolumegroup=true;
//on error stop;
// Copies the hot spare settings
// NOTE: These statements are wrapped in on-error continue and on-error stop statements to
// account for minor differences in capacity from the drive of the Storage Array on which the
// configuration was saved to that of the drives on which the configuration will be copied.
show "Setting the Storage Array cache block size to 32.";
set storagearray cacheblocksize=32;
show "Setting the Storage Array to begin cache flush at 80% full.";
set storagearray cacheflushstart=80;
show "Setting the Storage Array to end cache flush at 80% full.";
set storagearray cacheflushstop=80;
// Creating Host Topology
show "Creating Host stlc200m2-7 with Host Type Index 10.";
// This Host Type Index corresponds to Type VMWare
create host userlabel="stlc200m2-7" hosttype=10;
show "Creating Host Port stlc200m2-7-1 on Host stlc200m2-7 with WWN ff3de382 and with interfacetype FC.";
create hostport host="stlc200m2-7" userlabel="stlc200m2-7-1" identifier=" ff3de382" interfacetype=fc;
show "Creating Host Port stlc200m2-7-2 on Host stlc200m2-7 with WWN ff3de383 and with interfacetype FC.";
create hostport host="stlc200m2-7" userlabel="stlc200m2-7-2" identifier=" ff3de383" interfacetype=fc;
show "Creating Volume Group VG_LIVE_90, RAID 1.";
//This command creates volume group <VG_LIVE_90>.
create volumegroup drives=(99,3,1 99,1,1) RAIDLevel=1 userlabel="vg_live_90" securitytype=capable dataassurance=none;
show "Creating volume VOL_LIVE_90 on volume group VG_LIVE_90.";
//This command creates volume <VOL_LIVE_90> on volume group <VG_LIVE_90>.
create volume volumegroup="vg_live_90" userlabel="vol_live_90" owner=a segmentsize=128 dsspreallocate=false dataassurance=none mapping=none;
show "Setting additional attributes for volume VOL_LIVE_90.";
// Configuration settings that can not be set during Volume creation.
set volume["vol_live_90"] cacheflushmodifier=10;
set volume["vol_live_90"] cachewithoutbatteryenabled=false;

set volume["vol_live_90"] mirrorenabled=false;
set volume["vol_live_90"] readcacheenabled=true;
set volume["vol_live_90"] writecacheenabled=true;
set volume["vol_live_90"] mediascanenabled=true;
set volume["vol_live_90"] redundancycheckenabled=true;
set volume["vol_live_90"] cachereadprefetch=true;
set volume["vol_live_90"] modificationpriority=lowest;
set volume["vol_live_90"] prereadredundancycheck=false;
show "Creating Disk Pool DP_ARCHIVE_90.";
//This command creates disk pool <DP_ARCHIVE_90>.
create diskpool drives=(99,1,2 99,3,2 99,2,5 99,5,2 99,1,3 99,3,3 99,2,6 99,1,4 99,3,4 99,2,7 99,1,5 99,3,5 99,2,9 99,1,6 99,3,6 99,2,10 99,1,7 99,3,8 99,2,11 99,1,8) userlabel="dp_archive_90" securitytype=capable dataassurance=none warningthreshold=85 criticalthreshold=95 criticalpriority=highest degradedpriority=high backgroundpriority=low;
show "Setting the reserved drive count to 2.";
set diskpool ["DP_ARCHIVE_90"] reserveddrivecount=2;
show "Creating volume VOL_ARCHIVE_90 on disk pool DP_ARCHIVE_90.";
//This command creates volume <VOL_ARCHIVE_90> on disk pool <DP_ARCHIVE_90>.
create volume diskpool="dp_archive_90" userlabel="vol_archive_90" owner=b capacity= Bytes dataassurance=none mapping=none;
show "Setting additional attributes for volume VOL_ARCHIVE_90.";
// Configuration settings that can not be set during Volume creation.
set volume["vol_archive_90"] cacheflushmodifier=10;
set volume["vol_archive_90"] cachewithoutbatteryenabled=false;
set volume["vol_archive_90"] mirrorenabled=false;
set volume["vol_archive_90"] readcacheenabled=true;
set volume["vol_archive_90"] writecacheenabled=true;
set volume["vol_archive_90"] mediascanenabled=true;
set volume["vol_archive_90"] redundancycheckenabled=true;
set volume["vol_archive_90"] cachereadprefetch=true;
set volume["vol_archive_90"] modificationpriority=lowest;
show "Creating Volume Group VG_ARCHIVE_91, RAID 6.";
//This command creates volume group <VG_ARCHIVE_91>.
create volumegroup drives=(99,1,9 99,3,9 99,1,10 99,3,10 99,1,11 99,3,11 99,1,12 99,3,12) RAIDLevel=6 userlabel="vg_archive_91" securitytype=capable dataassurance=none;
show "Creating volume VOL_ARCHIVE_91 on volume group VG_ARCHIVE_91.";
//This command creates volume <VOL_ARCHIVE_91> on volume group <VG_ARCHIVE_91>.
create volume volumegroup="vg_archive_91" userlabel="vol_archive_91" owner=a segmentsize=128 dsspreallocate=false dataassurance=none mapping=none;
show "Setting additional attributes for volume VOL_ARCHIVE_91.";
// Configuration settings that can not be set during Volume creation.
set volume["vol_archive_91"] cacheflushmodifier=10;
set volume["vol_archive_91"] cachewithoutbatteryenabled=false;
set volume["vol_archive_91"] mirrorenabled=false;
set volume["vol_archive_91"] readcacheenabled=true;
set volume["vol_archive_91"] writecacheenabled=true;
set volume["vol_archive_91"] mediascanenabled=true;
set volume["vol_archive_91"] redundancycheckenabled=true;
set volume["vol_archive_91"] cachereadprefetch=true;
set volume["vol_archive_91"] modificationpriority=lowest;
set volume["vol_archive_91"] prereadredundancycheck=false;
// Creating Volume-To-LUN Mappings
show "Creating Volume-to-LUN Mapping for Volume VOL_ARCHIVE_90 to LUN 9 under Host stlc200m2-7.";
set volume ["VOL_ARCHIVE_90"] logicalunitnumber=9 host="stlc200m2-7";
show "Creating Volume-to-LUN Mapping for Volume VOL_LIVE_90 to LUN 19 under Host stlc200m2-7.";
set volume ["VOL_LIVE_90"] logicalunitnumber=19 host="stlc200m2-7";
show "Creating Volume-to-LUN Mapping for Volume VOL_ARCHIVE_91 to LUN 91 under Host stlc200m2-7.";
set volume ["VOL_ARCHIVE_91"] logicalunitnumber=91 host="stlc200m2-7";
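The script above creates VG_ARCHIVE_91 from eight drives at RAID 6 and VG_LIVE_90 from two drives at RAID 1, each with a 128KB segment size. The full-stripe width implied by those choices can be computed with a general-purpose sketch (this is standard RAID arithmetic, not a SANtricity-specific calculation, and full_stripe_kib is a hypothetical helper):

```python
def full_stripe_kib(n_drives, raid_level, segment_kib):
    # Data segments per stripe: RAID 6 dedicates two drives' worth of
    # capacity per stripe to parity; RAID 1/10 mirrors half the drives.
    if raid_level == 6:
        data_drives = n_drives - 2
    elif raid_level in (1, 10):
        data_drives = n_drives // 2
    else:
        raise ValueError("unsupported RAID level in this sketch")
    return data_drives * segment_kib

# VG_ARCHIVE_91: 8 drives, RAID 6, 128KB segments
print(full_stripe_kib(8, 6, 128))  # 768 (KB per full stripe)
# VG_LIVE_90: 2 drives, RAID 1, 128KB segments
print(full_stripe_kib(2, 1, 128))  # 128
```

Large sequential video writes that align with the full-stripe width allow the controller to avoid read-modify-write parity updates, which is one reason segment size matters for a write-heavy surveillance workload.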

E2660

This is an excerpt of a storage array profile for an E2660 used in solution validation testing. This storage array is SAS attached to Cisco UCS C220-M3 servers.

Storage array profile (extract):

PROFILE FOR STORAGE ARRAY: stle _34

CACHE SETTINGS
Start cache flushing at: 80%
Stop cache flushing at: 80%
Cache block size: 32 KB
Media scan frequency: 30 days
Failover alert delay: 5 minutes

STORAGE SUMMARY
Volume groups: 15
RAID 1 Volume Groups: 4 Volumes: 11
RAID 6 Volume Groups: 11 Volumes: 14

HOST MAPPINGS SUMMARY
Default host OS: Windows (Host OS index 1)
Mapped volumes: 25
Unmapped volumes: 0

HARDWARE SUMMARY
Trays: 3
Controllers: 2
Redundancy mode: Duplex (dual controllers)
Drives: 180

FIRMWARE INVENTORY
SANtricity ES AMW Version: G0.32
Storage Array
Storage Array Name: stle _34
Current Package Version:
Current NVSRAM Version: N26X DB2
Controllers
Location: Tray 99, Slot A
Current Package Version:
Current NVSRAM Version: N26X DB2
Location: Tray 99, Slot B
Current Package Version:
Current NVSRAM Version: N26X DB2

VOLUME GROUPS
Total Volume Groups: 15
Total Capacity: TB Usable, TB Used
Total Free Capacity: GB
Status: 15 Optimal, 0 Non Optimal

Name Status Usable Capacity Used Capacity Free Capacity RAID Level Drive/Media Type Volumes Secure Capable DA Capable
VG_ARCHIVE_1 Optimal TB TB MB 6 Serial Attached SCSI (SAS), Hard Disk Drive 2 Yes (Non Secure) Yes
VG_LIVE_1_2 Optimal 4, GB 4, GB GB 10 Serial Attached SCSI (SAS), Hard Disk Drive 4 Yes (Non Secure) Yes

DETAILS
Name: VG_ARCHIVE_1
Status: Optimal
Capacity: TB
Current owner: Controller in slot B
RAID level: 6

Drive media type: Hard Disk Drive
Drive interface type: Serial Attached SCSI (SAS)
Tray loss protection: No
Drawer Loss Protection: Yes
Data Assurance (DA) capable: Yes
DA enabled volume present: No
Total Volumes: 2
Standard volumes: 2
Repository volumes: 0
Free Capacity: MB

Name: VG_LIVE_1_2
Status: Optimal
Capacity: 4, GB
Current owner: Controller in slot A,B
RAID level: 10
Drive media type: Hard Disk Drive
Drive interface type: Serial Attached SCSI (SAS)
Tray loss protection: Yes
Drawer Loss Protection: Yes
Data Assurance (DA) capable: Yes
DA enabled volume present: No
Total Volumes: 4
Standard volumes: 4
Repository volumes: 0
Free Capacity: GB

DETAILS
Volume name: VOL_ARCHIVE_1
Volume status: Optimal
Thin provisioned: No
Capacity: TB
Volume world-wide identifier: 60:08:0e:50:00:2e:31:92:00:00:0a:2f:50:6d:8d:4a
Associated volume group: VG_ARCHIVE_1
RAID level: 6
LUN: 1
Accessible By: Host stlc220m3-10
Drive media type: Hard Disk Drive
Drive interface type: Serial Attached SCSI (SAS)
Preferred owner: Controller in slot B
Current owner: Controller in slot B
Segment size: 128 KB
Modification priority: Lowest
Read cache: Enabled
Write cache: Enabled
Write cache without batteries: Disabled
Write cache with mirroring: Disabled
Flush write cache after (in seconds):
Dynamic cache read prefetch: Disabled
Enable background media scan: Enabled
Media scan with redundancy check: Disabled
Pre-Read redundancy check: Disabled

Volume name: VOL_LIVE_1
Volume status: Optimal
Thin provisioned: No
Capacity: 1, GB
Volume world-wide identifier: 60:08:0e:50:00:2e:5a:64:00:00:03:d9:50:64:4b:fe
Associated volume group: VG_LIVE_1_2
RAID level: 10
LUN: 11
Accessible By: Host stlc220m3-10
Drive media type: Hard Disk Drive
Drive interface type: Serial Attached SCSI (SAS)
Preferred owner: Controller in slot B
Current owner: Controller in slot B
Segment size: 128 KB
Modification priority: Lowest
Read cache: Enabled
Write cache: Enabled
Write cache without batteries: Disabled

171 Write cache with mirroring: Disabled Flush write cache after (in seconds): Dynamic cache read prefetch: Disabled Enable background media scan: Enabled Media scan with redundancy check: Disabled Pre-Read redundancy check: Disabled 16.2 Cisco Nexus and Catalyst Switches These are sample configurations for both Cisco Nexus 3048 switches for use at the distributor facility for first article build (FAB) deployment. These configurations are supplied with the documentation bundle as text files and applied to the respective switch by using FTP. However, there is an open issue with configuring the Cisco Nexus switches with FTP. To circumvent the problem, it is preferable to use a terminal session and copy and paste the configuration into the respective switch. VSS This sample configuration is from the first top-of-rack Cisco Nexus 3048 switch.!command: show running-config!time: Fri Feb 22 12:09: version 5.0(3)U5(1a) feature telnet cfs eth distribute feature interface-vlan feature hsrp feature lacp feature vpc logging level interface-vlan 2 banner motd # UNAUTHORIZED ACCESS TO THIS NETWORK DEVICE IS PROHIBITED. You must have explicit permission to access or configure this device. All activities performed on this device are logged and violations of this policy may result in disciplinary action. # ip domain-lookup hostname VSS vrf context vpc_peer-keepalive vlan 1 vlan 2 name UNUSED_PORTS vlan 3 name NATIVE_VLAN vlan 7 name DEVICE_MANAGEMENT vlan 58 name vpc_keepalive vlan 2020 name VIDEO_INGRESS spanning-tree port type edge bpduguard default vpc domain 58 role priority 11 peer-keepalive destination source vrf vpc_peer-keepalive interface Vlan1 interface Vlan7 no shutdown description DEVICE_MANAGEMENT 171 Video Surveillance Solutions Using NetApp E-Series Storage

172 ip address /24 interface Vlan58 no shutdown description vpc_peer-keepalive vrf member vpc_peer-keepalive ip address /30 interface Vlan2020 no shutdown description VIDEO_INGRESS ip address /24 interface port-channel1 description Server 1 vpc 1 switchport access vlan 2020 spanning-tree port type edge spanning-tree bpduguard enable no negotiate auto interface port-channel2 description Server 2 vpc 2 switchport access vlan 2020 spanning-tree port type edge spanning-tree bpduguard enable no negotiate auto interface port-channel3 description Server 3 vpc 3 switchport access vlan 2020 spanning-tree port type edge spanning-tree bpduguard enable no negotiate auto interface port-channel4 description Server 4 vpc 4 switchport access vlan 2020 spanning-tree port type edge spanning-tree bpduguard enable no negotiate auto interface port-channel10 description L2 Portchannel to CORE switchport mode trunk vpc 10 switchport access vlan 3 switchport trunk native vlan 3 switchport trunk allowed vlan 3,7,2020 spanning-tree port type network no negotiate auto interface port-channel58 description vpc_peer-keepalive switchport access vlan 58 no negotiate auto interface port-channel59 description vpc peer link switchport mode trunk vpc peer-link switchport access vlan 3 switchport trunk native vlan 3 switchport trunk allowed vlan 3,7,2020 spanning-tree port type network no negotiate auto 172 Video Surveillance Solutions Using NetApp E-Series Storage

173 interface Ethernet1/1 description Server 1 - vmnic5 switchport access vlan 2020 spanning-tree port type edge spanning-tree bpduguard enable channel-group 1 interface Ethernet1/2 description vpc_peer-keepalive switchport access vlan 58 channel-group 58 mode active interface Ethernet1/3 switchport access vlan 2 interface Ethernet1/4 switchport access vlan 2 interface Ethernet1/5 description Server 3 - vmnic5 switchport access vlan 2020 spanning-tree port type edge spanning-tree bpduguard enable channel-group 3 interface Ethernet1/6 switchport access vlan 2 interface Ethernet1/7 switchport access vlan 2 interface Ethernet1/8 switchport access vlan 2 interface Ethernet1/9 switchport access vlan 2 interface Ethernet1/10 switchport access vlan 2 interface Ethernet1/11 switchport access vlan 2 interface Ethernet1/12 switchport access vlan 2 interface Ethernet1/13 description Server 1 - vmnic3 switchport access vlan 2020 spanning-tree port type edge spanning-tree bpduguard enable channel-group 1 interface Ethernet1/14 description E2660-A:DEVICE_MANAGEMENT switchport access vlan 7 spanning-tree port type edge spanning-tree bpduguard enable interface Ethernet1/15 switchport access vlan 2 interface Ethernet1/16 description SERVER 1 - CIMC:DEVICE_MANAGEMENT switchport access vlan 7 spanning-tree port type edge spanning-tree bpduguard enable interface Ethernet1/ Video Surveillance Solutions Using NetApp E-Series Storage

174 description Server 3 - vmnic3 switchport access vlan 2020 spanning-tree port type edge spanning-tree bpduguard enable channel-group 3 interface Ethernet1/18 description SERVER 1 - vmnic0:device_management switchport access vlan 7 spanning-tree port type edge spanning-tree bpduguard enable interface Ethernet1/19 switchport access vlan 2 interface Ethernet1/20 description SERVER 1 - vmnic1:device_management switchport access vlan 7 spanning-tree port type edge spanning-tree bpduguard enable interface Ethernet1/21 switchport access vlan 2 interface Ethernet1/22 description SERVER 3 - CIMC:DEVICE_MANAGEMENT switchport access vlan 7 spanning-tree port type edge spanning-tree bpduguard enable interface Ethernet1/23 switchport access vlan 2 interface Ethernet1/24 description SERVER 3 - vmnic0:device_management switchport access vlan 7 spanning-tree port type edge spanning-tree bpduguard enable interface Ethernet1/25 description Server 2 - vmnic5 switchport access vlan 2020 spanning-tree port type edge spanning-tree bpduguard enable channel-group 2 interface Ethernet1/26 description SERVER 3 - vmnic1:device_management switchport access vlan 7 spanning-tree port type edge spanning-tree bpduguard enable interface Ethernet1/27 switchport access vlan 2 interface Ethernet1/28 description DEVICE_MANAGEMENT switchport access vlan 7 spanning-tree port type edge spanning-tree bpduguard enable interface Ethernet1/29 description Server 4 - vmnic5 switchport access vlan 2020 spanning-tree port type edge spanning-tree bpduguard enable channel-group 4 interface Ethernet1/30 description DEVICE_MANAGEMENT switchport access vlan Video Surveillance Solutions Using NetApp E-Series Storage

175 spanning-tree port type edge spanning-tree bpduguard enable interface Ethernet1/31 switchport access vlan 2 interface Ethernet1/32 description DEVICE_MANAGEMENT switchport access vlan 7 spanning-tree port type edge spanning-tree bpduguard enable interface Ethernet1/33 switchport access vlan 2 interface Ethernet1/34 description DEVICE_MANAGEMENT switchport access vlan 7 spanning-tree port type edge spanning-tree bpduguard enable interface Ethernet1/35 switchport access vlan 2 interface Ethernet1/36 description DEVICE_MANAGEMENT switchport access vlan 7 spanning-tree port type edge spanning-tree bpduguard enable interface Ethernet1/37 description Server 2 - vmnic3 switchport access vlan 2020 spanning-tree port type edge spanning-tree bpduguard enable channel-group 2 interface Ethernet1/38 switchport access vlan 2 interface Ethernet1/39 switchport access vlan 2 interface Ethernet1/40 switchport access vlan 2 interface Ethernet1/41 description Server 4 - vmnic3 switchport access vlan 2020 spanning-tree port type edge spanning-tree bpduguard enable channel-group 4 interface Ethernet1/42 switchport access vlan 2 interface Ethernet1/43 switchport access vlan 2 interface Ethernet1/44 switchport access vlan 2 interface Ethernet1/45 switchport access vlan 2 interface Ethernet1/46 switchport access vlan 2 interface Ethernet1/47 switchport access vlan Video Surveillance Solutions Using NetApp E-Series Storage

176 interface Ethernet1/48 description vpc_peer-keepalive switchport access vlan 58 channel-group 58 mode active interface Ethernet1/49 description vpc peer link switchport mode trunk switchport access vlan 3 switchport trunk native vlan 3 switchport trunk allowed vlan 3,7,2020 spanning-tree port type network channel-group 59 mode active interface Ethernet1/50 description vpc peer link switchport mode trunk switchport access vlan 3 switchport trunk native vlan 3 switchport trunk allowed vlan 3,7,2020 spanning-tree port type network channel-group 59 mode active interface Ethernet1/52 description L2 UPLINK stl3048-loaner Eth1/49 switchport mode trunk switchport access vlan 3 switchport trunk native vlan 3 switchport trunk allowed vlan 3,7,2020 spanning-tree port type network channel-group 10 mode active clock timezone est -5 0 clock summer-time edt 2 Sun Mar 2:00 1 Sun Nov 2:00 60 line console line vty VSS This sample configuration is from the second top-of-rack Cisco Nexus 3048 switch.!command: show running-config!time: Fri Feb 22 12:12: version 5.0(3)U5(1a) feature telnet cfs eth distribute feature interface-vlan feature hsrp feature lacp feature vpc banner motd # UNAUTHORIZED ACCESS TO THIS NETWORK DEVICE IS PROHIBITED. You must have explicit permission to access or configure this device. All activities performed on this device are logged and violations of this policy may result in disciplinary action. # ip domain-lookup hostname VSS vrf context vpc_peer-keepalive vlan 1 vlan 2 name UNUSED_PORTS 176 Video Surveillance Solutions Using NetApp E-Series Storage

177 vlan 3 name NATIVE_VLAN vlan 7 name DEVICE_MANAGEMENT vlan 58 name vpc_keepalive vlan 2020 name VIDEO_INGRESS spanning-tree port type edge bpduguard default vpc domain 58 role priority 12 peer-keepalive destination source vrf vpc_peer-keepalive interface Vlan1 interface Vlan7 no shutdown description DEVICE_MANAGEMENT ip address /24 interface Vlan58 no shutdown description vpc_peer-keepalive vrf member vpc_peer-keepalive ip address /30 interface Vlan2020 no shutdown description VIDEO_INGRESS ip address /24 interface port-channel1 description Server 1 vpc 1 switchport access vlan 2020 spanning-tree port type edge spanning-tree bpduguard enable no negotiate auto interface port-channel2 description Server 2 vpc 2 switchport access vlan 2020 spanning-tree port type edge spanning-tree bpduguard enable no negotiate auto interface port-channel3 description Server 3 vpc 3 switchport access vlan 2020 spanning-tree port type edge spanning-tree bpduguard enable no negotiate auto interface port-channel4 description Server 4 vpc 4 switchport access vlan 2020 spanning-tree port type edge spanning-tree bpduguard enable no negotiate auto interface port-channel10 description L2 Portchannel to CORE switchport mode trunk vpc 10 switchport access vlan Video Surveillance Solutions Using NetApp E-Series Storage

178 switchport trunk native vlan 3 switchport trunk allowed vlan 3,7,2020 spanning-tree port type network no negotiate auto interface port-channel58 description vpc_peer-keepalive switchport access vlan 58 no negotiate auto interface port-channel59 description vpc peer link switchport mode trunk vpc peer-link switchport access vlan 3 switchport trunk native vlan 3 switchport trunk allowed vlan 3,7,2020 spanning-tree port type network no negotiate auto interface Ethernet1/1 description Server 1 - vmnic4 switchport access vlan 2020 spanning-tree port type edge spanning-tree bpduguard enable channel-group 1 interface Ethernet1/2 description vpc_peer-keepalive switchport access vlan 58 channel-group 58 mode active interface Ethernet1/3 switchport access vlan 2 interface Ethernet1/4 switchport access vlan 2 interface Ethernet1/5 description Server 3 - vmnic4 switchport access vlan 2020 spanning-tree port type edge spanning-tree bpduguard enable channel-group 3 interface Ethernet1/6 switchport access vlan 2 interface Ethernet1/7 switchport access vlan 2 interface Ethernet1/8 switchport access vlan 2 interface Ethernet1/9 switchport access vlan 2 interface Ethernet1/10 switchport access vlan 2 interface Ethernet1/11 switchport access vlan 2 interface Ethernet1/12 switchport access vlan 2 interface Ethernet1/13 description Server 1 - vmnic2 switchport access vlan 2020 spanning-tree port type edge spanning-tree bpduguard enable 178 Video Surveillance Solutions Using NetApp E-Series Storage

179 channel-group 1 interface Ethernet1/14 description E2660-B:DEVICE_MANAGEMENT switchport access vlan 7 spanning-tree port type edge interface Ethernet1/15 switchport access vlan 2 interface Ethernet1/16 description SERVER 2 - CIMC:DEVICE_MANAGEMENT switchport access vlan 7 spanning-tree port type edge spanning-tree bpduguard enable interface Ethernet1/17 description Server 3 - vmnic2 switchport access vlan 2020 spanning-tree port type edge spanning-tree bpduguard enable channel-group 3 interface Ethernet1/18 description SERVER 2 - vmnic0:device_management switchport access vlan 7 spanning-tree port type edge spanning-tree bpduguard enable interface Ethernet1/19 switchport access vlan 2 interface Ethernet1/20 description SERVER 2 - vmnic1:device_management switchport access vlan 7 spanning-tree port type edge spanning-tree bpduguard enable interface Ethernet1/21 switchport access vlan 2 interface Ethernet1/22 description SERVER 4 - CIMC:DEVICE_MANAGEMENT switchport access vlan 7 spanning-tree port type edge spanning-tree bpduguard enable interface Ethernet1/23 switchport access vlan 2 interface Ethernet1/24 description SERVER 4 - vmnic0:device_management switchport access vlan 7 spanning-tree port type edge spanning-tree bpduguard enable interface Ethernet1/25 description Server 2 - vmnic4 switchport access vlan 2020 spanning-tree port type edge spanning-tree bpduguard enable channel-group 2 interface Ethernet1/26 description SERVER 4 - vmnic1:device_management switchport access vlan 7 spanning-tree port type edge spanning-tree bpduguard enable interface Ethernet1/27 switchport access vlan Video Surveillance Solutions Using NetApp E-Series Storage

180 interface Ethernet1/28 description DEVICE_MANAGEMENT switchport access vlan 7 spanning-tree port type edge spanning-tree bpduguard enable interface Ethernet1/29 description Server 4 - vmnic4 switchport access vlan 2020 spanning-tree port type edge spanning-tree bpduguard enable channel-group 4 interface Ethernet1/30 description DEVICE_MANAGEMENT switchport access vlan 7 spanning-tree port type edge spanning-tree bpduguard enable interface Ethernet1/31 switchport access vlan 2 interface Ethernet1/32 description DEVICE_MANAGEMENT switchport access vlan 7 spanning-tree port type edge spanning-tree bpduguard enable interface Ethernet1/33 switchport access vlan 2 interface Ethernet1/34 description DEVICE_MANAGEMENT switchport access vlan 7 spanning-tree port type edge spanning-tree bpduguard enable interface Ethernet1/35 switchport access vlan 2 interface Ethernet1/36 description DEVICE_MANAGEMENT switchport access vlan 7 spanning-tree port type edge spanning-tree bpduguard enable interface Ethernet1/37 description Server 2 - vmnic2 switchport access vlan 2020 spanning-tree port type edge spanning-tree bpduguard enable channel-group 2 interface Ethernet1/38 switchport access vlan 2 interface Ethernet1/39 switchport access vlan 2 interface Ethernet1/40 switchport access vlan 2 interface Ethernet1/41 description Server 4 - vmnic2 switchport access vlan 2020 spanning-tree port type edge spanning-tree bpduguard enable channel-group 4 interface Ethernet1/ Video Surveillance Solutions Using NetApp E-Series Storage

181 switchport access vlan 2 interface Ethernet1/43 switchport access vlan 2 interface Ethernet1/44 switchport access vlan 2 interface Ethernet1/45 switchport access vlan 2 interface Ethernet1/46 switchport access vlan 2 interface Ethernet1/47 switchport access vlan 2 interface Ethernet1/48 description vpc_peer-keepalive switchport access vlan 58 channel-group 58 mode active interface Ethernet1/49 description vpc peer link switchport mode trunk switchport access vlan 3 switchport trunk native vlan 3 switchport trunk allowed vlan 3,7,2020 channel-group 59 mode active interface Ethernet1/50 description vpc peer link switchport mode trunk switchport access vlan 3 switchport trunk native vlan 3 switchport trunk allowed vlan 3,7,2020 channel-group 59 mode active interface Ethernet1/52 description L2 UPLINK stl3048-loaner Eth1/50 switchport mode trunk switchport access vlan 3 switchport trunk native vlan 3 switchport trunk allowed vlan 3,7,2020 spanning-tree port type network channel-group 10 mode active clock timezone est -5 0 clock summer-time edt 2 Sun Mar 2:00 1 Sun Nov 2:00 60 line console line vty Cisco Catalyst 4948 Switch This is an abbreviated sample configuration from the Catalyst switch supporting video ingress to the Cisco UCS C220-M2 that is Fibre Channel attached to an E5460 storage array.! version 12.2 no service pad service timestamps debug datetime msec localtime show-timezone service timestamps log datetime msec localtime show-timezone service password-encryption service compress-config! hostname stl4948-f5-2! boot-start-marker 181 Video Surveillance Solutions Using NetApp E-Series Storage

boot system flash bootflash:cat4500-entservicesk9-mz sg1.bin;bootflash:
boot-end-marker
!
clock timezone est -5
clock summer-time edt recurring
!
ip multicast-routing
!
port-channel load-balance src-dst-mac
!
spanning-tree mode rapid-pvst
!
vlan 2
 name UNUSED_PORTS
!
vlan 3
 name NATIVE
!
vlan 2012
 name SIMULATORS
!
interface Loopback0
 description RP
 ip address
 ip pim sparse-mode
!
interface Port-channel7
 description PortChannel to stlec200m2-7
 switchport
 switchport access vlan 2012
 switchport mode access
 load-interval 30
 spanning-tree portfast
 spanning-tree bpduguard enable
!
interface GigabitEthernet1/29
 description stlc200m2-7
 switchport access vlan 2012
 switchport mode access
 load-interval 30
 channel-group 7 mode on
!
interface GigabitEthernet1/30
 description stlc200m2-7
 switchport access vlan 2012
 switchport mode access
 load-interval 30
 channel-group 7 mode on
!
interface GigabitEthernet1/31
 description stlc200m2-7
 switchport access vlan 2012
 switchport mode access
 load-interval 30
 channel-group 7 mode on
!
interface Vlan2012
 description Simulators
 mtu 9198
 ip address
 ip pim sparse-mode
 load-interval 30
 standby 12 ip
 standby 12 priority 60
 standby 13 ip
 standby 13 priority 60
!
router eigrp 64
 network
 passive-interface default
 no passive-interface GigabitEthernet1/

 no passive-interface GigabitEthernet1/45
 no passive-interface GigabitEthernet1/46
 no passive-interface GigabitEthernet1/47
 no passive-interface GigabitEthernet1/48
 eigrp router-id
!
ip pim send-rp-announce Loopback0 scope 32 group-list IPVS_IPmc_Groups
ip pim send-rp-discovery Loopback0 scope 5
!
ip access-list standard IPVS_IPmc_Groups
 permit
!
banner exec ^C
UNAUTHORIZED ACCESS TO THIS NETWORK DEVICE IS PROHIBITED. You must have explicit permission to access or configure this device. All activities performed on this device are logged, and violations of this policy may result in disciplinary action.
^C
banner motd ^C = == === == = ^C
ntp master 11
ntp update-calendar
ntp server vrf mgmtvrf
end

16.3 Axis Virtual Camera

Axis Communications is a technology partner of NetApp and has provided the use of the Axis virtual camera simulator. The simulator runs on Windows 2008 R2 on physical machines. It is configured to connect to a physical camera, accept the input video feed, and replicate that feed for a given number of virtual cameras. In this validation deployment, 11 simulator servers were used, each presenting 64 virtual cameras. One instance of the Axis virtual camera simulator is shown in Figure 50. The simulator is configured for a resolution of 1920x1080 at 30 frames per second, using H.264 over UDP/RTP transport.
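The camera count and stream parameters above define the ingest load that the recording servers and the E-Series array must sustain. As a rough sketch only — the 4 Mbit/s per-stream bitrate below is an assumed figure for a 1080p30 H.264 stream, not a measurement from this validation — the aggregate rate and daily storage consumption can be estimated as follows:

```python
def aggregate_ingest_mb_s(cameras, mbit_per_camera):
    """Aggregate ingest rate in MB/s (decimal units, 8 bits per byte)."""
    return cameras * mbit_per_camera / 8

def storage_per_day_tb(mb_per_s):
    """Storage consumed per day in decimal terabytes (86,400 seconds/day)."""
    return mb_per_s * 86400 / 1_000_000

cameras = 64 * 11                               # 11 simulators x 64 virtual cameras each
rate = aggregate_ingest_mb_s(cameras, 4.0)      # assumed 4 Mbit/s per stream
print(f"{rate:.0f} MB/s aggregate")             # -> 352 MB/s aggregate
print(f"{storage_per_day_tb(rate):.1f} TB/day") # -> 30.4 TB/day
```

Actual retention sizing should use the bitrates measured for the deployed cameras, as described in the sizing sections of this report.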

Figure 50) Axis virtual camera.

Note: The live input (the video feed from the camera), the number of video feeds output to the recording server, and the aggregate data rate are shown in the lower-right corner of the window.

Windows Server Sample Configuration

Cisco UCS-C220-M2, ESXi, Fibre Channel:

C:\Users\Administrator\Desktop>show_clock
The current time is 3/26/2013 1:55:49 PM.

C:\Program Files (x86)\storagemanager\util>smdevices
SANtricity ES Storage Manager Devices, Version
Built Fri Feb 24 04:53:27 CST 2012
Copyright (C) NetApp, Inc. All Rights Reserved.

\\.\PHYSICALDRIVE1 [Storage Array stle5460-7_8, Volume VOL_ARCHIVE_90, LUN 9,
Volume ID <60080e500 01f738c bdf20f>,
Preferred Path (Controller-B): Owning controller - Active/Optimized]
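When checking many recording servers, output like the smdevices listing above can be screened programmatically for each volume's preferred controller. The sketch below is illustrative only: the regular expression is keyed to the line format shown here, and the field layout may differ between SANtricity releases.

```python
import re

# Pattern keyed to the smdevices line format shown above (illustrative).
PATTERN = re.compile(
    r"Volume (?P<volume>\S+), LUN (?P<lun>\d+).*?"
    r"Preferred Path \((?P<controller>[^)]+)\)"
)

def parse_smdevices(line):
    """Return (volume, LUN, preferred controller) from one smdevices line, or None."""
    m = PATTERN.search(line)
    if not m:
        return None
    return m.group("volume"), int(m.group("lun")), m.group("controller")

sample = ("\\\\.\\PHYSICALDRIVE1 [Storage Array stle5460-7_8, Volume VOL_ARCHIVE_90, "
          "LUN 9, Preferred Path (Controller-B): Owning controller - Active/Optimized]")
print(parse_smdevices(sample))  # -> ('VOL_ARCHIVE_90', 9, 'Controller-B')
```

A volume whose owning controller does not match its preferred path indicates a failover or misconfigured multipath state worth investigating.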


More information

DELL EMC READY BUNDLE FOR VIRTUALIZATION WITH VMWARE AND FIBRE CHANNEL INFRASTRUCTURE

DELL EMC READY BUNDLE FOR VIRTUALIZATION WITH VMWARE AND FIBRE CHANNEL INFRASTRUCTURE DELL EMC READY BUNDLE FOR VIRTUALIZATION WITH VMWARE AND FIBRE CHANNEL INFRASTRUCTURE Design Guide APRIL 0 The information in this publication is provided as is. Dell Inc. makes no representations or warranties

More information

Configuring and Managing Virtual Storage

Configuring and Managing Virtual Storage Configuring and Managing Virtual Storage Module 6 You Are Here Course Introduction Introduction to Virtualization Creating Virtual Machines VMware vcenter Server Configuring and Managing Virtual Networks

More information

Xcellis Technical Overview: A deep dive into the latest hardware designed for StorNext 5

Xcellis Technical Overview: A deep dive into the latest hardware designed for StorNext 5 TECHNOLOGY BRIEF Xcellis Technical Overview: A deep dive into the latest hardware designed for StorNext 5 ABSTRACT Xcellis represents the culmination of over 15 years of file system and data management

More information

ECONOMICAL, STORAGE PURPOSE-BUILT FOR THE EMERGING DATA CENTERS. By George Crump

ECONOMICAL, STORAGE PURPOSE-BUILT FOR THE EMERGING DATA CENTERS. By George Crump ECONOMICAL, STORAGE PURPOSE-BUILT FOR THE EMERGING DATA CENTERS By George Crump Economical, Storage Purpose-Built for the Emerging Data Centers Most small, growing businesses start as a collection of laptops

More information

Best Practices for Deploying a Mixed 1Gb/10Gb Ethernet SAN using Dell Storage PS Series Arrays

Best Practices for Deploying a Mixed 1Gb/10Gb Ethernet SAN using Dell Storage PS Series Arrays Best Practices for Deploying a Mixed 1Gb/10Gb Ethernet SAN using Dell Storage PS Series Arrays Dell EMC Engineering December 2016 A Dell Best Practices Guide Revisions Date March 2011 Description Initial

More information

VMware vsphere 5.5 Advanced Administration

VMware vsphere 5.5 Advanced Administration Format 4-day instructor led training Course Books 630+ pg Study Guide with slide notes 180+ pg Lab Guide with detailed steps for completing labs vsphere Version This class covers VMware vsphere 5.5 including

More information

Cisco UCS-Mini Solution with StorMagic SvSAN with VMware vsphere 5.5

Cisco UCS-Mini Solution with StorMagic SvSAN with VMware vsphere 5.5 White Paper Cisco UCS-Mini Solution with StorMagic SvSAN with VMware vsphere 5.5 2015 Cisco and/or its affiliates. All rights reserved. This document is Cisco Public. Page 1 of 18 Introduction Executive

More information

Video Surveillance EMC Storage with Genetec Security Center

Video Surveillance EMC Storage with Genetec Security Center Video Surveillance EMC Storage with Genetec Security Center Sizing Guide H13495 02 Copyright 2014-2016 EMC Corporation. All rights reserved. Published in the USA. Published August 2016 EMC believes the

More information

vstart 50 VMware vsphere Solution Overview

vstart 50 VMware vsphere Solution Overview vstart 50 VMware vsphere Solution Overview Release 1.3 for 12 th Generation Servers Dell Virtualization Solutions Engineering Revision: A00 March 2012 THIS WHITE PAPER IS FOR INFORMATIONAL PURPOSES ONLY,

More information

RAIDIX Data Storage Solution. Data Storage for a VMware Virtualization Cluster

RAIDIX Data Storage Solution. Data Storage for a VMware Virtualization Cluster RAIDIX Data Storage Solution Data Storage for a VMware Virtualization Cluster 2017 Contents Synopsis... 2 Introduction... 3 RAIDIX Architecture for Virtualization... 4 Technical Characteristics... 7 Sample

More information

Storageflex HA3969 High-Density Storage: Key Design Features and Hybrid Connectivity Benefits. White Paper

Storageflex HA3969 High-Density Storage: Key Design Features and Hybrid Connectivity Benefits. White Paper Storageflex HA3969 High-Density Storage: Key Design Features and Hybrid Connectivity Benefits White Paper Abstract This white paper introduces the key design features and hybrid FC/iSCSI connectivity benefits

More information

Next Generation Computing Architectures for Cloud Scale Applications

Next Generation Computing Architectures for Cloud Scale Applications Next Generation Computing Architectures for Cloud Scale Applications Steve McQuerry, CCIE #6108, Manager Technical Marketing #clmel Agenda Introduction Cloud Scale Architectures System Link Technology

More information

3.1. Storage. Direct Attached Storage (DAS)

3.1. Storage. Direct Attached Storage (DAS) 3.1. Storage Data storage and access is a primary function of a network and selection of the right storage strategy is critical. The following table describes the options for server and network storage.

More information

Accelerating Microsoft SQL Server 2016 Performance With Dell EMC PowerEdge R740

Accelerating Microsoft SQL Server 2016 Performance With Dell EMC PowerEdge R740 Accelerating Microsoft SQL Server 2016 Performance With Dell EMC PowerEdge R740 A performance study of 14 th generation Dell EMC PowerEdge servers for Microsoft SQL Server Dell EMC Engineering September

More information

Video Surveillance EMC Storage with LenSec Perspective VMS

Video Surveillance EMC Storage with LenSec Perspective VMS Video Surveillance EMC Storage with LenSec Perspective VMS Version 1.0 Functional Verification Guide H14258 Copyright 2015 EMC Corporation. All rights reserved. Published in USA. Published June, 2015 EMC

More information