Cisco UCS S3260 Storage Server and Red Hat Ceph Storage Performance


White Paper

Cisco UCS S3260 Storage Server and Red Hat Ceph Storage Performance

This document provides a performance analysis of Red Hat Ceph Storage using Cisco UCS S3260 Storage Servers, Cisco UCS C220 M4 Rack Servers, Cisco UCS 6300 Series Fabric Interconnects, and Cisco UCS Manager.

April 2017

© 2017 Cisco and/or its affiliates. All rights reserved. This document is Cisco Public. Page 1 of 27

Contents

- Executive Summary
- Main Findings
- Introduction
- Technology Overview
- Cisco Unified Computing System
- Cisco UCS S3260 Storage Server
- Cisco UCS C220 M4 Rack Server
- Cisco UCS Virtual Interface Card 1387
- Cisco UCS 6300 Series Fabric Interconnect
- Cisco Nexus 9332PQ Switch
- Cisco UCS Manager
- Red Hat Enterprise Linux 7.3
- Red Hat Ceph Storage
- Solution Design for Tested Configuration
- Workload Characterization for Red Hat Ceph Storage
- Design Principles of Red Hat Ceph Storage on Cisco UCS
- Solution Overview
- Tested Solution
- Physical Setup
- Red Hat Enterprise Linux and Ceph Performance Setup
- Hardware Versions
- Benchmark Results
- Performance Baseline
- Ceph Benchmarking with the Ceph Benchmark Tool
- CBT Benchmark Results
- Sequential Write Performance
- Sequential Read Performance
- Random Read Performance
- Summary of Benchmark Results
- Recommendations for Cisco UCS S-Series and Red Hat Ceph Storage
- Conclusion
- For More Information

Executive Summary

This document describes Red Hat Ceph Storage performance using a Cisco UCS S3260 Storage Server solution. It is based on a Cisco Validated Design, Cisco UCS S3260 Storage Server with Red Hat Ceph Storage, and discusses performance for a Ceph-specific workload. The goal of the document is to show the read and write performance of Red Hat Ceph Storage for a specific workload based on Cisco Unified Computing System (Cisco UCS) architecture and to provide general Cisco UCS configuration recommendations for Ceph-specific workloads.

Main Findings

The main findings of the tests reported in this document are:

- The write performance of the tested solution is up to 30 MBps per Ceph object-storage daemon (OSD) node.
- The read performance of the tested solution is up to 95 MBps per Ceph OSD node.
- The Cisco UCS S3260 Storage Server can be used for all types of Red Hat Ceph Storage target workloads.

Introduction

With the continuing development of new technologies and the corresponding growth in the amount of data, organizations are looking for ways to shift large amounts of unstructured data to more cost-effective and flexible solutions. However, the challenges resulting from the shift to new technologies, including management, flexibility, performance, and data protection challenges, make implementing solutions difficult. But failure to implement new technologies contributes significantly to increased demands on capacity and, ultimately, to infrastructure costs. Storage now consumes a large amount of an organization's IT hardware budget, and business managers are beginning to question whether the return is worth the investment. In seeking solutions, customers have learned that new technologies such as software-defined storage can solve many problems without reducing availability or reliability. In fact, software-defined storage solutions can deliver greater availability and reliability, making data centers more enterprise ready than before.
One notable software-defined storage solution is Red Hat Ceph Storage. Ceph Storage is an open, cost-effective, software-defined storage solution that supports massively scalable cloud and object-storage workloads. It can also deliver enterprise features and high performance for transaction-intensive workloads, which are predominant for traditional storage and flash arrays. Together with Cisco UCS, Ceph Storage can deliver a fully enterprise-ready solution that can manage different workloads and still remain flexible.

The Cisco UCS S3260 Storage Server is an excellent platform to use with the main types of Ceph workloads, such as throughput- and capacity-optimized workloads. It is also excellent for workloads with a large number of I/O operations per second (IOPS). This document addresses the performance of throughput- and capacity-optimized workloads and provides recommendations about how to use the Cisco UCS S3260 for Ceph Storage.

Technology Overview

This section introduces the technologies used in the solution described in this document.

Cisco Unified Computing System

Cisco UCS is a state-of-the-art data center platform that unites computing, network, storage access, and virtualization resources into a single cohesive system. The main components of Cisco UCS are described here:

- Computing: The system is based on an entirely new class of computing system that incorporates rackmount and blade servers using Intel Xeon processor E5 and E7 CPUs. The Cisco UCS servers offer the patented Cisco Extended Memory Technology to support applications with large data sets and allow more virtual machines per server.
- Network: The system is integrated onto a low-latency, lossless, 10- or 40-Gbps unified network fabric. This network foundation consolidates LANs, SANs, and high-performance computing (HPC) networks, which are separate networks today. The unified fabric lowers costs by reducing the number of network adapters, switches, and cables, and by decreasing the power and cooling requirements.
- Virtualization: The system unleashes the full potential of virtualization by enhancing the scalability, performance, and operational control of virtual environments. Cisco security, policy enforcement, and diagnostic features are now extended into virtualized environments to better support changing business and IT requirements.
- Storage access: The system provides consolidated access to both SAN storage and network-attached storage (NAS) over the unified fabric. By unifying the storage access layer, Cisco UCS can access storage over Ethernet (with Network File System [NFS] or Small Computer System Interface over IP [iSCSI]), Fibre Channel, and Fibre Channel over Ethernet (FCoE). This approach provides customers with choice for storage access and investment protection.
In addition, server administrators can preassign storage-access policies for system connectivity to storage resources, simplifying storage connectivity and management for increased productivity. Cisco UCS is designed to deliver:

- Reduced total cost of ownership (TCO) and increased business agility
- Increased IT staff productivity through just-in-time provisioning and mobility support
- A cohesive, integrated system that unifies the technology in the data center
- Industry standards supported by a partner ecosystem of industry leaders
- Unified, embedded management for easy-to-scale infrastructure

Cisco UCS S3260 Storage Server

The Cisco UCS S3260 Storage Server (Figure 1) is a modular, high-density, high-availability dual-node rack server well suited for service providers, enterprises, and industry-specific environments. It addresses the need for dense, cost-effective storage for the ever-growing amounts of data. Designed for a new class of cloud-scale applications, it is simple to deploy and excellent for big data applications, software-defined storage environments such as Ceph and other unstructured data repositories, media streaming, and content distribution.

Figure 1. Cisco UCS S3260 Storage Server

Extending the capabilities of the Cisco UCS C3000 platform, the Cisco UCS S3260 helps you achieve the highest levels of data availability. With dual-node capability that is based on the Intel Xeon processor E v4 series, it offers up to 600 terabytes (TB) of local storage in a compact 4-rack-unit (4RU) form factor. All hard-disk drives (HDDs) can be asymmetrically split between the dual nodes and are individually hot-swappable. The drives can be built in an enterprise-class Redundant Array of Independent Disks (RAID) redundant design or used in passthrough mode. This high-density rack server easily fits in a standard 32-inch-depth rack, such as the Cisco R42610 Rack.

The Cisco UCS S3260 can be deployed as a standalone server in both bare-metal and virtualized environments. Its modular architecture reduces TCO by allowing you to upgrade individual components over time and as use cases evolve, without having to replace the entire system. The Cisco UCS S3260 uses a modular server architecture that, using Cisco's blade technology expertise, allows you to upgrade the computing or network nodes in the system without the need to migrate data from one system to another. It delivers:

- Dual server nodes
- Up to 36 computing cores per server node
- Up to 60 drives, mixing a large form factor (LFF) with up to 28 solid-state disk (SSD) drives plus 2 SSD SATA boot drives per server node
- Up to 512 GB of memory per server node (1 TB total)
- Support for 12-Gbps serial-attached SCSI (SAS) drives
- A system I/O controller with a Cisco UCS Virtual Interface Card (VIC) 1300 platform embedded chip supporting dual-port 40-Gbps connectivity
- High reliability, availability, and serviceability (RAS) features with tool-free server nodes, system I/O controller, easy-to-use latching lid, and hot-swappable and hot-pluggable components

Cisco UCS C220 M4 Rack Server

The Cisco UCS C220 M4 Rack Server (Figure 2) is the most versatile, general-purpose enterprise infrastructure and application server in the industry. It is a high-density 2-socket enterprise-class rack server that delivers industry-leading performance and efficiency for a wide range of enterprise workloads, including virtualization, collaboration, and bare-metal applications. The Cisco UCS C-Series Rack Servers can be deployed as standalone servers or as part of Cisco UCS to take advantage of Cisco's standards-based unified computing innovations that help reduce customers' TCO and increase their business agility.

Figure 2. Cisco UCS C220 M4 Rack Server

The enterprise-class Cisco UCS C220 M4 server extends the capabilities of the Cisco UCS portfolio in a 1RU form factor. It incorporates the Intel Xeon processor E v4 and v3 product family, next-generation DDR4 memory, and 12-Gbps SAS throughput, delivering significant performance and efficiency gains. The Cisco UCS C220 M4 delivers outstanding levels of expandability and performance in a compact 1RU package:

- Up to 24 DDR4 DIMMs for improved performance and lower power consumption
- Up to 8 small form-factor (SFF) drives or up to 4 LFF drives
- Support for a 12-Gbps SAS module RAID controller in a dedicated slot, leaving the remaining two PCI Express (PCIe) Generation 3.0 slots available for other expansion cards
- A modular LAN-on-motherboard (mLOM) slot that can be used to install a Cisco UCS VIC or third-party network interface card (NIC) without consuming a PCIe slot
- Two embedded 1 Gigabit Ethernet LAN-on-motherboard (LOM) ports

Cisco UCS Virtual Interface Card 1387

The Cisco UCS VIC 1387 (Figure 3) is a Cisco innovation. It provides a policy-based, stateless, agile server infrastructure for your data center. This dual-port Enhanced Quad Small Form-Factor Pluggable (QSFP+) half-height PCIe mLOM adapter is designed exclusively for Cisco UCS C-Series and S3260 Rack Servers.
The card supports 40 Gigabit Ethernet and FCoE. It incorporates Cisco's next-generation converged network adapter (CNA) technology and offers a comprehensive feature set, providing investment protection for future software feature releases. The card can present more than 256 PCIe standards-compliant interfaces to the host, and these can be dynamically configured as either NICs or host bus adapters (HBAs). In addition, the VIC supports Cisco Data Center Virtual Machine Fabric Extender (VM-FEX) technology. This technology extends the Cisco UCS fabric interconnect ports to virtual machines, simplifying server virtualization deployment.

Figure 3. Cisco UCS Virtual Interface Card 1387

The Cisco UCS VIC 1387 provides the following features and benefits:

- Stateless and agile platform: The personality of the card is determined dynamically at boot time using the service profile associated with the server. The number, type (NIC or HBA), identity (MAC address and World Wide Name [WWN]), failover policy, bandwidth, and quality-of-service (QoS) policies of the PCIe interfaces are all determined using the service profile. The capability to define, create, and use interfaces on demand provides a stateless and agile server infrastructure.
- Network interface virtualization: Each PCIe interface created on the VIC is associated with an interface on the Cisco UCS fabric interconnect, providing complete network separation for each virtual cable between a PCIe device on the VIC and the interface on the fabric interconnect.

Cisco UCS 6300 Series Fabric Interconnect

Cisco UCS 6300 Series Fabric Interconnects are core components of Cisco UCS, providing both network connectivity and management capabilities for the system (Figure 4). The Cisco UCS 6300 Series offers line-rate, low-latency, lossless 10 and 40 Gigabit Ethernet, FCoE, and Fibre Channel functions.

Figure 4. Cisco UCS 6300 Series Fabric Interconnect

The Cisco UCS 6300 Series provides the management and communication backbone for the Cisco UCS B-Series Blade Servers, 5100 Series Blade Server Chassis, and C-Series Rack Servers managed by Cisco UCS. All servers attached to the fabric interconnects become part of a single, highly available management domain. In addition, by supporting unified fabric, the Cisco UCS 6300 Series provides both LAN and SAN connectivity for all servers within the Cisco UCS domain. The Cisco UCS 6300 Series uses a cut-through network architecture, supporting deterministic, low-latency, line-rate 10 and 40 Gigabit Ethernet ports, switching capacity of 2.56 terabits per second (Tbps), and 320 Gbps of bandwidth per chassis, independent of packet size and enabled services.
The product family supports Cisco low-latency, lossless 10 and 40 Gigabit Ethernet unified network fabric capabilities, which increase the reliability, efficiency, and scalability of Ethernet networks. The fabric interconnects support multiple traffic classes over a lossless Ethernet fabric from the server through the fabric interconnects. Significant TCO savings can be achieved with an FCoE-optimized server design in which NICs, HBAs, cables, and switches are consolidated. The Cisco UCS 6332 32-Port Fabric Interconnect is a 1RU 40 Gigabit Ethernet and FCoE switch offering up to 2.56 Tbps throughput and up to 32 ports. The switch has 32 fixed 40-Gbps Ethernet and FCoE ports.

Both the Cisco UCS 6332 32-Port Fabric Interconnect and the Cisco UCS 6332-16UP 40-Port Fabric Interconnect have ports that can be configured for the breakout feature that supports connectivity between 40 Gigabit Ethernet ports and 10 Gigabit Ethernet ports. This feature provides backward compatibility to existing hardware that supports 10 Gigabit Ethernet. A 40 Gigabit Ethernet port can be used as four 10 Gigabit Ethernet ports. Using a 40 Gigabit Ethernet SFP, these ports on a Cisco UCS 6300 Series Fabric Interconnect can connect to another fabric interconnect that has four 10 Gigabit Ethernet SFPs. The breakout feature can be configured on ports 1 to 12 and ports 15 to 26 on the Cisco UCS 6332 fabric interconnect. Ports 17 to 34 on the Cisco UCS 6332-16UP fabric interconnect support the breakout feature.

Cisco Nexus 9332PQ Switch

Cisco Nexus 9000 Series Switches (Figure 5) include both modular and fixed-port switches that provide a flexible, agile, low-cost, application-centric infrastructure.

Figure 5. Cisco Nexus 9332PQ Switch

The Cisco Nexus 9300 platform consists of fixed-port switches designed for top-of-rack (ToR) and middle-of-row (MoR) deployment in data centers that support enterprise applications, service provider hosting, and cloud computing environments. They are Layer 2 and 3 nonblocking 10 and 40 Gigabit Ethernet switches with up to 2.56 Tbps of internal bandwidth. The Cisco Nexus 9332PQ Switch is a 1RU switch that supports 2.56 Tbps of bandwidth and over 720 million packets per second (mpps) across thirty-two 40-Gbps QSFP+ ports. All Cisco Nexus 9300 platform switches use dual-core 2.5-GHz x86 CPUs with 64-GB SSD drives and 16 GB of memory for enhanced network performance.
With the Cisco Nexus 9000 Series, organizations can quickly and easily upgrade existing data centers to carry 40 Gigabit Ethernet to the aggregation layer or to the spine (in a leaf-and-spine configuration) through advanced, cost-effective optics that enable the use of existing 10 Gigabit Ethernet fiber (a pair of multimode fiber [MMF] strands). Cisco provides two modes of operation for the Cisco Nexus 9000 Series. Organizations can use Cisco NX-OS Software to deploy the Cisco Nexus 9000 Series in standard Cisco Nexus switch environments (NX-OS mode). Organizations also can use a hardware infrastructure that is ready to support Cisco Application Centric Infrastructure (Cisco ACI) to take full advantage of an automated, policy-based, systems management approach (ACI mode).

Cisco UCS Manager

Cisco UCS Manager (Figure 6) provides unified, embedded management of all software and hardware components of Cisco UCS across multiple chassis and rack servers and thousands of virtual machines. It supports all Cisco UCS product models, including Cisco UCS B-Series Blade Servers, C-Series Rack Servers, and M-Series Modular Servers and Cisco UCS Mini, as well as the associated storage resources and networks. Cisco UCS Manager is embedded on a pair of Cisco UCS 6300 or 6200 Series Fabric Interconnects using a clustered, active-standby configuration for high availability. The manager participates in server provisioning, device discovery, inventory, configuration, diagnostics, monitoring, fault detection, auditing, and statistics collection.

Figure 6. Cisco UCS Manager

An instance of Cisco UCS Manager with all Cisco UCS components managed by it forms a Cisco UCS domain, which can include up to 160 servers. In addition to provisioning Cisco UCS resources, this infrastructure management software provides a model-based foundation for streamlining the day-to-day processes of updating, monitoring, and managing computing resources, local storage, storage connections, and network connections. By enabling better automation of processes, Cisco UCS Manager allows IT organizations to achieve greater agility and scale in their infrastructure operations while reducing complexity and risk. The manager provides flexible role- and policy-based management using service profiles and templates. Cisco UCS Manager manages Cisco UCS systems through an intuitive HTML5 or Java user interface and a command-line interface (CLI). It can register with Cisco UCS Central Software in a multidomain Cisco UCS environment, enabling centralized management of distributed systems scaling to thousands of servers.
Cisco UCS Manager can be integrated with Cisco UCS Director to facilitate orchestration and to provide support for converged infrastructure and infrastructure as a service (IaaS). The Cisco UCS XML API provides comprehensive access to all Cisco UCS Manager functions. The API provides Cisco UCS system visibility to higher-level systems management tools from independent software vendors (ISVs) such as VMware, Microsoft, and Splunk as well as tools from BMC, CA, HP, IBM, and others. ISVs and in-house developers can use the XML API to enhance the value of the Cisco UCS platform according to their unique requirements. Cisco UCS PowerTool for Cisco UCS Manager and the Python Software Development Kit (SDK) help automate and manage configurations within Cisco UCS Manager.
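To make the XML API concrete, the sketch below builds the body of a simple query request using only the Python standard library. The `configResolveClass` method and `classId` attribute follow the publicly documented Cisco UCS XML API; the cookie value is a placeholder that would normally come from a prior `aaaLogin` call, and the surrounding HTTP transport is omitted.

```python
import xml.etree.ElementTree as ET

def build_query(cookie: str, class_id: str) -> str:
    """Build a Cisco UCS XML API configResolveClass request body.

    The element and attribute names follow the documented UCS XML API;
    the cookie is a placeholder for a real aaaLogin session token.
    """
    req = ET.Element("configResolveClass", {
        "cookie": cookie,
        "classId": class_id,           # e.g. computeRackUnit for rack servers
        "inHierarchical": "false",     # do not expand child objects
    })
    return ET.tostring(req, encoding="unicode")

# Query all rack-unit server objects (dummy session cookie):
payload = build_query("dummy-cookie", "computeRackUnit")
```

In practice the ucsmsdk Python SDK mentioned above wraps exactly this kind of request, so hand-built XML is rarely needed outside of debugging.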

Red Hat Enterprise Linux 7.3

Red Hat Enterprise Linux (RHEL) is a high-performing operating system that has delivered outstanding value to IT environments for more than a decade. More than 90 percent of Fortune Global 500 companies use Red Hat products and solutions, including Red Hat Enterprise Linux. As the world's most trusted IT platform, Red Hat Enterprise Linux has been deployed in mission-critical applications at global stock exchanges, financial institutions, leading telcos, and animation studios. It also powers the websites of some of the most recognizable global retail brands. Red Hat Enterprise Linux:

- Delivers high performance, reliability, and security
- Is certified by the leading hardware and software vendors
- Scales from workstations, to servers, to mainframes
- Provides a consistent application environment across physical, virtual, and cloud deployments

Designed to help organizations make a seamless transition to emerging data center models that include virtualization and cloud computing, Red Hat Enterprise Linux includes support for major hardware architectures, hypervisors, and cloud providers, making deployments across physical and different virtual environments predictable and secure. Enhanced tools and new capabilities in this release enable administrators to tailor the application environment to efficiently monitor and manage computing resources and security.

Red Hat Ceph Storage

Red Hat Ceph Storage is an open, cost-effective, software-defined storage solution that enables massively scalable cloud and object-storage workloads. By unifying object storage, block storage, and file storage in one platform, Ceph Storage efficiently and automatically manages the petabytes of data needed to run businesses facing massive data growth. Ceph is a self-healing, self-managing platform with no single point of failure.
Ceph enables a scale-out cloud infrastructure built on industry-standard servers that significantly lowers the cost of storing enterprise data and helps enterprises manage their exponential data growth in an automated fashion. For OpenStack environments, Ceph Storage is tightly integrated with OpenStack services, including Nova, Cinder, Manila, Glance, Keystone, and Swift, and it offers user-guided storage lifecycle management. Ceph Storage was voted the number-one storage option by OpenStack users. The product's highly tunable, extensible, and configurable architecture offers mature interfaces for enterprise block and object storage, making it well suited for archive, rich media, and cloud infrastructure environments.

Ceph Storage is also well suited for object-storage workloads outside OpenStack because it is proven at web scale and flexible for demanding applications, and it offers the data protection, reliability, and availability that enterprises demand. It was designed from the foundation for web-scale object storage. Industry-standard APIs allow seamless migration of, and integration with, an enterprise's applications. A Ceph object-storage cluster is accessible using Amazon Simple Storage Service (S3), Swift, and native API protocols. Ceph has a lively and active open-source community contributing to its innovation. At Ceph's core is the Reliable Autonomic Distributed Object Store (RADOS) service, which stores data by spreading it across multiple industry-standard servers. Ceph uses Controlled Replication Under Scalable Hashing (CRUSH), a uniquely differentiated data placement algorithm that intelligently distributes the data pseudo-randomly across the cluster for better performance and data protection. Ceph supports both replication and erasure coding to protect data, and it provides multisite disaster-recovery options.
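The key idea behind CRUSH is that placement is computed from a hash rather than looked up in a central table, so any client can locate an object's OSDs independently. The toy sketch below illustrates only that idea with simple rendezvous hashing; it is not the CRUSH algorithm itself (CRUSH additionally weights devices and respects failure-domain hierarchies), and the OSD names are invented for the example.

```python
import hashlib

def place(object_name: str, osds: list, replicas: int = 3) -> list:
    """Toy deterministic pseudo-random placement (rendezvous hashing).

    NOT the real CRUSH algorithm: just a sketch of computing, rather
    than looking up, which OSDs hold an object's replicas. Every client
    hashing the same name gets the same, evenly spread placement.
    """
    ranked = sorted(
        osds,
        key=lambda osd: hashlib.sha256(f"{object_name}:{osd}".encode()).hexdigest(),
    )
    return ranked[:replicas]

# Hypothetical 10-node cluster, 3x replication:
osds = [f"osd.{i}" for i in range(10)]
primary_set = place("volume-42/object-7", osds)
```

Because the mapping is a pure function of the object name and the OSD list, adding or removing an OSD changes only the placements that involve it, which is what lets Ceph rebalance incrementally instead of reshuffling everything.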

Red Hat collaborates with the global open-source Ceph community to develop new Ceph features and then packages changes into a predictable, stable, enterprise-quality software-defined storage product, which is Red Hat Ceph Storage. This unique development model combines the advantage of a large development community with Red Hat's industry-leading support services to offer new storage capabilities and benefits to enterprises.

Solution Design for Tested Configuration

Red Hat Ceph Storage on Cisco UCS is a solution well suited for running different workloads with software-defined storage on enterprise-proven technology.

Workload Characterization for Red Hat Ceph Storage

One of the most important design considerations for software-defined storage is characterization of the target workload of your application. With the need for enhanced support for new object-storage workloads over the past several years, Red Hat Ceph Storage is particularly attractive, but it requires solid understanding and planning. The rise of flash-memory storage over the past decade and the ongoing development of software-defined storage systems have both created new possibilities for optimizing workloads. One of the main benefits of Ceph Storage is the capability to work on different workloads depending on the needs of the customer. It can be used and easily optimized on Cisco UCS for various workloads through the flexible use of different systems and components. As shown in Figure 7, the typical workloads can be characterized as follows:

- IOPS optimized: Sled chassis with a high amount of flash memory for high-IOPS workloads and use cases such as MySQL workloads
- Throughput optimized: Standard or dense chassis with a mixture of SSDs and HDDs for use cases such as rich media
- Capacity optimized: Dense or ultradense chassis with a high number of HDDs for use cases such as active archives

Figure 7. Workload Profiles for Red Hat Ceph Storage

Among these workloads, throughput- and capacity-optimized workloads present the biggest opportunities for Ceph Storage. The architectural design for these workloads differs. Throughput-optimized workloads can be served by adding a small amount of flash memory to the storage layer or by using high computing power in the front-end servers, whereas capacity-optimized workloads use HDDs only. A benefit of Ceph Storage on Cisco UCS is that you can mix different target workloads and achieve better usability and readiness for your data center. For example, your Ceph Storage target workload can include a capacity-optimized application such as backup and a throughput-optimized web-scale application, both on the same Cisco UCS S-Series Storage Server. Overall, Ceph Storage is gaining popularity because it can handle different workloads, allowing IT organizations to more easily incorporate it.

Design Principles of Red Hat Ceph Storage on Cisco UCS

Based on the workload characterization discussed in the preceding section, the general design of a Red Hat Ceph Storage solution should consider the following principles, summarized in Figure 8:

- The need for scale-out storage: Scalability, dynamic provisioning across a unified name space, and performance at scale are common needs that people hope to address by adding distributed scale-out storage to their data centers. For a few use cases, such as primary storage for a scale-up Oracle relational database management system (RDBMS), traditional storage appliances remain the right solution.
- Target workload: Ceph Storage pools can be deployed to serve three types of workload: IOPS-intensive, throughput-intensive, and capacity-intensive workloads. As noted in Table 1, server configurations should be chosen accordingly.
- Storage access method: Ceph Storage supports both block access pools and object access pools within a single Ceph cluster (additionally, distributed file access is in technical preview at the time of this writing). Block access is supported on replicated pools. Object access is supported on either replicated or erasure-coded pools.
- Capacity: Depending on cluster storage capacity needs, standard, dense, or ultradense servers can be chosen for Ceph storage pools. The Cisco UCS C-Series and S-Series provide several well-suited server models.
- Fault-domain risk tolerance: Ceph clusters are self-healing following hardware failure. Customers wanting to reduce the impact on performance and resources during the self-healing process should adhere to the minimum cluster server recommendations listed in Table 1.
- Data-protection method: With replication and erasure coding, Ceph Storage offers two data-protection methods that affect the overall design. Erasure-coded pools can provide a better price-to-performance ratio, and replicated pools typically provide higher absolute performance.

Figure 8. Red Hat Ceph Storage Design Considerations

Table 1 presents technical specifications you should follow for a successful implementation based on the design principles discussed here.

Table 1. Technical Specifications for Red Hat Ceph Storage

IOPS-optimized workload:
- Cluster size: Minimum of 10 OSD nodes
- Network: 10 to 40 Gbps
- CPU and memory: 4 to 10 cores per OSD; 16 GB + 2 GB per OSD node
- Ratio of OSD journal to disk media: 4:1 SSD:NVMe ratio, or all NVMe with co-located journals
- Data protection: Ceph RADOS block device (RBD; block) replicated pools

Throughput-optimized workload:
- Cluster size: Minimum of 10 OSD nodes
- Network: 10 to 40 Gbps (greater than 10 Gbps with more than 12 HDDs or nodes)
- CPU and memory: 1 core per 2 HDDs; 16 GB + 2 GB per OSD node
- Ratio of OSD journal to disk media: 12-18:1 HDD:NVMe ratio, or 4 or 5:1 HDD:SSD ratio
- Data protection: Ceph RBD (block) replicated pools; Ceph RADOS gateway (RGW; object) replicated pools

Capacity-optimized (archive) workload:
- Cluster size: Minimum of 7 OSD nodes
- Network: 10 Gbps (or 40 Gbps for latency-sensitive workloads)
- CPU and memory: 1 core per 2 HDDs; 16 GB + 2 GB per OSD node
- Ratio of OSD journal to disk media: All HDDs with co-located journals
- Data protection: Ceph RGW (object) erasure-coded pools

The tested Ceph Storage on Cisco UCS combination uses a mixed workload of throughput- and capacity-optimized configurations, as follows:

- Cluster size: 10 OSD nodes
- Network: All Ceph nodes connected with 40 Gbps
- CPU and memory: All nodes configured with 128 GB of memory and more than 28 cores
- OSD disk: Configured for a 6:1 HDD:SSD ratio
- Data protection: Ceph RBD with a replication factor of 3 (3x replication) and erasure coding
- Ceph monitor (MON) nodes: Deployed on Cisco UCS C220 M4 Rack Servers
- Ceph OSD nodes: Deployed on Cisco UCS S3260 Storage Servers
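The sizing rules of thumb in Table 1 for a throughput-optimized node can be turned into simple arithmetic. The helper below is an illustrative sketch, not an official sizing tool: it assumes one OSD daemon per HDD, and the 24-HDD-per-node figure in the usage example is a hypothetical drive count chosen only to show the tested 6:1 HDD:SSD journal ratio.

```python
import math

def throughput_node_sizing(hdd_count: int, hdd_ssd_journal_ratio: int = 5) -> dict:
    """Rough per-node sizing for a throughput-optimized Ceph OSD node.

    Follows the Table 1 rules of thumb: 1 core per 2 HDDs, 16 GB base
    memory plus 2 GB per OSD, and a 4-5:1 HDD:SSD journal ratio by
    default. Assumes one OSD daemon per HDD.
    """
    return {
        "osds": hdd_count,                                        # one OSD per HDD
        "min_cores": math.ceil(hdd_count / 2),                    # 1 core : 2 HDDs
        "min_memory_gb": 16 + 2 * hdd_count,                      # 16 GB + 2 GB/OSD
        "journal_ssds": math.ceil(hdd_count / hdd_ssd_journal_ratio),
    }

# Hypothetical S3260 node with 24 HDDs at the tested 6:1 HDD:SSD ratio:
sizing = throughput_node_sizing(24, hdd_ssd_journal_ratio=6)
```

For 24 HDDs this yields 24 OSDs, at least 12 cores, 64 GB of memory, and 4 journal SSDs, which is consistent with the tested nodes having 128 GB of memory and more than 28 cores.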

Solution Overview

The tested solution using Cisco UCS and Red Hat Ceph Storage is built on a three-part foundation (Figure 9):

- Integration and configuration of the Cisco UCS S3260 Storage Server into Cisco UCS Manager
- Base installation of Red Hat Enterprise Linux and preparation for the next step
- Deployment of Red Hat Ceph Storage

Figure 9. Tested Configuration

Detailed design and deployment steps are presented in the document Cisco UCS S3260 Storage Server with Red Hat Ceph Storage: Design and Deployment of Red Hat Ceph Storage 2.1 on Cisco UCS S3260 Storage Server.

Tested Solution

The tested solution uses a best-practices configuration. It demonstrates the performance of a mixed SSD and HDD configuration with a focus on throughput- and capacity-optimized workloads. To get a better understanding of Ceph performance, the tested solution focused on block storage devices with two different protection methods:

- Replication: Replicated storage pools provide the most common protection method. This protection method is the only one that Red Hat supports for block devices. The default protection method uses 3x replication.
- Erasure coding: Erasure-coded pools are useful for cost-effective storage of data such as data in active archives. This method creates a single copy plus parity and uses n = k + m notation, where k is the number of data chunks, m is the number of coding chunks, and n is the total number of chunks placed by CRUSH across the OSD nodes. The values of k and m can vary depending on the customer's preference. Erasure coding is currently not supported for block devices, but the tested solution gives an indication of the performance difference between replication and erasure coding.

The following sections describe the physical setup of the tested solution with all the hardware and software components.
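The capacity trade-off between the two protection methods follows directly from the n = k + m notation above. A minimal sketch, with the 4+2 erasure-code profile chosen purely as an illustrative example (the document does not state which k and m were tested):

```python
def usable_fraction_replicated(copies: int) -> float:
    """Fraction of raw capacity that is usable with n-way replication."""
    return 1.0 / copies

def usable_fraction_ec(k: int, m: int) -> float:
    """Fraction of raw capacity usable with a k+m erasure-coded pool:
    k data chunks out of n = k + m total chunks placed across OSDs."""
    return k / (k + m)

# 3x replication stores every object three times: 1/3 of raw capacity is usable.
rep = usable_fraction_replicated(3)

# An illustrative 4+2 erasure-coded pool tolerates the loss of any 2 chunks
# yet keeps 2/3 of raw capacity usable.
ec = usable_fraction_ec(4, 2)
```

This is why the document characterizes erasure-coded pools as offering a better price-to-performance ratio: the same protection level consumes roughly half the raw capacity of 3x replication, at the cost of the extra coding work on reads and writes.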

Physical Setup

Figure 10 shows the rack configuration with the following components:

- Two Cisco Nexus 9332PQ Switches for client access
- Two Cisco UCS 6248 fabric interconnects for management of clients
- Two Cisco UCS 6332 fabric interconnects for management of the Ceph environment
- Three Cisco UCS C220 M4 servers for Ceph MON
- Five Cisco UCS S3260 servers for Ceph OSD
- Two Cisco UCS chassis with 10 Cisco UCS B200 M3 Blade Servers for Ceph clients (24 virtual machines running Red Hat Enterprise Linux 7.3 configured in all)

Figure 10. Overview of Rack Configuration

Figure 11 shows the network setup for the tested solution. Both Cisco UCS chassis with ten Cisco UCS B200 M3 blades are fully connected with thirty-two 10-Gbps links to both Cisco UCS 6248 fabric interconnects. Each Cisco UCS 6248 fabric interconnect is connected with twelve 10-Gbps uplinks to the Cisco Nexus 9332PQ, helping ensure that each Ceph client has a theoretical bandwidth of 20 Gbps or more per blade. The Ceph environment is fully connected with 40-Gbps network technology, helping ensure that there are no bandwidth limitations for the OSD nodes. Each Ceph OSD node is connected through two 40-Gbps links to each Cisco UCS 6332 fabric interconnect. Each Ceph MON node is connected with a single 40-Gbps link to each Cisco UCS 6332 fabric interconnect. Both Cisco UCS 6332 fabric interconnects are connected with four 40-Gbps links to each Cisco Nexus 9332PQ.
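The 20-Gbps-per-blade figure can be checked from the link counts above. The helper below is a hypothetical illustration, not a Cisco sizing tool; it takes the narrower of the two aggregation stages on the client path (chassis-to-fabric-interconnect links versus fabric-interconnect-to-Nexus uplinks) and divides by the number of blades:

```python
def per_blade_gbps(chassis_links, fi_uplinks, link_gbps, blades):
    # Per-blade theoretical bandwidth is limited by the narrower of the two
    # aggregation stages feeding the blades.
    chassis_bw = chassis_links * link_gbps  # chassis-to-FI stage
    uplink_bw = fi_uplinks * link_gbps      # FI-to-Nexus stage
    return min(chassis_bw, uplink_bw) / blades

# 32 chassis links, 2 FIs x 12 uplinks each = 24 uplinks, 10-Gbps links, 10 blades
print(per_blade_gbps(32, 24, 10, 10))  # 24.0 Gbps, above the 20-Gbps target
```

The uplink stage (24 x 10 Gbps = 240 Gbps) is the bottleneck, which still leaves 24 Gbps of theoretical bandwidth per blade.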

Figure 11. Network Setup

The tested solution follows the best practice of separating the Ceph public network from the Ceph cluster network. All Ceph OSD nodes run with one 40-Gbps interface in their own private LAN for the cluster network, keeping all cluster traffic under the Cisco UCS 6332 fabric interconnects. This approach prevents additional network traffic on the client network. In addition, all Ceph OSD nodes run with one 40-Gbps interface on the public network, the same as all Ceph MON nodes and all Ceph clients. All network interfaces were running with a jumbo-frame maximum transmission unit (MTU).

Red Hat Enterprise Linux and Ceph Performance Setup

Following the document Cisco UCS S3260 Storage Server with Red Hat Ceph Storage: Design and Deployment of Red Hat Ceph Storage 2.1 on Cisco UCS S3260 Storage Server, the CPU BIOS settings and the virtual NIC (vNIC) adapter settings were configured for optimal use under Red Hat Enterprise Linux 7.3. The setup for the OSD and journal drives was as follows:

OSD HDD

- RAID 0
- Access policy: Read Write
- Write cache policy: Always Write Back
- Drive cache: Disable
- Read policy: Normal
- I/O policy: Direct

Journal SSD

- JBOD

The configuration of the Ceph Storage cluster follows Red Hat best practices, with modifications in the Ansible all file.

Hardware Versions

Table 2 provides an overview of the tested hardware configuration.

Table 2. Hardware Versions

OSD tier: 5 Cisco UCS S3260 chassis, each with 2 Cisco UCS C3260 M4 server nodes (10 OSD nodes overall)

Cisco UCS C3260 M4:
- CPU: 2 x Intel Xeon processor E v4 CPUs at 2.4 GHz
- Memory: 8 x 16-GB 2400-MHz DDR4
- Network: 1 x Cisco UCS VIC 1387 with dual-port 40 Gbps
- RAID controller: 1 x Broadcom ROC 3316i with 4-GB cache
- Storage:
  - Boot: 2 x 120-GB SATA Intel DC S3500 SSDs (RAID 1)
  - Journal: 4 x 400-GB SAS Toshiba PX02SMF040 SSDs (JBOD)
  - OSD: 24 x 6-TB NL-SAS Seagate ST6000NM0014 drives (RAID 0)

MON tier: 3 Cisco UCS C220 M4 servers

Cisco UCS C220 M4:
- CPU: 2 x Intel Xeon processor E v4 CPUs at 2.2 GHz
- Memory: 8 x 16-GB 2400-MHz DDR4
- Network: 1 x Cisco UCS VIC 1385 with dual-port 40 Gbps
- RAID controller: 1 x Broadcom ROC 3108i
- Storage: Boot: 2 x 600-GB SAS 10,000-rpm drives (RAID 1)

Client tier: 10 Cisco UCS B200 M3 servers

Cisco UCS B200 M3:
- CPU: 2 x Intel Xeon processor E v2 CPUs at 2.8 GHz
- Memory: 16 x 16-GB 1600-MHz DDR3
- Network: 1 x Cisco UCS VIC 1240 with dual-port 40 Gbps
- RAID controller: 1 x LSI SAS 2004
- Storage: Boot: 2 x 300-GB SAS 10,000-rpm drives (RAID 1)

Software Distributions and Versions

The required software distribution versions are listed in Table 3.

Table 3. Software Versions

Computing (chassis): Cisco UCS S3260
- Chassis management controller: 2.0(13e)
- Shared adapter: 4.1(2d)

Computing (server node): Cisco UCS C3260 M4
- BIOS: C3x60M
- Board controller: B073
- Cisco Integrated Management Controller (IMC): 2.0(13f)
- Storage controller: 4.1(2d)

Computing (rack server): Cisco UCS C220 M4S
- BIOS: C220M
- Board controller: 32.0
- Cisco IMC: 2.0(13f)
- Cisco FlexFlash controller: build 165

Network: Cisco UCS 6332 fabric interconnect
- Cisco UCS Manager: 3.1(2c)
- Kernel: 5.0(3)N2(3.12c)
- System: 5.0(3)N2(3.12c)

Network: Cisco Nexus 9332PQ Switch
- Cisco NX-OS Software: 7.0(3)I5(1)

Software
- Red Hat Enterprise Linux Server: 7.3 (x86_64)
- Ceph: el7cp

Benchmark Results

Because Ceph is a software-defined storage solution built on various hardware components, it is important to get a solid understanding of the whole solution by running various performance benchmarks. The recommended approach is to run specific network and storage base-performance benchmarks first, before starting a comprehensive Ceph benchmarking process.

Performance Baseline

To understand the maximum performance of the tested Ceph solution, the recommended approach is to perform some simple base testing of the network and storage using Linux tools such as iperf3 and fio. For the base network performance, we conducted various iperf3 tests between all Ceph components. Table 4 provides a summary of the network performance.

Table 4. Network Baseline Performance with iperf3

              Ceph OSD    Ceph MON    Ceph Client
Ceph OSD      39.6 Gbps   39.6 Gbps   9.9 Gbps
Ceph MON      39.5 Gbps   39.7 Gbps   9.9 Gbps
Ceph Client   9.9 Gbps    9.9 Gbps    9.9 Gbps

Additional information about optimizing the maximum network performance on a Cisco UCS S3260 Storage Server can be found in the document Achieve Optimal Network Throughput on the Cisco UCS S3260 Storage Server.

We next evaluated base performance for the Cisco UCS S3260 HDDs and SSDs. At first, we tested a single HDD and a single SSD. Then, to see the maximum performance we could get from a single S3260 node, we ran a performance baseline test using 24 HDDs and 4 SSDs. We covered both extremes, running fio tests for a single device and genfio tests for multiple devices, with an I/O depth of 1 at block sizes of 4 KB and 4 MB. The results are shown in Table 5.

Table 5. SSD and HDD Baseline Performance with fio and genfio

4-KB block size     1 Seagate 6-TB     24 Seagate 6-TB     1 Toshiba 400-GB    4 Toshiba 400-GB
                    HDD (RAID 0)       HDDs (RAID 0)       SSD (JBOD)          SSDs (JBOD)
Sequential read     24,900 IOPS        140,000 IOPS        8,400 IOPS          37,400 IOPS
Sequential write    26,880 IOPS        95,100 IOPS         20,400 IOPS         81,300 IOPS
Random read         196 IOPS           2,086 IOPS          10,400 IOPS         30,600 IOPS
Random write        747 IOPS           4,653 IOPS          19,600 IOPS         79,400 IOPS

4-MB block size
Sequential read     226 MBps           5358 MBps           600 MBps            2988 MBps
Sequential write    156 MBps           3611 MBps           430 MBps            1641 MBps
Random read         167 MBps           2730 MBps           890 MBps            2753 MBps
Random write        185 MBps           2233 MBps           427 MBps            1265 MBps

The performance baseline for the network and disks provided a solid understanding of the maximum performance that can be achieved on Cisco UCS servers. The results may vary in other configurations; they represent an average of the components used in the testing reported here.

Ceph Benchmarking with the Ceph Benchmark Tool

The next step in the performance benchmarking process is a comprehensive test of the solution performance using the tested Ceph cluster.
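The 4-MB sequential results in Table 5 also show how close a fully loaded S3260 node comes to linear scaling of a single drive. The helper below is a hypothetical illustration using the numbers from the table:

```python
def scaling_efficiency(single_mbps, n_devices, measured_mbps):
    # Ratio of the measured aggregate throughput to perfect linear scaling
    # (n devices each delivering the single-device figure).
    return measured_mbps / (single_mbps * n_devices)

# 4-MB sequential results for the 24 Seagate HDDs (Table 5)
print(round(scaling_efficiency(226, 24, 5358), 3))  # read:  0.988
print(round(scaling_efficiency(156, 24, 3611), 3))  # write: 0.964
```

At roughly 96 to 99 percent of linear scaling, the RAID controller and SAS fabric add almost no aggregation penalty for large sequential I/O.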
A common benchmarking tool for examining the performance of a Ceph cluster is the Ceph Benchmark Tool (CBT). CBT is based on Python and can use various benchmark drivers such as radosbench, librbdfio, kvmrbdfio, and rbdfio. The tests reported here used radosbench, which comes with the ceph-common package. It contains a benchmarking facility that exercises the cluster using librados, the low-level native object-storage API provided by Ceph.

CBT requires several settings on the administration node and the client nodes, which are described on the CBT website. To run the tests, you need to configure different roles for the nodes:

- Administration or head node: This node can access and manage the Ceph cluster and initiates the CBT command.
- Client nodes: These nodes can access the cluster and generate the load.
- OSD and MON nodes: These nodes have the usual Ceph functions in a Ceph cluster. They collect the performance data from the CBT benchmark testing and send it to the administration or head node.

The tested solution used a separate Cisco UCS C220 M4 node as the administration or head node. For the CBT clients, the solution used 10 physical blades with 24 virtual Red Hat Enterprise Linux machines. The CBT OSD and MON nodes were installed on the Cisco UCS S3260 and C220 M4 nodes. Figure 12 shows the CBT setup for the tested solution.

Figure 12. CBT Setup

An example of a CBT configuration file for initiating the benchmark is shown here. In this setup, 24 CBT clients, 10 OSD nodes, and 3 MON nodes are configured. The benchmark test runs with an object size of 4 MB and a sequential workload.

cluster:
  user: "cephadm"
  head: "cephadm"
  clients: ["client1", "client2", "client3", "client4", "client5", "client6",
            "client7", "client8", "client9", "client10", "client11", "client12",
            "client13", "client14", "client15", "client16", "client17", "client18",
            "client19", "client20", "client21", "client22", "client23", "client24"]
  osds: ["cephosd1", "cephosd2", "cephosd3", "cephosd4", "cephosd5",
         "cephosd6", "cephosd7", "cephosd8", "cephosd9", "cephosd10"]
  mons:
    cephmon1:
      a: ":6789"
    cephmon2:
      a: ":6789"
    cephmon3:
      a: ":6789"
  iterations: 1
  rebuild_every_test: False
  use_existing: True
  clusterid: "ceph"
  tmp_dir: "/tmp/cbt"
  pool_profiles:
    replicated:
      pg_size: 4096
      pgp_size: 4096
      replication: 3
benchmarks:
  radosbench:
    op_size: [ 4194304 ]
    write_only: False
    time: 300
    concurrent_ops: [ 128 ]
    concurrent_procs:
    use_existing: True
    pool_profile: "replicated"
    pool_per_proc: False
    target_pool: "rados-bench-cbt"
    readmode: "seq"
    osd_ra: [131072]

For the CBT performance benchmark, all tests were run in sequential order. The tests also used a patch that helped ensure that the load on the cluster (CPU and disk) remained under 5 percent between runs, because the CBT benchmark deletes all objects in a pool after each run, which can cause a higher-than-normal load in the cluster.

Note: If you don't implement the patch, make sure that after each CBT run you wait until the cluster returns to a normal load.

CBT Benchmark Results

The CBT benchmarking focused on a single object size of 4 MB, with the Ceph cluster configured with a 6:1 HDD:SSD ratio for throughput- and capacity-optimized workloads. Because latency is not an important factor for these workloads, the testing concentrated on bandwidth performance only. Testing focused on three benchmarks for the whole cluster, with a comparison between Ceph replication pools and Ceph erasure-coding pools:

- Sequential write performance
- Sequential read performance
- Random read performance

To provide a better understanding of the result values, the tests used a performance value based on MBps per OSD node. This value is independent of the size of the cluster and provides a better comparison in the event that components of the cluster, such as SSDs and HDDs, are changed.

Sequential Write Performance

The sequential write performance test compared the sequential write performance of a replication pool with 3x replication and the performance of an erasure-coded pool. Peak performance was 19.7 MBps per OSD node for the replication pool compared to 30.7 MBps per OSD node for the erasure-coded pool. Erasure-coded pools perform fewer backend write operations than a pool with 3x replication, resulting in higher bandwidth. In general, write performance depends on the configuration of the journals and the SSD that is being used.
An NVMe SSD, which is also available for the Cisco UCS S3260, would result in higher write bandwidth. Figure 13 shows the results of the test.
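One way to reason about the write gap is backend write amplification: how many bytes the cluster must write for each byte a client writes. The model below is a deliberate simplification (it ignores journal double-writes, which multiply both pool types equally) and assumes an illustrative 4 + 2 erasure-coding layout, since the tested m value is not stated here:

```python
def write_amp(replicas=None, k=None, m=None):
    # Backend bytes written per client byte:
    # replication writes the full object 'replicas' times;
    # erasure coding writes k data chunks plus m coding chunks.
    return float(replicas) if replicas is not None else (k + m) / k

rep_amp = write_amp(replicas=3)  # 3.0 backend bytes per client byte
ec_amp = write_amp(k=4, m=2)     # 1.5 backend bytes per client byte (assumed 4+2)
print(rep_amp / ec_amp)          # 2.0  -> ideal EC write advantage under this model
print(round(30.7 / 19.7, 2))     # 1.56 -> advantage actually measured per OSD node
```

The measured 1.56x advantage points in the direction the model predicts; the shortfall from the ideal 2.0x reflects erasure-coding compute overhead and other real-world costs the model leaves out.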

Figure 13. Sequential Write Performance with CBT

Sequential Read Performance

The sequential read performance test compared a 3x replication pool and an erasure-coded pool. Peak performance was 82.3 MBps per OSD node for the 3x replication pool and 51.2 MBps per OSD node for the erasure-coded pool. Replication pools achieve higher read bandwidth because fewer read operations are performed: a single object is read, compared to four chunks in an erasure-coded pool. Read operations come straight from the disks. Figure 14 shows the results of the test.
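The read results point the opposite way from the write results: a replicated read is served from one complete copy, while an erasure-coded read must fetch and reassemble its data chunks (four in this configuration). A minimal sketch of that backend-operation count, with illustrative helper names of our own:

```python
def backend_reads_per_object(replicas=None, k=None):
    # A replicated read needs one complete copy from the primary OSD;
    # an erasure-coded read must gather all k data chunks.
    return 1 if replicas is not None else k

print(backend_reads_per_object(replicas=3))  # 1
print(backend_reads_per_object(k=4))         # 4
print(round(82.3 / 51.2, 2))                 # 1.61 -> measured sequential-read gap
```

The extra chunk fetches and reassembly work explain why the replicated pool sustains roughly 1.6 times the sequential read bandwidth of the erasure-coded pool.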

Figure 14. Sequential Read Performance with CBT

Random Read Performance

The random read performance test compared the random read performance of a 3x replication pool and the performance of an erasure-coded pool. Peak bandwidth was about 94.6 MBps per OSD node for the 3x replication pool and 83.5 MBps per OSD node for the erasure-coded pool. In these tests, random reads achieved higher throughput than sequential reads because Read Ahead was not enabled in the read policy of each OSD drive, and sequential reading is serialized per placement group. Figure 15 shows the results of the tests.

Figure 15. Random Read Performance with CBT

Summary of Benchmark Results

To summarize the performance benchmarks: the Cisco UCS S3260 Storage Server showed excellent performance, and it offers additional options to improve bandwidth. NVMe technology can be added to Cisco UCS S-Series servers, increasing performance further. In addition, the S-Series is an excellent platform for running multiple different Ceph workloads, and it helps customers manage a comprehensive software-defined storage environment with the easy-to-use Cisco UCS Manager management interface.

Recommendations for Cisco UCS S-Series and Red Hat Ceph Storage

The main advantages of the Cisco UCS S3260 are its flexibility to host a variety of object-storage workloads, its high network performance, and its management simplicity. The Cisco UCS C220 M4 front-end rack servers can be considered fixed-configuration servers, whereas the Cisco UCS S3260 servers can be customized in a variety of ways to meet different Ceph Storage target workloads. Table 6 provides an overview of the recommended configurations for various workloads.

Table 6. Recommendations for Cisco UCS S3260 with Red Hat Ceph Storage

IOPS optimized
- CPU: 2 Intel Xeon processor E v4 CPUs at 2.4 GHz or higher
- Memory: 128 GB
- Network: 10 to 40 Gbps
- Storage: 1 NVMe drive for journals and 4 SSDs for OSDs
- Comments: Physically separate workloads in a dual-node S3260, in which one node hosts the IOPS-optimized workload and the other node hosts any other workload; minimum of 10 dual-node S3260 servers required

Throughput optimized
- CPU: Up to 2 Intel Xeon processor E v4 CPUs
- Memory: 128 to 256 GB
- Network: 40 Gbps
- Storage: Up to 3 NVMe drives or 12 SSDs for journals and up to 48 to 56 HDDs for OSDs
- Comments: CPU and memory can be lower depending on the number of OSDs; minimum of 10 single-node or 5 dual-node S3260 servers required

Capacity optimized
- CPU: Up to 2 Intel Xeon processor E v4 CPUs
- Memory: 128 to 256 GB
- Network: 40 Gbps
- Storage: Up to 60 HDDs for OSDs (optional use of SSDs for journals)
- Comments: CPU and memory can be lower depending on the number of OSDs; minimum of 7 single-node or 4 dual-node S3260 servers required

The use of Cisco UCS Manager greatly simplifies management and scaling for the Red Hat Ceph Storage solution compared to traditional management and scaling operations using the local server BIOS and network switch management. Performance can also be enhanced by configuring each HDD OSD as an individual RAID 0 device and enabling the Read Ahead read policy for each OSD drive.

Conclusion

The use of software-defined storage is growing rapidly, and Red Hat Ceph Storage and the Cisco UCS S3260 Storage Server together provide an excellent solution for organizations adopting the technology. The flexibility of the Cisco UCS S3260 helps customers use Ceph Storage in a variety of ways, reducing overall TCO. The solution delivers high performance through a flexible, composable architecture and an easy-to-use management interface through Cisco UCS Manager, two capabilities that are important for Ceph Storage.
Customers can now scale independently and achieve high performance for all types of workloads. In benchmark testing, the Cisco UCS S-Series demonstrated read and write performance that exceeds current standards and sets a new milestone for software-defined storage with Ceph Storage.

For More Information

For additional details about the solution and its components, see the following documents, both referenced in this paper:

- Cisco UCS S3260 Storage Server with Red Hat Ceph Storage: Design and Deployment of Red Hat Ceph Storage 2.1 on Cisco UCS S3260 Storage Server
- Achieve Optimal Network Throughput on the Cisco UCS S3260 Storage Server

Red Hat Ceph Storage and Samsung NVMe SSDs for intensive workloads Red Hat Ceph Storage and Samsung NVMe SSDs for intensive workloads Power emerging OpenStack use cases with high-performance Samsung/ Red Hat Ceph reference architecture Optimize storage cluster performance

More information

Cisco UCS C240 M4 I/O Characterization

Cisco UCS C240 M4 I/O Characterization White Paper Cisco UCS C240 M4 I/O Characterization Executive Summary This document outlines the I/O performance characteristics of the Cisco UCS C240 M4 Rack Server using the Cisco 12-Gbps SAS modular

More information

SAP High-Performance Analytic Appliance on the Cisco Unified Computing System

SAP High-Performance Analytic Appliance on the Cisco Unified Computing System Solution Overview SAP High-Performance Analytic Appliance on the Cisco Unified Computing System What You Will Learn The SAP High-Performance Analytic Appliance (HANA) is a new non-intrusive hardware and

More information

Managing Cisco UCS C3260 Dense Storage Rack Server

Managing Cisco UCS C3260 Dense Storage Rack Server Managing Cisco UCS C3260 Dense Storage Rack Server This chapter contains the following topics: About Cisco UCS C3260 Dense Storage Rack Server, page 1 Cisco UCS C3260 Dense Storage Rack Server Architectural

More information

Cisco Application Centric Infrastructure (ACI) Simulator

Cisco Application Centric Infrastructure (ACI) Simulator Data Sheet Cisco Application Centric Infrastructure (ACI) Simulator Cisco Application Centric Infrastructure Overview Cisco Application Centric Infrastructure (ACI) is an innovative architecture that radically

More information

Introducing SUSE Enterprise Storage 5

Introducing SUSE Enterprise Storage 5 Introducing SUSE Enterprise Storage 5 1 SUSE Enterprise Storage 5 SUSE Enterprise Storage 5 is the ideal solution for Compliance, Archive, Backup and Large Data. Customers can simplify and scale the storage

More information

Fibre Channel over Ethernet and 10GBASE-T: Do More with Less

Fibre Channel over Ethernet and 10GBASE-T: Do More with Less White Paper Fibre Channel over Ethernet and 10GBASE-T: Do More with Less What You Will Learn Over the past decade, data centers have grown both in capacity and capabilities. Moore s Law which essentially

More information

Deploy a High-Performance Database Solution: Cisco UCS B420 M4 Blade Server with Fusion iomemory PX600 Using Oracle Database 12c

Deploy a High-Performance Database Solution: Cisco UCS B420 M4 Blade Server with Fusion iomemory PX600 Using Oracle Database 12c White Paper Deploy a High-Performance Database Solution: Cisco UCS B420 M4 Blade Server with Fusion iomemory PX600 Using Oracle Database 12c What You Will Learn This document demonstrates the benefits

More information

Extremely Fast Distributed Storage for Cloud Service Providers

Extremely Fast Distributed Storage for Cloud Service Providers Solution brief Intel Storage Builders StorPool Storage Intel SSD DC S3510 Series Intel Xeon Processor E3 and E5 Families Intel Ethernet Converged Network Adapter X710 Family Extremely Fast Distributed

More information

VMware Virtual SAN on Cisco UCS S3260 Storage Server Deployment Guide

VMware Virtual SAN on Cisco UCS S3260 Storage Server Deployment Guide VMware Virtual SAN on Cisco UCS S3260 Storage Server Deployment Guide May 2018 2018 Cisco and/or its affiliates. All rights reserved. This document is Cisco Public Information. Page 1 of 23 Contents Executive

More information

TITLE. the IT Landscape

TITLE. the IT Landscape The Impact of Hyperconverged Infrastructure on the IT Landscape 1 TITLE Drivers for adoption Lower TCO Speed and Agility Scale Easily Operational Simplicity Hyper-converged Integrated storage & compute

More information

Data Center solutions for SMB

Data Center solutions for SMB Data Center solutions for SMB VSPEX proven infrastructure Cisco Connect 2013 Ciprian Pirv Vendor Distri Integrator Vendor: Vendor: R&D, Marketing R&D, Marketing Distributor: Financing, stocks & logistics,

More information

Cisco UCS C250 M2 Extended-Memory Rack-Mount Server

Cisco UCS C250 M2 Extended-Memory Rack-Mount Server Cisco UCS C250 M2 Extended-Memory Rack-Mount Server Product Overview Cisco UCS C-Series Rack-Mount Servers extend unified computing innovations to an industry-standard form factor to help reduce total

More information

Next Generation Computing Architectures for Cloud Scale Applications

Next Generation Computing Architectures for Cloud Scale Applications Next Generation Computing Architectures for Cloud Scale Applications Steve McQuerry, CCIE #6108, Manager Technical Marketing #clmel Agenda Introduction Cloud Scale Architectures System Link Technology

More information

Broadberry. Hyper-Converged Solution. Date: Q Application: Hyper-Converged S2D Storage. Tags: Storage Spaces Direct, DR, Hyper-V

Broadberry. Hyper-Converged Solution. Date: Q Application: Hyper-Converged S2D Storage. Tags: Storage Spaces Direct, DR, Hyper-V TM Hyper-Converged Solution Date: Q2 2018 Application: Hyper-Converged S2D Storage Tags: Storage Spaces Direct, DR, Hyper-V The Cam Academy Trust Set up in 2011 to oversee the conversion of Comberton Village

More information

Overview. About the Cisco UCS S3260 System

Overview. About the Cisco UCS S3260 System About the Cisco UCS S3260 System, on page 1 How to Use This Guide, on page 3 Cisco UCS S3260 System Architectural, on page 4 Deployment Options, on page 5 Management Through Cisco UCS Manager, on page

More information

UCS Architecture Overview

UCS Architecture Overview UCS Architecture Overview Max Alvarado Brenes, Datacenter Systems Engineer, Central America BRKCOM-1005 Agenda Introduction Cisco s Datacenter Vision Unified Computing Systems UCS Hardware Components UCS

More information

The FlashStack Data Center

The FlashStack Data Center SOLUTION BRIEF The FlashStack Data Center THE CHALLENGE: DATA CENTER COMPLEXITY Deploying, operating, and maintaining data center infrastructure is complex, time consuming, and costly. The result is a

More information

Veeam Availability Solution for Cisco UCS: Designed for Virtualized Environments. Solution Overview Cisco Public

Veeam Availability Solution for Cisco UCS: Designed for Virtualized Environments. Solution Overview Cisco Public Veeam Availability Solution for Cisco UCS: Designed for Virtualized Environments Veeam Availability Solution for Cisco UCS: Designed for Virtualized Environments 1 2017 2017 Cisco Cisco and/or and/or its

More information

Top 5 Reasons to Consider

Top 5 Reasons to Consider Top 5 Reasons to Consider NVM Express over Fabrics For Your Cloud Data Center White Paper Top 5 Reasons to Consider NVM Express over Fabrics For Your Cloud Data Center Major transformations are occurring

More information

Cisco 4000 Series Integrated Services Routers: Architecture for Branch-Office Agility

Cisco 4000 Series Integrated Services Routers: Architecture for Branch-Office Agility White Paper Cisco 4000 Series Integrated Services Routers: Architecture for Branch-Office Agility The Cisco 4000 Series Integrated Services Routers (ISRs) are designed for distributed organizations with

More information

Cisco UCS-Mini Solution with StorMagic SvSAN with VMware vsphere 5.5

Cisco UCS-Mini Solution with StorMagic SvSAN with VMware vsphere 5.5 White Paper Cisco UCS-Mini Solution with StorMagic SvSAN with VMware vsphere 5.5 2015 Cisco and/or its affiliates. All rights reserved. This document is Cisco Public. Page 1 of 18 Introduction Executive

More information

Commvault MediaAgent on Cisco UCS C240 M5 Rack Server

Commvault MediaAgent on Cisco UCS C240 M5 Rack Server Commvault MediaAgent on Cisco UCS C240 M5 Rack Server This document provides an introduction to the process of deploying Commvault Data Platform on the Cisco UCS C240 M5 Rack Server for a traditional Commvault

More information

Cisco UCS. Click to edit Master text styles Second level Third level Fourth level Fifth level

Cisco UCS. Click to edit Master text styles Second level Third level Fourth level Fifth level Cisco UCS Sunucu Platformlarında İnovasyon Ahmet Keçeciler, Uzman Sistem Danışmanı Biltam Agenda Introduction Cisco s Datacenter Vision Unified Computing Systems UCS Hardware Components UCS Management

More information

Cisco UCS C250 M2 Extended-Memory Rack-Mount Server

Cisco UCS C250 M2 Extended-Memory Rack-Mount Server Cisco UCS C250 M2 Extended-Memory Rack-Mount Server Product Overview Cisco UCS C-Series Rack-Mount Servers extend unified computing innovations to an industry-standard form factor to help reduce total

More information

VersaStack Design Guide for IBM Cloud Object Storage with Cisco UCS S3260 Storage Server

VersaStack Design Guide for IBM Cloud Object Storage with Cisco UCS S3260 Storage Server VersaStack Design Guide for IBM Cloud Object Storage with Cisco UCS S3260 Storage Server Last Updated: June 22, 2017 About the Cisco Validated Design (CVD) Program The CVD program consists of systems and

More information

Cisco UCS SmartStack for Microsoft SQL Server 2014 with VMware: Reference Architecture

Cisco UCS SmartStack for Microsoft SQL Server 2014 with VMware: Reference Architecture White Paper Cisco UCS SmartStack for Microsoft SQL Server 2014 with VMware: Reference Architecture Executive Summary Introduction Microsoft SQL Server 2005 has been in extended support since April 2011,

More information

Table 1 The Elastic Stack use cases Use case Industry or vertical market Operational log analytics: Gain real-time operational insight, reduce Mean Ti

Table 1 The Elastic Stack use cases Use case Industry or vertical market Operational log analytics: Gain real-time operational insight, reduce Mean Ti Solution Overview Cisco UCS Integrated Infrastructure for Big Data with the Elastic Stack Cisco and Elastic deliver a powerful, scalable, and programmable IT operations and security analytics platform

More information

UCS Architecture Overview

UCS Architecture Overview BRKINI-1005 UCS Architecture Overview Max Alvarado Brenes Systems Engineer Central America Cisco Spark How Questions? Use Cisco Spark to communicate with the speaker after the session 1. Find this session

More information

Cisco UCS Unified Fabric

Cisco UCS Unified Fabric Solution Overview Unified Fabric Third Generation of Connectivity and Management for Cisco Unified Computing System 2018 Cisco and/or its affiliates. All rights reserved. This document is Cisco Public

More information

Lossless 10 Gigabit Ethernet: The Unifying Infrastructure for SAN and LAN Consolidation

Lossless 10 Gigabit Ethernet: The Unifying Infrastructure for SAN and LAN Consolidation . White Paper Lossless 10 Gigabit Ethernet: The Unifying Infrastructure for SAN and LAN Consolidation Introduction As organizations increasingly rely on IT to help enable, and even change, their business

More information

V.I.B.E. Virtual. Integrated. Blade. Environment. Harveenpal Singh. System-x PLM

V.I.B.E. Virtual. Integrated. Blade. Environment. Harveenpal Singh. System-x PLM V.I.B.E. Virtual. Integrated. Blade. Environment. Harveenpal Singh System-x PLM x86 servers are taking on more demanding roles, including high-end business critical applications x86 server segment is the

More information

Use of the Internet SCSI (iscsi) protocol

Use of the Internet SCSI (iscsi) protocol A unified networking approach to iscsi storage with Broadcom controllers By Dhiraj Sehgal, Abhijit Aswath, and Srinivas Thodati In environments based on Internet SCSI (iscsi) and 10 Gigabit Ethernet, deploying

More information

High performance and functionality

High performance and functionality IBM Storwize V7000F High-performance, highly functional, cost-effective all-flash storage Highlights Deploys all-flash performance with market-leading functionality Helps lower storage costs with data

More information

Sugon TC6600 blade server

Sugon TC6600 blade server Sugon TC6600 blade server The converged-architecture blade server The TC6600 is a new generation, multi-node and high density blade server with shared power, cooling, networking and management infrastructure

More information

Microsoft SharePoint Server 2010 on Cisco Unified Computing System

Microsoft SharePoint Server 2010 on Cisco Unified Computing System Microsoft SharePoint Server 2010 on Cisco Unified Computing System Medium Farm Solution-Performance and Scalability Study White Paper June 2011 Contents Introduction... 4 Objective... 4 SharePoint 2010

More information

The Impact of Hyper- converged Infrastructure on the IT Landscape

The Impact of Hyper- converged Infrastructure on the IT Landscape The Impact of Hyperconverged Infrastructure on the IT Landscape Focus on innovation, not IT integration BUILD Consumes valuables time and resources Go faster Invest in areas that differentiate BUY 3 Integration

More information

IT Agility Delivered: Cisco Unified Computing System

IT Agility Delivered: Cisco Unified Computing System Solution Brief IT Agility Delivered: Cisco Unified Computing System 2011 2012 Cisco and/or its affiliates. All rights reserved. This document is Cisco Public information. Page 1 IT Agility Delivered: Cisco

More information

UCS M-Series + Citrix XenApp Optimizing high density XenApp deployment at Scale

UCS M-Series + Citrix XenApp Optimizing high density XenApp deployment at Scale In Collaboration with Intel UCS M-Series + Citrix XenApp Optimizing high density XenApp deployment at Scale Aniket Patankar UCS Product Manager May 2015 Cisco UCS - Powering Applications at Every Scale

More information

Dell Fluid Data solutions. Powerful self-optimized enterprise storage. Dell Compellent Storage Center: Designed for business results

Dell Fluid Data solutions. Powerful self-optimized enterprise storage. Dell Compellent Storage Center: Designed for business results Dell Fluid Data solutions Powerful self-optimized enterprise storage Dell Compellent Storage Center: Designed for business results The Dell difference: Efficiency designed to drive down your total cost

More information

THE OPEN DATA CENTER FABRIC FOR THE CLOUD

THE OPEN DATA CENTER FABRIC FOR THE CLOUD Product overview THE OPEN DATA CENTER FABRIC FOR THE CLOUD The Open Data Center Fabric for the Cloud The Xsigo Data Center Fabric revolutionizes data center economics by creating an agile, highly efficient

More information

A Cloud WHERE PHYSICAL ARE TOGETHER AT LAST

A Cloud WHERE PHYSICAL ARE TOGETHER AT LAST A Cloud WHERE PHYSICAL AND VIRTUAL STORAGE ARE TOGETHER AT LAST Not all Cloud solutions are the same so how do you know which one is right for your business now and in the future? NTT Communications ICT

More information

The PowerEdge M830 blade server

The PowerEdge M830 blade server The PowerEdge M830 blade server No-compromise compute and memory scalability for data centers and remote or branch offices Now you can boost application performance, consolidation and time-to-value in

More information

Dell PowerVault MD Family. Modular storage. The Dell PowerVault MD storage family

Dell PowerVault MD Family. Modular storage. The Dell PowerVault MD storage family Dell PowerVault MD Family Modular storage The Dell PowerVault MD storage family Dell PowerVault MD Family The affordable choice The Dell PowerVault MD family is an affordable choice for reliable storage.

More information

DataON and Intel Select Hyper-Converged Infrastructure (HCI) Maximizes IOPS Performance for Windows Server Software-Defined Storage

DataON and Intel Select Hyper-Converged Infrastructure (HCI) Maximizes IOPS Performance for Windows Server Software-Defined Storage Solution Brief DataON and Intel Select Hyper-Converged Infrastructure (HCI) Maximizes IOPS Performance for Windows Server Software-Defined Storage DataON Next-Generation All NVMe SSD Flash-Based Hyper-Converged

More information

Cisco UCS E-Series Servers

Cisco UCS E-Series Servers Data Sheet Cisco UCS E-Series Servers Product Overview Cisco UCS E-Series Servers, part of the Cisco Unified Computing System (Cisco UCS), are next-generation power-optimized general-purpose x86 64-bit

More information

HP Converged Network Switches and Adapters. HP StorageWorks 2408 Converged Network Switch

HP Converged Network Switches and Adapters. HP StorageWorks 2408 Converged Network Switch HP Converged Network Switches and Adapters Family Data sheet Realise the advantages of Converged Infrastructure with HP Converged Network Switches and Adapters Data centres are increasingly being filled

More information

Reference Architecture Microsoft Exchange 2013 on Dell PowerEdge R730xd 2500 Mailboxes

Reference Architecture Microsoft Exchange 2013 on Dell PowerEdge R730xd 2500 Mailboxes Reference Architecture Microsoft Exchange 2013 on Dell PowerEdge R730xd 2500 Mailboxes A Dell Reference Architecture Dell Engineering August 2015 A Dell Reference Architecture Revisions Date September

More information

Scality RING on Cisco UCS: Store File, Object, and OpenStack Data at Scale

Scality RING on Cisco UCS: Store File, Object, and OpenStack Data at Scale Scality RING on Cisco UCS: Store File, Object, and OpenStack Data at Scale What You Will Learn Cisco and Scality provide a joint solution for storing and protecting file, object, and OpenStack data at

More information

SMART SERVER AND STORAGE SOLUTIONS FOR GROWING BUSINESSES

SMART SERVER AND STORAGE SOLUTIONS FOR GROWING BUSINESSES Jan - Mar 2009 SMART SERVER AND STORAGE SOLUTIONS FOR GROWING BUSINESSES For more details visit: http://www-07preview.ibm.com/smb/in/expressadvantage/xoffers/index.html IBM Servers & Storage Configured

More information

Dell PowerVault MD Family. Modular storage. The Dell PowerVault MD storage family

Dell PowerVault MD Family. Modular storage. The Dell PowerVault MD storage family Dell MD Family Modular storage The Dell MD storage family Dell MD Family Simplifying IT The Dell MD Family simplifies IT by optimizing your data storage architecture and ensuring the availability of your

More information

Cisco Unified Computing System for SAP Landscapes

Cisco Unified Computing System for SAP Landscapes Cisco Unified Computing System for SAP Landscapes Improve IT Responsiveness and Agility for Rapidly Changing Business Demands by Using the Cisco Unified Computing System White Paper November 2010 Introduction

More information

Cisco UCS C220 M5 Rack Server Disk I/O Characterization

Cisco UCS C220 M5 Rack Server Disk I/O Characterization Cisco UCS C220 M5 Rack Server Disk I/O Characterization June 2018 2018 Cisco and/or its affiliates. All rights reserved. This document is Cisco Public Information. Page 1 of 27 Executive summary This document

More information