Cloud Scale Architectures


Next Generation Computing Architectures for Cloud Scale Applications. Steve McQuerry, CCIE #6108, Engineering Manager, Technical Marketing; James Leach, Principal Engineer, Product Management. BRKCOM-2028

Agenda: Introduction; Cloud Scale Architectures; System Link Technology; Mapping Application Architecture to Infrastructure; Scaling and Maintaining the Infrastructure; Incorporating Storage into the Infrastructure; Summary

Cloud Scale Architectures 4

Cloud-Scale Inverts Computing Architecture: Core enterprise workloads (SCM, ERP/financials, legacy CRM, email) run many applications on a single server through a hypervisor; cloud-scale workloads (online content, gaming, mobile, IoT, e-commerce) run a single application across many servers. 5

Cloud-Scale Application Components: A single cloud-scale application (online content, gaming, mobile, IoT, e-commerce) is decomposed into web server, app server, DB server, and content server tiers, each scaled out across many servers. 6

Sample Cloud Scale Application: Cloud-scale applications distribute the workload across multiple component nodes, and these nodes have varying system requirements. Distributed components report into manager nodes; manager nodes track availability, farm out workloads, and may receive data from worker nodes. Worker nodes provide the bulk of a cloud-scale application's capacity. (Diagram: Consumer, Application Data, Analytics Data, Object Data Store.) 7

Compute Infrastructure Requirements:
Manager Node: dual socket, 8-16 cores, 2.5 GHz or better; 128-512 GB memory; 1/10 Gbps Ethernet; 300 GB-4 TB HDD (RAID); redundancy at HW and app level.
Web Node: single socket, 2-4 cores, 1.0-2.0 GHz; 8-16 GB memory; 1 Gbps Ethernet; 20-100 GB HDD; redundancy at app level.
Content Node: single socket, 2-4 cores, 2.0-3.7 GHz; 16-32 GB memory; 1/10 Gbps Ethernet; 50-200 GB HDD; redundancy at app level.
App Node: single or dual socket, 4-18 cores, 2.0-2.5 GHz; 16-128 GB memory; 1 Gbps Ethernet; 50-100 GB HDD; redundancy handled at app level.
DB Node: single or dual socket, 4-24 cores, 2.0-3.0 GHz; 32-256 GB memory; 1 Gbps Ethernet; 100-250 GB HDD; redundancy handled at app level. 8

Storage Infrastructure Requirements:
Object Store: 1-500 TB storage; SSD options; JBOD/RAID capabilities; 1-40 Gbps network bandwidth; FC/FCoE initiator capabilities; dual socket, 24-48 cores, 2.0-2.5 GHz; redundancy at HW level.
Application Data: high-performance I/O; application acceleration; data optimization; various workloads; high availability; scalability; FC or iSCSI connectivity.
Analytics Data: typically a combination of HDFS, analytics software, and database software running on various rack servers; see the Big Data reference architectures for more information.
(Diagram: Object Data Store, Analytics Data, Application Data.) 9

Cisco System Link Technology: Extending the UCS fabric inside the server. (Diagram: compute and shared infrastructure.) 10

UCS M-Series Composable Server: Shared local resources give improved utilization and resource amortization over smaller nodes, and are based on Cisco System Link Technology, in which the third-generation VIC extends the UCS fabric over PCIe to within the server. UCS M-Series compute cartridges offer a modular design with improved subsystem lifecycle management and the ability to scale individual subsystems independently, plus lean componentry for improved compute density and cost and power optimization. 11

M-Series Design Considerations: UCS M-Series was designed to complement the compute infrastructure requirements in the data center. The goal of the M-Series is to offer smaller compute nodes to meet the needs of scale-out applications, while taking advantage of the management infrastructure and converged innovation of UCS. By disaggregating the server components, UCS M-Series helps provide a component lifecycle management strategy as opposed to a server-by-server strategy. UCS M-Series provides a platform for flexible and rapid deployment of compute, storage, and networking resources. 12

2RU UCS M-Series Specifics (front and rear views). Aggregate capacity per chassis: 16 servers, 64 cores, 512 GB memory.
M-142 Compute Cartridge: 2 server nodes.
CPU: Intel Xeon E3-1275L v3, E3-1240L v3, E3-1220L v3.
Memory: 8 GB UDIMMs, 32 GB max per cartridge.
Disks: 2 or 4 SSDs; SATA (240 GB, 480 GB) or SAS (400 GB, 800 GB, 1.6 TB).
RAID controller: Cisco 12 Gb Modular RAID Controller with 2 GB flash-backed write cache (FBWC).
Network: 2 x 40 Gbps. Power: 2 x 1400 W. 13

UCS M-Series Overview. Front view: 2RU chassis with 8 cartridges and 4 x SSD, scaling from 480 GB SATA to 6.4 TB SAS. M142 compute cartridge: Intel Xeon E3 (4 cores) and 32 GB memory per server. Rear view: 2 x 1400 W power supplies and 2 x 40 Gb uplinks. Typical rack deployment with UCS 62xx Fabric Interconnects (10 Gb). Rack capacity: 20 chassis, 320 servers, 1,280 cores, 10 TB memory, 128 TB storage. 14
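To make the aggregate numbers on this slide concrete, here is a small arithmetic sketch in plain Python. The chassis count and per-node figures come from the slide; the 6.4 TB per chassis assumes four 1.6 TB SAS SSDs.

```python
# Rough rack-capacity arithmetic for a UCS M-Series deployment,
# using the per-chassis figures from the slide (illustrative only).
CHASSIS_PER_RACK = 20          # typical rack deployment shown on the slide
SERVERS_PER_CHASSIS = 16       # 8 M142 cartridges x 2 servers each
CORES_PER_SERVER = 4           # Intel Xeon E3, 4 cores
MEMORY_GB_PER_SERVER = 32      # 32 GB per server node
STORAGE_TB_PER_CHASSIS = 6.4   # assumes 4 x 1.6 TB SAS SSDs per chassis

servers = CHASSIS_PER_RACK * SERVERS_PER_CHASSIS
cores = servers * CORES_PER_SERVER
memory_tb = servers * MEMORY_GB_PER_SERVER / 1024
storage_tb = CHASSIS_PER_RACK * STORAGE_TB_PER_CHASSIS

print(f"{servers} servers, {cores} cores, "
      f"{memory_tb:.1f} TB memory, {storage_tb:.0f} TB storage")
# -> 320 servers, 1280 cores, 10.0 TB memory, 128 TB storage
```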

UCS M-Series Architecture (Cisco System Link Technology): shared power and cooling, 2 x 40 Gb uplinks, and 4 x SSD drives with controller (shared local disk and I/O), plus a midplane and 16 processors with RAM, yield 16 independent servers in a 2U rack-mount chassis. Physical machine? Virtual machine? Composite machine.

UCS M4308 Chassis:
2U rack-mount chassis (3.5 in. H x 30.5 in. L x 17.5 in. W)
3rd-generation UCS VIC ASIC (System Link Technology)
1 Chassis Management Controller (CMC) that manages chassis resources
8 cartridge slots (x4 PCIe Gen3 lanes per slot); slots and ASIC adaptable for future use
Four 2.5 in. SFF drive bays (SSD only)
1 internal x8 PCIe Gen3 connection to the Cisco 12 Gb SAS RAID card
1 internal x8 PCIe Gen3 slot, half height, half width (future use)
6 hot-swappable fans, accessed by removing the top rear cover
2 external Ethernet management ports and 1 external serial console port (connected to the CMC for out-of-band troubleshooting)
2 AC power module bays (1+1 redundant, 220V only)
2 QSFP 40 Gbps ports (data and management, server and chassis) 16

UCS M142 Cartridge: The UCS M142 cartridge contains 2 distinct E3 servers. Each server is independently manageable and has its own memory, CPU, and management controller (CIMC). The cartridge connects to the midplane for access to power, network, storage, and management. x2 PCIe Gen3 lanes connect each server (host PCIe interface) to the System Link Technology for access to storage and network, providing ~15.76 Gbps of I/O bandwidth per server. 17
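The per-server bandwidth figure follows directly from the PCIe Gen3 link parameters. Below is a worked calculation as a plain Python sketch, assuming the standard 8 GT/s signaling rate and 128b/130b encoding for Gen3.

```python
# PCIe Gen3 raw bandwidth per lane: 8 GT/s with 128b/130b encoding.
GEN3_GTS = 8.0                     # gigatransfers per second per lane
ENCODING_EFFICIENCY = 128 / 130    # 128b/130b line encoding
LANES_PER_SERVER = 2               # x2 Gen3 lanes per M142 server node

per_lane_gbps = GEN3_GTS * ENCODING_EFFICIENCY
per_server_gbps = per_lane_gbps * LANES_PER_SERVER
print(f"~{per_server_gbps:.2f} Gbps of raw I/O bandwidth per server")
# -> ~15.75 Gbps (the slide quotes ~15.76 Gbps), before protocol overhead
```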

Cisco System Link Technology, UCS M-Series Architecture: the application sees policy-based profiles applied to resources. Power and cooling resources, network resources (2 x 40 Gb uplinks), storage resources (4 x SSD drives with controller), and compute resources are tied together by the midplane and UCS management in a 2U rack-mount chassis. 18

System Link Technology 19

System Link Technology Overview: System Link Technology is built on proven Cisco Virtual Interface Card (VIC) technologies. VIC technologies use standard PCIe architecture to present an endpoint device to the compute resources, and VIC technology is a key component of the UCS converged infrastructure. In the M-Series platform this technology has been extended to provide access to PCIe resources local to the chassis, such as storage. (Diagram: operating systems with eth0/eth1 and eth0-eth3 interfaces.) 20

System Link Technology Overview: Traditional PCIe cards require additional PCIe slots for additional resources. System Link Technology instead provides a mechanism to connect into the PCIe architecture of the host and presents unique PCIe devices to each server; each device created on the ASIC is a unique PCIe physical function. (Diagram: PCIe buses with x8 Gen3 PCIe slots versus the System Link ASIC with x32 PCIe Gen3 lanes distributed across the compute resources.) 21
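To picture "each device created on the ASIC is a unique PCIe physical function", here is a minimal, purely illustrative Python model: one shared ASIC hands out per-server endpoints up to its device limit. The class and field names are invented for illustration, not a Cisco API.

```python
# Illustrative model: one System Link ASIC creates unique PCIe physical
# functions (vNICs, sNICs) for many servers, up to its device limit.
# Names are hypothetical; this is not a Cisco API.
class SystemLinkAsic:
    def __init__(self, max_devices=1024):
        self.max_devices = max_devices
        self.devices = []          # list of (server_id, kind) tuples

    def create_pf(self, server_id, kind):
        """Create a PCIe physical function of a given kind for a server."""
        if len(self.devices) >= self.max_devices:
            raise RuntimeError("ASIC device limit reached")
        pf_id = len(self.devices)  # each PF is a unique endpoint
        self.devices.append((server_id, kind))
        return pf_id

asic = SystemLinkAsic()
for server in range(16):                       # 16 servers in the chassis
    asic.create_pf(server, "snic")             # local storage endpoint
    asic.create_pf(server, "vnic-eth0")        # network endpoints
    asic.create_pf(server, "vnic-eth1")
print(len(asic.devices), "physical functions on one shared ASIC")  # -> 48
```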

Cisco Innovation - Converged Network Adapter (M71KR-Q/E): The initial converged network adapter from Cisco combined two physical PCIe devices on one card: an Intel/Broadcom Ethernet adapter and a QLogic/Emulex HBA. The ASIC provided a simulated PCIe switch so each device could be connected to the OS using native drivers and PCIe communications. The ASIC also provided the mechanism to transport FC traffic in an FCoE frame to the Fabric Interconnect. The number of PCIe devices is limited to the hardware installed behind the ASIC. 22

Cisco Innovation - The Cisco VIC (M81KR, 1280, and 1380): The Cisco VIC was an extension of the first converged network adapters but created two new PCIe device types: the vNIC and the vHBA. Each device is a PCIe physical function and is presented to the operating system as a unique PCIe endpoint. Drivers were created for each device: the enic driver for the vNIC and the fnic driver for the vHBA. The ASIC allows for the creation of many devices (e.g., 128, 256, or 1024), limited only by the ASIC capabilities. 23

Cisco VIC Technology vs. SR-IOV: A point that is often confused is the relationship between SR-IOV and the operation of the VIC. SR-IOV allows for the creation of virtual functions on a physical PCIe card. The main difference is that a virtual function does not allow direct configuration and must use the configuration of the primary physical card it is created on. In addition, SR-IOV devices require that the operating system be SR-IOV aware to communicate with the virtual endpoints. VIC technology differs because each vNIC or vHBA is a PCIe physical function with full, independent configuration options and no OS dependencies. It is important to note that VIC technology is SR-IOV capable: for operating systems that can use or require SR-IOV support, the capability exists and is supported on the card. 24

System Link Technology - 3rd Generation Innovation: System Link Technology provides the same capabilities as a VIC to configure PCIe devices for use by the server. The difference with System Link is that it is an ASIC within the chassis and not a PCIe card. The ASIC is core to the M-Series platform and provides access to I/O resources, connecting devices to the compute resources through the system midplane. System Link provides the ability to access shared network and storage resources. (Diagram: SCSI commands to a virtual drive.) 25

System Link Technology - 3rd Generation Innovation: The same ASIC is used in the 3rd-generation VIC. M-Series takes advantage of additional features, which include a Gen3 PCIe root complex for connectivity to chassis PCIe cards (e.g., storage), 32 Gen3 PCIe lanes connected to the cartridge CPUs, 2 x 40 Gbps QSFP uplinks, and scale to 1,024 PCIe devices created on the ASIC (e.g., vNICs). 26

Cisco System Link Technology - Storage Resources (2U rack-mount chassis). 27

Introduction of the sNIC: The SCSI NIC (sNIC) is the PCIe device that provides access to the storage components of the UCS M-Series chassis. The sNIC presents to the operating system as a PCIe-connected local storage controller, and communication between the operating system and the drive uses standard SCSI commands. (Diagram: SCSI commands to a virtual drive, LUN0.) 28

Chassis Storage Components: Chassis storage consists of the Cisco Modular 12 Gb SAS RAID controller with 2 GB flash, the drive midplane, and SSD drives. The controller supports RAID 0, 1, 5, 6, 10, 50, and 60, with support for 6 Gb and 12 Gb SAS or SATA drives. There is no support for spinning media; the chassis is SSD only (power, heat, performance). All drives are hot swappable, and RAID rebuild functionality is provided. 29

Storage Controller - Drive Groups: A RAID configuration groups drives together to form a RAID volume. Drive groups define the operation of the physical drives (RAID level, write-back, stripe size) and can be as small as a single drive (RAID 0) or up to 4 drives (RAID 0, 1, 5, 10). The M-Series chassis supports multiple drive groups, and drive groups are configured through new policy settings in UCS Manager. (Diagram: RAID 0 drive groups 1 through 4.) 30

Storage Controller - Virtual Drives: After a drive group is created on the controller, a virtual drive is created on it and presented as a LUN (drive) to the operating system. It is common to use the entire amount of space available in the drive group; if you do not, more virtual drives can be created on that drive group and presented as LUNs. For UCS M-Series, virtual drives are created on a drive group and assigned to a specific server within the chassis as local storage. (Diagram: RAID 1 and RAID 0 drive groups of 1.6 TB each, carved into 100 GB and 200 GB virtual drives.) 31
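A minimal sketch of the drive-group/virtual-drive relationship described here, in plain Python: a drive group has a fixed capacity, and virtual drives are carved from it until the space is used. The helper names are invented for illustration; they are not UCS Manager objects.

```python
# Illustrative sketch: carving virtual drives (LUNs) out of a drive group.
# Not a UCS Manager API; names are invented for illustration.
class DriveGroup:
    def __init__(self, name, raid_level, capacity_gb):
        self.name = name
        self.raid_level = raid_level
        self.capacity_gb = capacity_gb
        self.virtual_drives = {}   # virtual drive name -> size in GB

    @property
    def free_gb(self):
        return self.capacity_gb - sum(self.virtual_drives.values())

    def create_virtual_drive(self, vd_name, size_gb):
        if size_gb > self.free_gb:
            raise ValueError(f"{self.name}: only {self.free_gb} GB free")
        self.virtual_drives[vd_name] = size_gb

dg2 = DriveGroup("Drive Group 2", raid_level=0, capacity_gb=1600)
dg2.create_virtual_drive("vd5", 200)     # assigned later to one server
dg2.create_virtual_drive("vd29", 100)    # assigned later to another server
print(dg2.free_gb, "GB still available for more virtual drives")  # -> 1300
```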

Storage Capabilities: The Cisco modular storage controller supports 16 drive groups, 64 virtual drives across all drive groups at initial ship, and up to 64 virtual drives in a single drive group. UCS M-Series supports 2 virtual drives per server (service profile) and service profile mobility within the chassis. Maximum flexibility: virtual drives can be different sizes within a drive group, all virtual drives in a drive group share the same read/write and stripe-size settings, and drive groups can have different numbers of virtual drives. Drive groups configured as fault-tolerant RAID groups support UCSM reporting of degraded virtual drives and failed drives, hot-swap capabilities from the back of the chassis, and automatic RAID rebuild, with UCS Manager providing the status of the rebuild. 32

Mapping Disk Resources to M-Series Servers: The server sees the storage controller and the virtual drive(s) as local resources (e.g., /dev/sda and /dev/sdb, or C:\ and D:\). Through the use of policies and service profiles, UCS Manager creates the sNIC in the System Link Technology as a PCIe endpoint for the server and maps the virtual drive from the RAID disk group to the server through that configured sNIC. Each virtual drive configured in the system is applied to only one server and appears as a local drive to that server. The operating system and sNIC send SCSI commands directly to the virtual drive through the PCIe architecture in the System Link Technology. The end result is SCSI drives attached to the servers as LOCAL storage. (Diagram: vd5 and vd29 carved from the RAID 1 and RAID 0 drive groups.) 33
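Continuing the illustrative model above, the sketch below captures the mapping rules stated on this and the preceding slides: every virtual drive is mapped to exactly one server's sNIC, and a service profile can consume at most two virtual drives. Again, the names are hypothetical, not UCS Manager objects.

```python
# Illustrative mapping of virtual drives to service profiles through an sNIC.
# Enforces the rules from the slides: one server per virtual drive,
# at most two virtual drives per service profile.
MAX_VDS_PER_PROFILE = 2

vd_to_profile = {}                 # virtual drive name -> service profile
profile_vds = {}                   # service profile -> list of vd names

def map_virtual_drive(vd_name, profile):
    if vd_name in vd_to_profile:
        raise ValueError(f"{vd_name} is already mapped to one server")
    owned = profile_vds.setdefault(profile, [])
    if len(owned) >= MAX_VDS_PER_PROFILE:
        raise ValueError(f"{profile} already has {MAX_VDS_PER_PROFILE} LUNs")
    vd_to_profile[vd_name] = profile
    owned.append(vd_name)          # appears to the OS as a local SCSI disk

map_virtual_drive("vd5", "app-node-01")    # e.g. boot LUN
map_virtual_drive("vd29", "app-node-01")   # e.g. data LUN
print(profile_vds)   # {'app-node-01': ['vd5', 'vd29']}
```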

SCSI Packet Flow: A SCSI read issued against LUN0 (100 GB, /dev/sda) is carried by the sNIC through the System Link ASIC to the mapped virtual drive. (Diagram mapping: Cartridge 1 Server 1 -> vd25, Cartridge 1 Server 2 -> vd16, Cartridge 2 Server 1 -> vd9, Cartridge 2 Server 2 -> vd22, Cartridge 3 Server 1 -> vd12, Cartridge 3 Server 2 -> vd29, Cartridge 4 Server 1 -> vd4; vd29 is a 100 GB virtual drive on a 1.6 TB RAID 0 drive group.) 34

System Link vs. MR-IOV: MR-IOV is a PCIe specification that allows multiple end-host CPUs to access a PCIe endpoint connected through a multi-root aware (MRA) PCIe switch. The same endpoint is shared between the hosts and must be able to identify and communicate with each host directly. Protocol changes are introduced to PCIe to support MR-IOV, and operating systems must understand and support these protocol changes to be MR-IOV aware. (Diagram: hosts connected through an MRA switch to a shared MR-IOV endpoint, versus the M-Series model where the sNIC is the PCIe endpoint for the CPU and the storage controller is the PCIe endpoint for the ASIC.) 35

System Link vs. MR-IOV: System Link does NOT require MR-IOV to support multi-host access to storage devices. System Link Technology provides a single root port that connects the ASIC to the storage subsystem. The sNIC and the SCSI commands from the host are translated by System Link directly to the controller, so from the controller's perspective it is communicating with only one host (the System Link Technology ASIC). For the M-Series chassis, the PCIe endpoint is the sNIC adapter, and the storage device is the virtual drive provided through the mapping in the System Link Technology ASIC. (Diagram: the sNIC is the PCIe endpoint for the CPU; the storage controller is the PCIe endpoint for the ASIC.) 36

Cisco System Link Technology - Network Resources (2U rack-mount chassis). 37

Mapping Network Resources to the M-Series Servers: The System Link Technology provides the network interface connectivity for all of the servers. Virtual NICs (vNICs) are created for each server and are mapped to the appropriate fabric through the service profile in UCS Manager; M-142 servers can have up to 4 vNICs. The operating system sees each vNIC as a 40 Gbps Ethernet interface, but interfaces can be rate limited and provide hardware QoS marking, and they are 802.1Q capable. Fabric failover is supported, so in the event of a failure traffic is automatically moved to the second fabric. (Diagram: host PCIe interfaces with eth0/eth1 connected to Fabric Interconnects A and B.) 38
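As an illustration of how the per-server network settings on this slide might be captured as data (up to four vNICs per M-142 server, fabric assignment, optional rate limiting, and fabric failover), here is a small hedged Python sketch. The field names are invented for illustration; they are not UCS Manager attributes.

```python
# Illustrative data model for a server's vNIC policy on UCS M-Series.
# Field names are invented for illustration; not UCS Manager attributes.
from dataclasses import dataclass

MAX_VNICS_PER_M142_SERVER = 4

@dataclass
class VnicPolicy:
    name: str
    fabric: str                   # "A" or "B"
    vlans: list                   # 802.1Q VLAN IDs carried on this vNIC
    rate_limit_mbps: int = None   # None = full 40 Gbps uplink access
    fabric_failover: bool = True  # traffic moves to the other fabric on failure

vnics = [
    VnicPolicy("eth0", fabric="A", vlans=[10], rate_limit_mbps=1000),
    VnicPolicy("eth1", fabric="B", vlans=[20, 30]),
]
assert len(vnics) <= MAX_VNICS_PER_M142_SERVER
for v in vnics:
    print(f"{v.name}: fabric {v.fabric}, VLANs {v.vlans}, "
          f"failover={'on' if v.fabric_failover else 'off'}")
```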

Networking Capabilities: The System Link ASIC supports 1,024 virtual devices; scale for a particular M-Series chassis will depend on the number of uplinks to the Fabric Interconnect. All network forwarding is provided by the Fabric Interconnects; there is no forwarding local to the chassis. At initial ship, System Link on M-Series supports Ethernet traffic only. The M-Series devices can connect to external storage volumes over NFS, CIFS, HTTPS, or iSCSI; FCoE connectivity will be supported in a future release. iSCSI boot is supported; see the UCS Interoperability Matrix for details. 39

Mapping application architecture to infrastructure 40

Sample Cloud Scale Application: Cloud-scale applications distribute the workload across multiple component nodes, and these nodes have varying system requirements. Distributed components report into manager nodes; manager nodes track availability, farm out workloads, and may receive data from worker nodes. Worker nodes provide the bulk of a cloud-scale application's capacity. (Diagram: Consumer, Application Data, Analytics Data, Object Data Store.) 41

Compute Infrastructure Requirements:
Manager Node: dual socket, 8-16 cores, 2.5 GHz or better; 128-512 GB memory; 1/10 Gbps Ethernet; 300 GB-4 TB HDD (RAID); redundancy at HW and app level.
Web Node: single socket, 2-4 cores, 1.0-2.0 GHz; 8-16 GB memory; 1 Gbps Ethernet; 20-100 GB HDD; redundancy at app level.
Content Node: single socket, 2-4 cores, 2.0-3.7 GHz; 16-32 GB memory; 1/10 Gbps Ethernet; 50-200 GB HDD; redundancy at app level.
App Node: single or dual socket, 4-18 cores, 2.0-2.5 GHz; 16-128 GB memory; 1 Gbps Ethernet; 50-100 GB HDD; redundancy handled at app level.
DB Node: single or dual socket, 4-24 cores, 2.0-3.0 GHz; 32-256 GB memory; 1 Gbps Ethernet; 100-250 GB HDD; redundancy handled at app level. 42

Storage Infrastructure Requirements:
Object Store: 1-500 TB storage; SSD options; JBOD/RAID capabilities; 1-40 Gbps network bandwidth; FC/FCoE initiator capabilities; dual socket, 24-48 cores, 2.0-2.5 GHz; redundancy at HW level.
Application Data: high-performance I/O; application acceleration; data optimization; various workloads; high availability; scalability; FC or iSCSI connectivity.
Analytics Data: typically a combination of HDFS, analytics software, and database software running on various rack servers; see the Big Data reference architectures for more information.
(Diagram: Object Data Store, Analytics Data, Application Data.) 43

Cisco System Link Technology - Compute Resources (2U rack-mount chassis). 44

Application Profiles: Service profiles allow you to build an application profile for each type of node within the application stack. The service profile defines the node type, network connectivity, and storage configuration for each node, and these can easily be associated with compute resources across the domain. (Diagram: service profile templates map to application profiles and combine network, storage, and server policies.) 45

Application Profiles: Cloud applications are built to withstand the loss of multiple nodes, but the individual nodes should still be striped across chassis. Striping the nodes also allows for better distribution of the network and storage resources. (Diagram: service profile templates map to application profiles and combine network, storage, and server policies.) 46

Mapping Applications to Shared Resources. Shared resources example: 2 x 800 GB SAS SSD, 2 x 1.6 TB SAS SSD, 2 x 40 Gbps network connections.
Web Node: 50 GB drive (RAID 1); 2 x 1 Gbps Ethernet.
Content Node: 50 GB drive (RAID 1) and 200 GB drive; 1 x 1 Gbps and 1 x 10 Gbps Ethernet.
App Node: 50 GB drive (RAID 1) and 200 GB drive; 2 x 1 Gbps and 1 x 10 Gbps Ethernet.
DB Node: 50 GB drive (RAID 1) and 250 GB drive; 2 x 1 Gbps and 1 x 10 Gbps Ethernet. 47

Storage Resource Configuration: Disk groups are used by storage profile configurations to specify how the resources are consumed by the server nodes. Create a RAID 1 disk group with the 2 x 800 GB drives to host the 50 GB RAID 1 volume required by each server node. Create 2 separate RAID 0 disk groups to accommodate the 200 GB and 250 GB drive requirements of the other server nodes, and use specific drive numbers for the RAID 0 groups so that you can control the mapping of specific applications to specific drives. (Diagram: Disk Group 1, 800 GB RAID 1, provides 16 x 50 GB virtual drives; Disk Groups 2 and 3, each a 1.6 TB single-drive RAID 0, each provide 4 x 200 GB and 2 x 250 GB virtual drives.) 48
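A quick arithmetic check of the disk-group plan on this slide, as a plain Python sketch: the 800 GB RAID 1 group yields sixteen 50 GB boot LUNs (one per server in the chassis), and each 1.6 TB RAID 0 group can hold four 200 GB and two 250 GB data LUNs with room to spare.

```python
# Arithmetic check of the disk-group plan from the slide (illustrative).
servers_in_chassis = 16

# Disk Group 1: 2 x 800 GB SSD in RAID 1 -> 800 GB usable.
dg1_usable_gb = 800
boot_luns = dg1_usable_gb // 50
print(f"Disk Group 1: {boot_luns} x 50 GB boot LUNs "
      f"for {servers_in_chassis} servers")           # -> 16 x 50 GB

# Disk Groups 2 and 3: each a single 1.6 TB SSD in RAID 0.
data_luns_gb = [200, 200, 200, 200, 250, 250]        # per the slide
used_gb = sum(data_luns_gb)
print(f"Disk Group 2/3: {used_gb} GB of 1600 GB used, "
      f"{1600 - used_gb} GB free in each group")      # -> 1300 used, 300 free
```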

Mapping Disk Resources to M-Series Servers: Create a storage profile for each application type that defines the number of drives, the size of the drives, and which disk group to use. The storage profile is then consumed by the service profile to specify which resources are available for the compute node. These resources belong to the service profile and not to the physical compute node. (Diagram: an app-node service profile with network, storage, and server policies maps vd5 (50 GB) and vd29 (200 GB) from the RAID 1 and RAID 0 disk groups to the host as /dev/sda and /dev/sdb, or C:\ and D:\.) 49

Demo Creating Storage Policies 50

Application Networking Needs: A cloud-scale application will typically have many network segments and requirements. Within the construct of the service profile we can connect the server node to the required networks, and networks may need to be added or upgraded over the life of the application. (Diagram: 1 Gbps app cluster, 1 Gbps DMZ, 1 Gbps internal, 10 Gbps content, and 10 Gbps iSCSI/FCoE segments connecting consumers, application data, analytics data, and the object data store.) 51

Mapping Network Resources to the M-Series Servers: Interfaces are created within the service profile depending on the needs of the application and are mapped to a specific fabric or configured for redundancy. VLANs are mapped to an interface based on the network connectivity needs. Each interface has full access to the 40 Gbps uplink, but QoS policies and bandwidth throttling can be configured on a per-adapter basis. Fabric failover provides redundancy without the need for configuration within the operating system. Interfaces and networks can be added without entering the data center to change any cabling or add adapters. (Diagram: an app-node service profile with network, storage, and server policies; eth0/eth1 connected to Fabric Interconnects A and B.) 52

Demo Creating Network Policies 53

Application Profiles: The service profile combines network, storage, and server policies to map to the needs of a specific application. The profile defines the server, and it is applied to a compute resource. Once the service profile is defined, other service profiles can easily be created through cloning or templates, and through templates changes can be made to multiple compute nodes at one time. (Diagram: service profiles map to application profiles and combine network, storage, and server policies.) 54

Demo Creating Profiles 55

Scaling and Maintaining the infrastructure 56

The "Scale" in Cloud-Scale 57

Scale-Out Automation: UCS Manager allows servers to be qualified into resource pools depending on available resources, and service profile templates can be assigned to consume resources from those pools. When new chassis are added to a system, the resources can immediately be added to pools and have service profiles assigned. Expansion requires no additional configuration or cabling other than installing the chassis in the rack, connecting it to the fabric, and applying power. (Diagram: web, app, DB, and content server templates, each with network, storage, and server policies, draw from pools qualified at 1.1 GHz + 8 GB, 2.0 GHz + 32 GB, 2.7 GHz + 64 GB, and 2.7 GHz + 16 GB respectively.) 58
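To illustrate the pool-qualification idea on this slide, here is a small hedged Python sketch. The thresholds are taken from the slide's diagram; the matching logic and names are my own, not UCS Manager's server-pool qualification policy engine.

```python
# Illustrative pool qualification: place discovered servers into resource
# pools by CPU speed and memory. Thresholds follow the slide's diagram;
# the logic is a sketch, not UCS Manager's qualification policies.
POOLS = {                          # pool name -> (min GHz, min GB memory)
    "db-server-pool":      (2.7, 64),
    "content-server-pool": (2.7, 16),
    "app-server-pool":     (2.0, 32),
    "web-server-pool":     (1.1, 8),
}

def qualify(server):
    """Return the first pool (in priority order) the server qualifies for."""
    for pool, (min_ghz, min_gb) in POOLS.items():
        if server["ghz"] >= min_ghz and server["memory_gb"] >= min_gb:
            return pool
    return None

discovered = [
    {"name": "chassis1/cartridge1/server1", "ghz": 2.7, "memory_gb": 32},
    {"name": "chassis1/cartridge2/server1", "ghz": 1.2, "memory_gb": 8},
]
for s in discovered:
    print(s["name"], "->", qualify(s))
# chassis1/cartridge1/server1 -> content-server-pool
# chassis1/cartridge2/server1 -> web-server-pool
```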

Maintenance and Elasticity: Storage and data are specific to the chassis and not to the server node. If there is a failure of a critical component, you can move the application to a new server node if one is available. It is also possible to repurpose a server node during peak times. 59

Service Profile Mobility: When a virtual drive (local LUN) is created for a service profile, it is physically located on the drives of the chassis where the profile was first applied. Service profiles can be moved to any system in the domain, BUT the local LUN and its data remain on the chassis where the profile was first associated. Service profile mobility between chassis will NOT move the data LUN to the new chassis. (Diagram: UCS service profile template with unified device management and network, storage, and server policies.) 60

Component Lifecycle Management: The server lifecycle is separate from storage, power, and networking. You can upgrade memory, processors, and cartridges without changing network cabling or rebuilding/recovering drive data. As new cartridges are released they can be installed in existing chassis, providing a longer lifecycle for the platform, and as the upstream network is upgraded from 10 Gbps to 40 Gbps the chassis platform remains the same. Drive failures for RAID-protected volumes do not require downtime on the server node for repair; drive and controller lifecycles are separated from the individual server nodes. If a chassis replacement were required, the data on the drives would be recoverable in the replacement chassis. 61

Incorporating Storage into the infrastructure 62

Object Store: Within cloud-scale applications, object-based network file systems are becoming an important part of the architecture. The Cisco C3160 dense storage server can be loaded with a cloud-scale file system (e.g., Ceph, Swift, Gluster) to provide this component of the architecture, and it can be added to a UCS infrastructure through an appliance port on the Fabric Interconnect. Longer term, the storage server platform will become integrated into UCS Manager. 63

Performance Application Storage: Many applications are starting to take advantage of high-speed application storage. Third-party network-attached flash storage devices can be added to the UCS M-Series infrastructure through an appliance port. 64

Demo Inserting a C3160 into the architecture 65

Summary 66

Cisco Modular Server Vision: Cisco System Link Technology enables next-generation UCS architectures, tying together compute, network/storage access, and UCS management. 2009: disrupted the industry with a platform-level control plane. 2014 and beyond: disrupting ourselves and the industry with the first server-subsystem control plane. 67

I/O Devices as Composable Infrastructure Elements: Blade and rack server selection pivots mainly on form-factor-driven options, and I/O devices and storage typically dictate compute density. Disaggregation beyond the chassis frees the scale, lifecycle, and availability of subsystems. 68

Cisco System Link: Engine for Innovation: A silicon-based innovation platform combining the capabilities of the Cisco VIC with subsystem disaggregation; the heart of M-Series and C3260. The next generation is further specialized and optimized for edge-scale and core data center platforms and offers offload capabilities for other network services (encryption, firewall, etc.). 69

Summary: The UCS M-Series platform provides a dense scale-out platform for cloud-scale applications. Shared resources like networking and local storage make it easy to map applications to specific resources within the infrastructure, and service profiles provide a means to map the shared resources and abstract the application from the physical infrastructure. The disaggregation of the server components provides an infrastructure that separates component lifecycles and maintenance and provides system elasticity. As advanced storage capabilities become an important part of the UCS infrastructure, these components can be utilized to build a complete cloud architecture. 70

Participate in the "My Favorite Speaker" Contest: Promote your favorite speaker and you could be a winner. Promote your favorite speaker through Twitter and you could win $200 of Cisco Press products (@CiscoPress). Send a tweet and include your favorite speaker's Twitter handle (@smcquerry or @JamesAtCisco) and two hashtags: #CLUS #MyFavoriteSpeaker. You can submit an entry for more than one of your favorite speakers. Don't forget to follow @CiscoLive and @CiscoPress. View the official rules at http://bit.ly/cluswin

Complete Your Online Session Evaluation: Give us your feedback to be entered into a daily survey drawing; a daily winner will receive a $750 Amazon gift card. Complete your session surveys through the Cisco Live mobile app or your computer on Cisco Live Connect. Don't forget: Cisco Live sessions will be available for on-demand viewing after the event at CiscoLive.com/Online. 72

Continue Your Education: demos in the Cisco campus, walk-in self-paced labs, table topics, Meet the Engineer 1:1 meetings, and related sessions. 73

Thank you 74