Hyperscale Networking For All


A Big Switch Perspective

Hyperscale networking is a design philosophy, not a cookie-cutter formula. It has attributes that are very attractive to any data center architect: a significant reduction in capex, a quantum leap in automation and operational simplification, a rigorous approach to network resiliency, and a focus on replicable building blocks known as pods.

A Philosophy That Comes From Resiliency, Elasticity & Scale

Over the last decade, hyperscale data center architects have consistently built out networks with orders-of-magnitude improvements in operating and cost metrics compared to traditional designs. Despite their scale, they take advantage of cost/performance improvements faster and adopt new networking technologies earlier. They have been quietly outpacing the vendor community in innovation while the rest of the industry has been stuck in a legacy Core/Aggregation/Edge paradigm.

Can aspects of their model be replicated by the rest of us? Yes. This trend is underway. While each hyperscale data center is unique, design patterns have emerged from their R&D that are applicable to a broader audience of network architects. By understanding the design philosophy, embracing the key tools that hyperscale data center architects use, and understanding the evolution path from a Core/Aggregation/Edge design, data center operators at many different scales can make hyperscale networking appropriate for their data center.

Table of Contents
A Philosophy That Comes From Resiliency, Elasticity and Scale
The Tools: Bare Metal Switch Hardware, SDN Software, Core-and-Pod Design
The Evolution: Applying Hyperscale Networking at Non-Hyper Scale
What About Legacy Workloads And Operating Models?
Does The Networking Team Need To Write Code?
Where To Start?
The Results

Hyperscale networking is a design philosophy, not a cookie-cutter formula. The architects who pioneered it faced the question: how do you create a network that never goes down, can grow at the speed of demand, and scale to massive size?

The answer is a holistic approach with three key pillars:

1. Use the least expensive networking hardware possible, but in highly redundant configurations with an n+1 design approach in many dimensions, so that failures are expected but have little practical impact.
2. Automate and centralize provisioning, troubleshooting and (where practical) network control functions, so that configuration is simplified and individual hardware elements can be added or replaced with minimal effort.
3. Design atomic units of compute/networking/storage (pods) that are small enough to be purchased as a unit, yet large enough to be automated autonomously. Scale by adding pod after pod.

The result is a design philosophy where individual element failures are expected, but the network as a system provides extremely high uptime. By centralizing configuration, the basic operations of the network as a system are dramatically simplified; while the underpinnings of the centralized system are complex distributed-systems problems, automating data center-specific workflows on top of it becomes vastly simpler. Last, by focusing on a pod as the unit of design rather than tackling entire data centers, the number of variables involved in infrastructure engineering and automation projects goes down dramatically, bringing many projects into scope that were previously too risky or too complex. In the hyperscale networking paradigm, scale is just one of many advantages.

The Tools: Bare Metal Switch Hardware, SDN Software, Core-and-Pod Design

It is widely known in networking circles that many hyperscale data centers have stopped buying their leaf and spine switches from incumbent networking vendors over the last ten years. Rumors abound that some of them build their own switches.

Design example: The L2, L3 and now SDN-based Clos designs adopted by hyperscale data centers at large scale over the last ten years are a case study of hyperscale networking. By replacing the traditional aggregation layer of active/standby chassis switches with an active/active/active spine of 4-6 1RU switches, cost goes down while the aggregate bandwidth available goes up. Assuming 40GE links from ToR to spine or ToR to aggregation, dual ToRs, distributed port channeling from ToR to compute nodes and six spines, the hyperscale design delivers 240GE of bandwidth from each rack to the spine, where the traditional active/standby (spanning tree) design would deliver only 40GE. Given this over-subscription and the many possible paths between any two servers, a hardware failure at a ToR or spine in this model is a minor event.

Bare Metal Switch Hardware: Hyperscale network operators have created an emergent ecosystem around bare metal switches. These are high-end data center Ethernet switches that are sold without software, at a small fraction of the price of a tier-1 branded switch sold with a switch OS. Bare metal switches are most often produced by the same OEMs that also build the hardware for tier-1 switch vendors; sometimes the only hardware difference is the color of the sheet metal on the box. Using bare metal switch hardware reduces cost.

SDN Software: Hyperscale operators also focus on SDN-style software, an approach that moves network intelligence into logically centralized SDN controllers for management and control. Instead of configuring or automating switches on a box-by-box basis, all automation is done through the controllers.
Because this move is so often paired with a shift from spanning tree to a multi-path design, resiliency in practice goes up.

Core-and-Pod Designs: Instead of a traditional Core-Agg-Edge design, we see hyperscale operators, and an increasingly broad community around them, embracing a Core-and-Pod design. Each pod is a static, atomic unit of networking, compute and storage, attached to the data center's routed core network. Pods often have version numbers, so there may be five instances of Pod v1 and ten instances of Pod v2 in the same data center. Because each pod is a fixed configuration, automation at the pod level is simple and stable. Over time, new pod design versions (v3, v4, v5) are introduced that take advantage of continuous improvement in infrastructure price/performance and automation capabilities. As opposed to a Core-Agg-Edge approach, where the data center network is a monolithic design that cannot evolve, the Core-and-Pod design allows operators of any scale to improve faster.
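To make the design example above concrete, here is a minimal, purely illustrative sketch of the bandwidth arithmetic. It is not from the paper; the figures it uses (40GE uplinks, six spines, a single forwarding uplink under spanning tree) follow the text, and everything else is an assumption.

```python
# Illustrative sketch of the design example's arithmetic (not from the paper).
# Numbers follow the text: 40GE links, six active/active spines, and a
# traditional active/standby design where only one uplink forwards.

LINK_GBPS = 40  # ToR-to-spine (or ToR-to-aggregation) link speed

def traditional_uplink(link_gbps: int = LINK_GBPS) -> int:
    """Active/standby (spanning tree) aggregation: only one uplink forwards."""
    return link_gbps

def leaf_spine_uplink(spines: int, link_gbps: int = LINK_GBPS) -> int:
    """Active/active spine: every ToR-to-spine link carries traffic."""
    return spines * link_gbps

spines = 6
print(f"Traditional rack uplink: {traditional_uplink()} GE")            # 40 GE
print(f"Leaf-spine rack uplink:  {leaf_spine_uplink(spines)} GE")       # 240 GE

# Losing one of six spines removes only a sixth of the fabric capacity,
# which is why a single spine failure is a minor event in this design.
print(f"After one spine failure: {leaf_spine_uplink(spines - 1)} GE")   # 200 GE
```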

The Evolution: Applying Hyperscale Networking at Non-Hyper Scale

Figure 1: Core-and-Pod Design Approach

Original hyperscale network designs were so custom that they were not applicable to non-hyperscale data centers. That is changing as vendors (like Big Switch Networks) popularize variants on these designs with the requisite flexibility to be useful in data centers dominated by a mix of old and new application software. The transition path from a Core-Agg-Edge design to a Core-and-Pod design has, for many, turned out to be surprisingly simple (a sizing sketch follows this list):

The pod: A new hyperscale networking pod (typically 2-16 racks) is hung off the data center core routers. Using older fabric technologies, these were typically all-L2 or all-L3-to-the-ToR pods. Using newer SDN fabric technologies, they may be multi-tenant hybrid L2/L3 designs. In practice, these are often project-centric pods, e.g. for private IaaS cloud, Big Data or VDI builds.

Multi-path: Inside the new hyperscale networking pod, either SDN fabrics or multi-path protocols are used to avoid spanning tree and deliver an active/active/active design across an n+1 spine.

1RU Spines: Within the pod, in addition to the 1RU leaf or ToR switches, the spine layer consists of 4-6 inexpensive 1RU spine switches (e.g. 32x40GE) that replace an active/standby pair of expensive chassis-based aggregation switches.

Controllers: Provisioning, monitoring and troubleshooting inside the pod are done by a centralized controller, either from a vendor (e.g. an SDN controller) or built in house with a library of scripts.

Demarcation: An L3 boundary from the pod, often simply a static route, serves as the demarcation point between the modern hyperscale design and the data center core routers that also serve the legacy, business-as-usual data center.

Services: L4-L7 services are sometimes placed in the new pod, and sometimes kept physically separate but made available as a service via static routes from the pod.

Storage: Storage traffic within the pod is typically run over the Ethernet fabric, thanks to the predictable latency and plentiful east-west bandwidth of a leaf-spine fabric design.
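The sizing sketch below is an illustrative aid, not part of the paper: it checks whether a candidate pod (2-16 racks, dual ToRs, 4-6 1RU spines with 32x40GE ports, all figures taken from the text) fits within the spine port count, and how many leaf-spine links the fixed pod design implies. The class and field names are assumptions for illustration.

```python
# Illustrative Core-and-Pod sizing sketch (not from the paper). The 2-16 rack
# range, dual ToRs, 4-6 spines, and 32x40GE spine ports follow the text;
# the helper itself is an assumption for illustration.

from dataclasses import dataclass

@dataclass
class PodDesign:
    racks: int                 # 2-16 racks per pod, per the text
    spines: int = 6            # 4-6 1RU spines in an n+1, active/active design
    tors_per_rack: int = 2     # dual ToRs per rack
    spine_ports: int = 32      # e.g. a 32x40GE 1RU spine switch

    def leaf_count(self) -> int:
        return self.racks * self.tors_per_rack

    def fabric_links(self) -> int:
        # Leaf-spine (Clos): every leaf connects to every spine.
        return self.leaf_count() * self.spines

    def fits_spines(self) -> bool:
        # Each spine needs one downlink port per leaf in the pod.
        return self.leaf_count() <= self.spine_ports

pod = PodDesign(racks=16)
print(pod.leaf_count(), "leaves,", pod.fabric_links(), "leaf-spine links")
print("Fits 32-port spines:", pod.fits_spines())   # 32 leaves <= 32 ports
```

Because the pod is a fixed, versioned configuration, this kind of sizing is done once per pod design (v1, v2, ...) rather than per deployment, which is what keeps pod-level automation simple and stable.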

What About Legacy Workloads And Operating Models?

The common objection to embracing (older) hyperscale networking designs in more traditional data centers was typically that these designs were not compatible with legacy workloads and the corresponding operating models. Objections typically stemmed from early hyperscale design choices for all-L3 environments with a heavy reliance on home-grown controller software. In an environment with legacy workloads, L2 adjacency is often intertwined with security, resiliency and audit considerations. Firewall placement, router demarcation points, and organizational checks and balances all come into play. Newer SDN controllers, however, can manage hybrid L2/L3 fabrics. These are typically backwards compatible with legacy L4-L7 (hardware or software) services, allowing legacy workload designs to exist side by side with software that may use other isolation and resiliency techniques.

Does The Networking Team Need To Write Code?

Original hyperscale network implementations leveraged considerable custom software, including SDN controllers and SDN operating systems for bare metal switch hardware written in house. These operators have sizable networking software development and infrastructure teams, and a business case that justifies custom development given a small number of applications that run at massive scale. As hyperscale networking is introduced to a broader audience, significant swaths of this R&D are being packaged for consumption by today's enterprise and service provider networking teams by young Silicon Valley startups. Industry-standard CLIs and GUIs, in addition to open APIs, are the norm, with pre-packaged software integrations for popular orchestration packages (e.g. vSphere, OpenStack and CloudStack). The "do I need to write code in order to adopt bare metal switches or SDN?" question is increasingly a thing of the past.

Where To Start?

As discussed above, the design philosophy is based on the pod as an atomic unit. For most organizations, this is synonymous with choosing an appropriate starting project that is just large enough to justify its own infrastructure build-out. In most data centers, an initial pod will range from two to sixteen racks, generally a quarter, half or full row. While any workload is a candidate for hyperscale networking design, frequent projects at the time of this writing include IaaS build-outs, VDI or very large Big Data builds. These are typically projects where traditional L2/L3/policy design leads to expensive hardware and manual processes, and they are often large enough to justify a new pod. In some organizations, experimenting with monitoring fabrics that connect TAP and SPAN ports to security and monitoring tools is a wise half-step for gaining experience with SDN controller software and bare metal switch hardware before moving to the in-band production network.

Figure 2: Where to start with hyperscale networking? A new pod to build with Big Cloud Fabric, or an older pod to monitor with Big Tap Monitoring Fabric.
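To illustrate the controller-centric operating model mentioned above (open APIs instead of box-by-box configuration), here is a heavily hedged sketch. The base URL, endpoint paths, fields and token below are hypothetical and are not the actual API of Big Cloud Fabric or any other product; the only point is the shape of the workflow: one call to the controller rather than N switch logins.

```python
# Hypothetical sketch of controller-centric automation. The URL, paths and
# payload fields are invented for illustration; they are NOT a real product
# API. A pod-level change is one call to the controller, which programs
# every leaf and spine, instead of box-by-box configuration.

import requests

CONTROLLER = "https://sdn-controller.example.com/api/v1"  # hypothetical
TOKEN = "REPLACE_ME"                                       # hypothetical auth token
HEADERS = {"Authorization": f"Bearer {TOKEN}"}

def create_tenant_segment(tenant: str, segment: str, vlan: int) -> dict:
    """Provision an L2 segment for a tenant across the whole pod fabric."""
    payload = {"tenant": tenant, "segment": segment, "vlan": vlan}
    resp = requests.post(f"{CONTROLLER}/tenants/{tenant}/segments",
                         json=payload, headers=HEADERS, timeout=10)
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    # One logical operation for the whole pod.
    print(create_tenant_segment("vdi-pod-v2", "desktop-net", vlan=210))
```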

The Results

Hyperscale networking is a design philosophy, and one that was out of reach for all but a few operators until recently. With hyperscale designs being popularized (by companies like Big Switch Networks), the design philosophy is being opened up to a much broader audience. The architects finding the most success are embracing the design philosophy holistically, learning the tools of bare metal hardware, SDN software and Core-and-Pod design, and making judicious choices about the projects and pods where it will be introduced into their existing data centers. This is an exciting time in networking. Hyperscale networking for all.

If you have any questions about this paper, or are interested in a conversation about kicking off your own hyperscale networking project, please don't hesitate to contact us at info@bigswitch.com.

Headquarters: 3965 Freedom Circle, Suite 300, Santa Clara, CA 95054 | +1.650.322.6510 TEL | +1.800.653.0565 TOLL FREE | www.bigswitch.com | info@bigswitch.com

Copyright 2014 Big Switch Networks, Inc. All rights reserved. Big Switch Networks, Big Cloud Fabric, Big Tap, Switch Light OS, and Switch Light vSwitch are trademarks or registered trademarks of Big Switch Networks, Inc. All other trademarks, service marks, registered marks or registered service marks are the property of their respective owners. Big Switch Networks assumes no responsibility for any inaccuracies in this document. Big Switch Networks reserves the right to change, modify, transfer or otherwise revise this publication without notice. HNA WP V1 EN July 2014