NVMe over Universal RDMA Fabrics

Build a Flexible Scale-Out NVMe Fabric with Concurrent RoCE and iWARP Acceleration

KEY BENEFITS

The Cavium FastLinQ NVMe-oF solution:
- Reduces latency by 82% compared to traditional iSCSI
- Offers a choice of 10/25/40/50/100GbE connectivity to host the most demanding storage workloads
- Provides end-to-end NVMe over Fabrics (NVMe-oF) connectivity with both server and storage target solutions, including SPDK
- Delivers a flexible and future-proof solution with support for concurrent RDMA over Converged Ethernet (RoCE), RoCEv2, and Internet Wide Area RDMA Protocol (iWARP)

EXECUTIVE SUMMARY

Non-Volatile Memory Express (NVMe) technology has made significant inroads within the storage industry over the past few years and is now recognized as a streamlined, high-performance, low-latency, and low-overhead interface for solid-state drives (SSDs). The industry and standards bodies have extended the specification beyond locally attached server applications to deliver the same performance benefits across a network through the development of the NVMe over Fabrics (NVMe-oF) specification. This enables flash devices (such as PCIe SSDs and external storage arrays) to communicate over Ethernet-based remote direct memory access (RDMA) networks such as RoCE and iWARP. This paper presents the unique value proposition of Cavium FastLinQ 10/25/40/50/100GbE NICs for NVMe-oF, which deliver the ultimate in choice and flexibility by supporting RoCE and iWARP (Universal RDMA) concurrently, and which accelerate workloads without burdening the server CPU through FastLinQ NVMe-Direct technology.

NVMe OVER FABRICS

NVMe over Fabrics (NVMe-oF) enables the use of transports other than PCIe to extend the distance over which an NVMe host and an NVMe storage drive or subsystem can connect. The NVMe-oF standard defines a common architecture that carries the NVMe block storage protocol over a range of storage networking fabrics. This includes enabling a front-side interface into storage systems, scaling out to large numbers of NVMe devices, and extending the distance within a data center over which NVMe devices and NVMe subsystems can be accessed. NVMe over Fabrics with RDMA is defined by a technical sub-group of the NVM Express organization.
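In practice, attaching a remote NVMe namespace over an RDMA fabric is a two-step discover-and-connect exchange. As a minimal illustration, the Python sketch below drives the standard Linux nvme-cli tool; it assumes a Linux host with nvme-cli installed and the nvme-rdma kernel module loaded, and the target address and subsystem NQN shown are hypothetical placeholders.

```python
# Minimal sketch: discover and connect to an NVMe-oF target over RDMA
# using nvme-cli. Assumes root privileges, nvme-cli installed, and the
# nvme-rdma kernel module loaded. Address and NQN are placeholders.
import subprocess

TARGET_ADDR = "10.0.0.1"    # hypothetical target IP
TARGET_PORT = "4420"        # conventional NVMe-oF RDMA service ID
SUBSYS_NQN = "nqn.2017-10.com.example:nvme-target"  # hypothetical NQN

# Query the discovery controller for subsystems exported by the target.
subprocess.run(
    ["nvme", "discover", "-t", "rdma", "-a", TARGET_ADDR, "-s", TARGET_PORT],
    check=True,
)

# Attach the remote subsystem; its namespaces then appear as ordinary
# local /dev/nvmeXnY block devices.
subprocess.run(
    ["nvme", "connect", "-t", "rdma",
     "-n", SUBSYS_NQN, "-a", TARGET_ADDR, "-s", TARGET_PORT],
    check=True,
)
```

Because the attached namespace surfaces as a normal block device, existing applications run against fabric-attached NVMe without modification.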

Considering the ubiquity of Ethernet, RDMA fabrics such as RoCE and iWARP are ideally suited to extending the reach of NVMe. With up to 100Gbps Ethernet connectivity available in the data center, and the highly scalable, deterministic nature of DCB-enabled Ethernet fabrics, NVMe over Fabrics solutions that leverage Ethernet-based RDMA can deliver the cost, power, and performance characteristics required to scale out NVMe. (See Figure 1.)

[Figure 1. Extend the Reach of NVMe with NVMe over Fabrics]

CAVIUM FastLinQ NVMe OVER FABRICS SOLUTIONS

Cavium delivers the industry's most comprehensive network adapter portfolio, spanning FastLinQ Standard Ethernet Adapters and Converged Network Adapters, covering the entire spectrum of customer Ethernet connectivity and offload requirements for architecting and deploying NVMe over Fabrics solutions. Purpose-built to accelerate and simplify data center storage, Cavium FastLinQ Ethernet technology delivers the following advantages (Figure 2):

1. Broad Spectrum of Ethernet Connectivity: 10/25/40/50/100GbE to host the most demanding storage workloads and deliver scalability, backed by a programmable architecture for future-proof networking
2. Universal RDMA: The ultimate in choice and investment protection, with concurrent support for RoCE, RoCEv2, and iWARP
3. NVMe Direct: A peer-to-peer flash access protocol that creates new efficiencies by enabling direct access to flash while offloading the server CPU
4. End-to-End Solutions: Initiator- and target-mode solutions, including SPDK, that leverage years of storage and storage networking experience
5. Seamless Storage Migration: Concurrent offload for NVMe-oF, iSER, iSCSI, and FCoE enables seamless upgrade paths to next-generation storage

[Figure 2. Cavium FastLinQ NVMe over Fabrics Benefits]

Broad Spectrum of Ethernet Connectivity

Cavium FastLinQ NICs deliver enterprise-class performance and reliability. With millions of Ethernet ports shipped, and a flexible architecture that delivers faster time to market and adaptation to new and emerging technologies, FastLinQ Ethernet NICs are among the top choices for enterprise data centers. Supporting a broad range of Ethernet speeds (10/25/40/50/100GbE), Cavium FastLinQ NICs deliver the scalability to drive business growth for the most demanding enterprise, telco, and cloud storage applications leveraging NVMe.
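For lab evaluation of an Ethernet RDMA fabric end to end, the vendor-neutral reference NVMe-oF target in the Linux kernel (nvmet) can export a local SSD over RDMA using nothing but configfs. The sketch below is one hedged way to do this; the subsystem NQN, backing device, and listen address are placeholder values, and the nvmet and nvmet-rdma kernel modules are assumed to be loaded.

```python
# Minimal sketch: export a local NVMe namespace over NVMe-oF/RDMA with
# the Linux kernel target (nvmet) via configfs. Run as root with the
# nvmet and nvmet-rdma modules loaded. All names/paths are placeholders.
import os

CFS = "/sys/kernel/config/nvmet"
NQN = "nqn.2017-10.com.example:nvme-target"  # hypothetical subsystem NQN

def write(path, value):
    with open(path, "w") as f:
        f.write(value)

# Create the subsystem and allow any host to connect (lab use only).
subsys = f"{CFS}/subsystems/{NQN}"
os.makedirs(subsys)
write(f"{subsys}/attr_allow_any_host", "1")

# Back namespace 1 with a local NVMe drive and enable it.
ns = f"{subsys}/namespaces/1"
os.makedirs(ns)
write(f"{ns}/device_path", "/dev/nvme0n1")   # hypothetical local SSD
write(f"{ns}/enable", "1")

# Create an RDMA listener on port 4420 and link the subsystem to it.
port = f"{CFS}/ports/1"
os.makedirs(port)
write(f"{port}/addr_adrfam", "ipv4")
write(f"{port}/addr_trtype", "rdma")
write(f"{port}/addr_traddr", "10.0.0.1")     # hypothetical listen IP
write(f"{port}/addr_trsvcid", "4420")
os.symlink(subsys, f"{port}/subsystems/{NQN}")
```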

Universal RDMA

While RDMA offers the unique benefits of accelerating performance, offloading server CPU cycles, and enabling a low-latency fabric for scaling out NVMe, there are multiple mutually incompatible RDMA standards; the two prominent Ethernet-based standards are RoCE and iWARP. RoCE requires a lossless Ethernet fabric and has a routable version called RoCEv2. iWARP relies on standard TCP offload, supports packet routing, and can operate on any Ethernet fabric. The Cavium FastLinQ 41000 and 45000 Series is the only network adapter family that provides customers technology choice and investment protection for NVMe-oF solutions, with support for concurrent RoCE, RoCEv2, and iWARP. (See Figure 3.)

[Figure 3. FastLinQ Universal RDMA]

NVMe-Direct

Cavium FastLinQ NVMe-Direct is a solution that efficiently supports high-speed, low-latency NVMe SSD-based external storage over Ethernet networks. NVMe-Direct offload is an advanced software technology for the Cavium FastLinQ 45000 and 41000 Series of Universal RDMA-enabled Ethernet adapters. Using the programmable engines of Cavium FastLinQ Adapters, NVMe-Direct allows the network interface card (NIC) to execute read/write commands directly on the NVMe SSD, bypassing the host CPU. Because the host CPU is bypassed during read/write transactions, NVMe-Direct dramatically reduces CPU utilization; and because no host software takes part in any stage of a read/write operation, latency is further minimized. FastLinQ Series Adapters are designed to support read/write operations at wire speed on 100Gb Ethernet (100GbE) with I/Os as small as 4KB, and performance scales linearly as Ethernet ports are added.

NVMe-Direct[1] is an extension of a mature target technology that is transparent to, and compatible with, existing initiator solutions. It significantly reduces risk and streamlines the deployment of NVMe over Fabrics, making it ideal for any application requiring Ethernet-based, high-performance flash storage.

End-to-End Solutions

Storage architects and customers require a solution that delivers maximum performance, is easy to implement, and is all-in-one. Cavium FastLinQ end-to-end NVMe-oF solutions provide just that, with best-in-class performance, functionality, and reliability. For today's high-performance environments, Cavium FastLinQ 10/25/40/50/100GbE Universal RDMA-enabled adapters provide a true end-to-end NVMe-oF RDMA solution consisting of PCI Express Universal RDMA-enabled NICs, kernel-mode and user-mode (SPDK) integrations (Figure 4), initiator- and target-mode frameworks, and enterprise-class management software.

[Figure 4. End-to-End NVMe over Fabrics Solutions from Cavium FastLinQ]

1. Read the NVMe-Direct white paper: http://www.cavium.com/resources/documents/whitepapers/adapters/wp_nvmedirect.pdf
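The user-mode (SPDK) path referenced above is scriptable: SPDK's reference NVMe-oF target is configured entirely over JSON-RPC. The sketch below drives it with SPDK's bundled scripts/rpc.py as one possible illustration; the RPC names follow recent SPDK releases, the nvmf_tgt application is assumed to be running already, and the NQN, address, and RAM-disk backing are hypothetical lab values rather than part of Cavium's solution.

```python
# Minimal sketch: configure a running SPDK nvmf_tgt over JSON-RPC using
# SPDK's bundled scripts/rpc.py. RPC names follow recent SPDK releases;
# the NQN, address, and RAM-disk backing below are lab placeholders.
import subprocess

RPC = "scripts/rpc.py"                       # path inside an SPDK checkout
NQN = "nqn.2017-10.com.example:spdk-target"  # hypothetical subsystem NQN

def rpc(*args):
    subprocess.run([RPC, *args], check=True)

# Enable the RDMA transport in the target.
rpc("nvmf_create_transport", "-t", "RDMA")

# Back the namespace with a 64MiB RAM disk (512-byte blocks) for testing.
rpc("bdev_malloc_create", "-b", "Malloc0", "64", "512")

# Create the subsystem, attach the namespace, and listen on RDMA:4420.
rpc("nvmf_create_subsystem", NQN, "-a", "-s", "SPDK0000000000000001")
rpc("nvmf_subsystem_add_ns", NQN, "Malloc0")
rpc("nvmf_subsystem_add_listener", NQN,
    "-t", "rdma", "-a", "10.0.0.1", "-s", "4420", "-f", "ipv4")
```

A host configured as in the earlier nvme-cli sketch can then discover and connect to this target in exactly the same way as to a kernel-mode target.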

[Figure 5. FastLinQ Integrations for Software-Defined Storage with SPDK]

Seamless Storage Migration

FastLinQ 10/25/40/50GbE Converged Network Adapters support full concurrent protocol offload for NVMe-oF, iSER, iSCSI, and FCoE. FastLinQ adapters deliver up to 3M IOPS while consuming the fewest server CPU cycles, leaving headroom for virtual applications and a higher ROI on server investments, with seamless upgrade paths to next-generation storage connectivity. Customers connecting to legacy SSD or hybrid flash arrays who are considering a transition to NVMe-based all-flash arrays can continue to leverage the concurrent storage protocol offloads of FastLinQ adapters as they move to next-generation storage paradigms, without a rip-and-replace of their networking interconnects.

PERFORMANCE BENEFITS OF NVMe OVER FABRICS UNIVERSAL RDMA

Cavium conducted a head-to-head performance analysis comparing the efficiency of existing/legacy storage transports against NVMe over Fabrics RDMA transports to a single NVMe drive. The results clearly indicate that to realize the full potential of data center investments in NVMe, RDMA is the clear and consistent choice. Figures 6 and 7 show the results of the NVMe transport comparisons: while the iSCSI software initiator added 82% more latency (Figure 6), NVMe-oF delivered near-native performance (Figure 7).

[Figure 6. Comparison of Latency Between iSCSI and RoCE as a Fabric for NVMe]
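Latency comparisons of this kind can be approximated with a synthetic benchmark such as fio by running an identical queue-depth-1 job against the local drive, an iSCSI-attached LUN, and an NVMe-oF-attached namespace. The sketch below shows one such job; the device path and job parameters are illustrative assumptions chosen to expose per-I/O latency, not Cavium's published test configuration.

```python
# Minimal sketch: measure per-I/O read latency of a block device with
# fio (queue depth 1, 4KB random reads). Run the same job against the
# local NVMe drive, an iSCSI-attached LUN, and an NVMe-oF-attached
# namespace to compare added fabric latency. The device path is a
# placeholder; this is an illustrative profile, not Cavium's methodology.
import subprocess

DEVICE = "/dev/nvme1n1"   # hypothetical fabric-attached namespace

subprocess.run(
    ["fio",
     "--name=lat-test",
     f"--filename={DEVICE}",
     "--ioengine=libaio", "--direct=1",
     "--rw=randread", "--bs=4k",
     "--iodepth=1", "--numjobs=1",
     "--runtime=60", "--time_based",
     "--output-format=json"],   # parse clat percentiles from the JSON
    check=True,
)
```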

SUMMARY

NVMe over Fabrics is a standard that allows the new high-performance SSD interface, Non-Volatile Memory Express (NVMe), to be connected across RDMA-capable networks. Deploying high-performance Cavium FastLinQ 10/25/40/50/100GbE networking solutions with Universal RDMA enables an efficient, flexible, and future-proof deployment of NVMe over Fabrics. With broad end-to-end support for NVMe over Fabrics and a decade of storage and storage networking expertise, Cavium technologies are perfectly suited to bridge the gap between storage performance and storage network performance.

[Figure 7. Comparison of IOPS Between iSCSI and RoCE as a Fabric for NVMe]

ABOUT CAVIUM

Cavium, Inc. (NASDAQ: CAVM) offers a broad portfolio of infrastructure solutions for compute, security, storage, switching, connectivity, and baseband processing. Cavium's highly integrated multi-core SoC products deliver software-compatible solutions across low to high performance points, enabling secure and intelligent functionality in enterprise, data center, and service provider equipment. Cavium processors and solutions are supported by an extensive ecosystem of operating systems, tools, application stacks, hardware reference designs, and other products. Cavium is headquartered in San Jose, CA, with design centers in California, Massachusetts, India, Israel, China, and Taiwan.

Corporate Headquarters: Cavium, Inc., 2315 N. First Street, San Jose, CA 95131, 408-943-7100
International Offices: UK, Ireland, Germany, France, India, Japan, China, Hong Kong, Singapore, Taiwan, Israel

Copyright 2017 Cavium, Inc. All rights reserved worldwide. Cavium, FastLinQ, and NVMe-oF are registered trademarks or trademarks of Cavium, Inc., registered in the United States and other countries. All other brand and product names are registered trademarks or trademarks of their respective owners. This document is provided for informational purposes only and may contain errors. Cavium reserves the right, without notice, to make changes to this document or in product design or specifications. Cavium disclaims any warranty of any kind, expressed or implied, and does not guarantee that any results or performance described in the document will be achieved by you. All statements regarding Cavium's future direction and intent are subject to change or withdrawal without notice and represent goals and objectives only.

SN0530956-00 Rev. B 10/17