SNIA Developers Conference - Growth of the iSCSI RDMA (iSER) Ecosystem

SNIA Developers Conference - Growth of the iSCSI RDMA (iSER) Ecosystem
Rob Davis, Mellanox Technologies, robd@mellanox.com

The FASTEST Storage Protocol: iSER. The FASTEST Storage: Flash.

What it is:
- iSCSI with an RDMA transport
- Runs over Ethernet or InfiniBand at speeds up to 100Gb/s
- Works with all applications that support SCSI/iSCSI

Benefits:
- High-performance Ethernet storage: highest bandwidth, highest IOPS, lowest latency
- Ethernet TCO
- iSCSI storage features, management, and tools (security, HA, discovery...)
- Faster than iSCSI, FC, FCoE

Ideal for flash storage applications:
- Latency-sensitive workloads (small, random I/O): databases, virtualization, VDI
- Bandwidth-sensitive workloads (large, sequential I/O): post production, oil/gas

Flash Performance Creates Bottleneck at Network Layer

[Chart: access time in microseconds (log scale, 0.01 to 1000) for networked storage, split between the storage media (HDD, SSD, NVM) and the storage protocol plus network (FC/TCP, RDMA, RDMA+). As the media moves from HDD to SSD to NVM, the protocol and network grow from a small fraction to roughly 50% of the total access time.]

The network and the protocol MUST get faster.

iSER Has No TCP/IP Stack

Protocol / Transport Comparison

Need                  Protocols                     Transport        Speed                  RDMA   Routable
Highest performance   iSER                          InfiniBand       20/56/100 Gb/s         Yes    Yes
Want Ethernet         iSER, SMB Direct, NFSoRDMA    Ethernet (RoCE)  10/25/40/50/100 Gb/s   Yes    Yes
Want Ethernet         SMB, NFS, or iSCSI on TCP     Ethernet (TCP)   10/25/40/50/100 Gb/s   No     Yes
Want Fibre Channel    FCP                           FCoE             10/40 Gb/s             No     No
Want Fibre Channel    FCP                           Fibre Channel    8/16/32 Gb/s           No     No

iSER Protocol Overview (Read)

SCSI Reads:
- The initiator sends a Command PDU (Protocol Data Unit) to the target: Send_Control (SCSI Read Cmd) with a buffer advertisement.
- The target is notified (Control_Notify) and returns the data using an RDMA Write into the advertised buffer (Data_Put, Data-In PDU).
- The target sends a Response PDU back when the transaction completes: Send_Control (status, sense data).
- The initiator receives the Response (SCSI Response) and completes the SCSI operation.
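
To make the read sequence concrete, here is a minimal, self-contained Python sketch that simulates the exchange above. The class and method names (IserTarget, handle_read_cmd, and so on) are illustrative stand-ins, not a real iSER or RDMA verbs API.

```python
# Minimal simulation of the iSER read flow described above.
# All names here are illustrative; this is not a real RDMA/iSER API.

class IserTarget:
    def __init__(self, backing_store):
        self.backing_store = backing_store  # maps LBA -> bytes

    def handle_read_cmd(self, lba, length, remote_buffer):
        # Control_Notify delivered the command; place the data directly into
        # the initiator's advertised buffer (models the RDMA Write).
        data = self.backing_store[lba][:length]
        remote_buffer[:len(data)] = data
        return {"status": "GOOD", "sense": None}  # models Send_Control (SCSI Response)


class IserInitiator:
    def __init__(self, target):
        self.target = target

    def scsi_read(self, lba, length):
        buffer = bytearray(length)  # buffer advertised with the Read command
        # Send_Control (SCSI Read Cmd) + buffer advertisement, then wait for
        # the target's RDMA Write and Response PDU.
        response = self.target.handle_read_cmd(lba, length, buffer)
        assert response["status"] == "GOOD"
        return bytes(buffer)


if __name__ == "__main__":
    target = IserTarget({0: b"hello iser read"})
    initiator = IserInitiator(target)
    print(initiator.scsi_read(lba=0, length=15))  # -> b'hello iser read'
```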

iSER Protocol Overview (Write)

SCSI Writes:
- The initiator sends a Command PDU with a buffer advertisement (Send_Control, SCSI Write Cmd), optionally carrying Immediate Data to improve latency.
- The target is notified (Control_Notify) and maps the R2T PDU to an RDMA Read operation to retrieve the data (Get_Data); the RDMA Read is optional when Immediate Data already covers the transfer.
- The target sends a Response PDU back when the transaction completes: Send_Control (status, sense data), followed by the SCSI Response to the initiator.
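
A companion Python sketch for the write flow is shown below; it models the choice between the Immediate Data path and the R2T-to-RDMA-Read path. The names and the 8 KB threshold are assumptions for illustration only, not values taken from the iSER specification.

```python
# Companion simulation of the iSER write flow (illustrative names only).

IMMEDIATE_DATA_LIMIT = 8 * 1024  # assumed threshold for this sketch


class IserWriteTarget:
    def __init__(self):
        self.backing_store = {}

    def handle_write_cmd(self, lba, length, immediate_data, remote_buffer):
        if immediate_data is not None and len(immediate_data) == length:
            # Small write: the data arrived with the command, no RDMA Read needed.
            data = immediate_data
        else:
            # Large write: map the R2T to an RDMA Read from the initiator's
            # advertised buffer (Get_Data).
            data = bytes(remote_buffer[:length])
        self.backing_store[lba] = data
        return {"status": "GOOD"}  # models Send_Control (SCSI Response)


def scsi_write(target, lba, payload):
    # Initiator side: advertise the buffer; piggyback Immediate Data if small.
    immediate = payload if len(payload) <= IMMEDIATE_DATA_LIMIT else None
    return target.handle_write_cmd(lba, len(payload), immediate, bytearray(payload))


if __name__ == "__main__":
    t = IserWriteTarget()
    print(scsi_write(t, lba=1, payload=b"small write"))       # Immediate Data path
    print(scsi_write(t, lba=2, payload=b"x" * (64 * 1024)))   # RDMA Read path
```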

Requirements to Deploy iSER
- Application(s) that can use SCSI/iSCSI: all applications that use SCSI-based block storage work with iSER.
- OS or hypervisor that supports an iSER initiator. Today: Linux, VMware ESXi, Oracle Solaris. Expected soon: Windows, FreeBSD.
- iSER storage target (unless hyper-converged): NetApp, HP SL4500, Oracle ZFS, Violin Memory, Zadara, Saratoga Speed, and others; or create one in Linux using an LIO, TGT, or SCST target.
- Network that supports RDMA: adapters supporting InfiniBand, iWARP, or RoCE; switches supporting InfiniBand, or Ethernet with DCBx and ECN.
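
As a rough illustration of the last two requirements on a Linux initiator, the sketch below checks for RDMA-capable devices and for open-iscsi's iSER transport. It is a minimal sketch assuming the standard /sys/class/infiniband layout and an installed iscsiadm; the string match on the iface listing is intentionally loose, and the script is not a substitute for vendor documentation.

```python
# Rough readiness check for an iSER initiator on Linux (a sketch, not a
# validated tool). Assumes the standard sysfs layout and that open-iscsi's
# iscsiadm utility is installed.
import os
import subprocess


def rdma_devices():
    # RDMA-capable devices (InfiniBand, RoCE, iWARP) appear under this path.
    path = "/sys/class/infiniband"
    return os.listdir(path) if os.path.isdir(path) else []


def iser_iface_available():
    try:
        out = subprocess.run(["iscsiadm", "-m", "iface"],
                             capture_output=True, text=True, check=True).stdout
    except (FileNotFoundError, subprocess.CalledProcessError):
        return False
    # Loose check: look for an iface using the 'iser' transport.
    return "iser" in out


if __name__ == "__main__":
    print("RDMA devices:", rdma_devices() or "none found")
    print("iSER iface in open-iscsi:", iser_iface_available())
```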

iSER Ethernet Performance: Higher Bandwidth and IOPS with Less CPU Utilization than iSCSI

VMware: iSER over ESXi. iSER/RDMA has a 10X bandwidth advantage over TCP/IP and 2.5X the IOPS. Test setup: ESXi 5.0, 2 VMs, 2 LUNs per VM.

VDI: Real-World iSER Application
- iSER eliminates storage bottlenecks in VDI deployments.
- iSER accelerates access to the cache over RDMA (ConnectX adapters running iSCSI with RDMA, with an active Nytro MegaRAID flash cache).
- 140 virtual desktops with iSER/RoCE vs. 60 virtual desktops over TCP/IP.

ROI Comparison Shows iSER Value

Interconnect        Virtual Desktops per Server   Servers   Switches   CapEx        CapEx per Virtual Desktop
10GbE with TCP/IP   60                            84        2          $3,418,600   $684
10GbE with RoCE     140                           36        1          $1,855,900   $371

iSER delivers $1.5M CapEx savings for VDI deployments.
http://www.mellanox.com/related-docs/whitepapers/sb_virtual_desktop_infrastructure_storage_acceleration_final.pdf
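
The short Python sketch below reproduces the arithmetic behind this table using only the figures quoted above; both configurations size out to 5,040 desktops, and the CapEx delta is where the roughly $1.5M savings comes from. The per-desktop figures it prints differ slightly from the slide's, presumably due to rounding or cost items not broken out here.

```python
# Reproducing the ROI arithmetic from the table above (slide figures only).

configs = {
    "10GbE with TCP/IP": {"desktops_per_server": 60,  "servers": 84, "capex": 3_418_600},
    "10GbE with RoCE":   {"desktops_per_server": 140, "servers": 36, "capex": 1_855_900},
}

for name, c in configs.items():
    desktops = c["desktops_per_server"] * c["servers"]   # 5,040 desktops either way
    print(f"{name}: {desktops} desktops, "
          f"${c['capex'] / desktops:.0f} CapEx per desktop")

savings = configs["10GbE with TCP/IP"]["capex"] - configs["10GbE with RoCE"]["capex"]
print(f"CapEx savings with iSER/RoCE: ${savings:,}")   # -> $1,562,700, i.e. ~$1.5M
```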

iSER OpenStack Support
- Compute servers: VMs on a KVM hypervisor, Open-iSCSI with iSER over an RDMA adapter.
- Storage servers: OpenStack (Cinder) with an iSCSI/iSER target (tgt), RDMA adapter, cache, and local disks, connected through the switching fabric.

[Chart: write bandwidth in GB/s vs. I/O size from 1 KB to 256 KB with 16 VMs; iSER writes reach roughly 6X the iSCSI bandwidth and approach the PCIe limit.]

- Built-in OpenStack components and management; no additional software required.
- RDMA is already inbox and ready for OpenStack users.

iSER in the Cloud

iSER in the Enterprise
- Models: E5500/5600 (hybrid HDD/SSD), EF550/560 (all-flash)
- Performance: 530K IOPS, 12 GB/s

All-Flash Array Startups
- Model: AltamontXP (all-flash), 40Gb/s Ethernet

NAND Suppliers & iSER
- Target node: dual-socket x86 server, 4x 40GbE NICs, iSER LIO target, 20x PM953 NVMe drives
- Initiators: dual-socket x86 servers, 1x 40GbE NIC each
- Performance: 2.1M IOPS 4K random read, 17.2 GB/s 128K sequential read
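
As a back-of-the-envelope sanity check on those numbers (a sketch using only the figures quoted above; decimal GB, raw line rate, protocol overhead ignored):

```python
# Back-of-the-envelope check on the target-node numbers quoted above.

GBPS_PER_40GBE = 40 / 8          # 5 GB/s of raw line rate per 40GbE port
nic_ports = 4

random_read_iops = 2.1e6
random_read_io_size = 4 * 1024                        # bytes
random_read_bw = random_read_iops * random_read_io_size / 1e9
print(f"4K random read throughput: {random_read_bw:.1f} GB/s")   # ~8.6 GB/s

seq_read_bw = 17.2                                    # GB/s, 128K sequential
line_rate = nic_ports * GBPS_PER_40GBE                # ~20 GB/s raw for 4x 40GbE
print(f"128K seq read uses ~{seq_read_bw / line_rate:.0%} of the 4x40GbE line rate")
```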

NAND Suppliers & iSER (Storage Appliance Proof of Concept)
- Topology: multiple clients connected through a switch to the appliance ("superbox") with 2 dual-port 40GbE NICs
- 2.7M IOPS @ 512B; 1.8M IOPS @ 4K; 1.2M IOPS @ 8K; 850K IOPS @ 16K; 480K IOPS @ 32K; 15 GB/s max
- Datasheet 4K latency: 122us read, 43us write
- 1M IOPS @ 8K 50/50 read/write, 471us average latency

Conclusions
- iSER gives users the full performance benefits of flash-based solutions across an Ethernet or InfiniBand network.
- RDMA technology such as RoCE enables iSER performance by bypassing the TCP/IP network stack.
- A growing number of storage solution providers support iSER in their flash-based products.

Questions? Rob Davis robd@mellanox.com