Low latency and high throughput storage access


Low latency and high throughput storage access
Journey from SCSI to NVMe in FC fabrics
Raj Lalsangi
Santa Clara, CA

Protocol-neutral architecture principles
- Reduce / avoid interrupts
- Avoid context switches
- Avoid locks
- Lean code path length
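The principles above are what polled-mode I/O stacks are built on: instead of sleeping until an interrupt fires, each I/O thread busy-polls a completion queue it exclusively owns, so there are no interrupts, no context switches, and no locks on the hot path. A minimal illustrative sketch (`FakeDevice`, `reactor_loop`, and the poll budget are invented for illustration, not from the slides):

```python
class FakeDevice:
    """Stand-in for a device completion queue owned by one thread."""

    def __init__(self, completions):
        self._pending = list(completions)

    def poll_completion(self):
        # Return a completed command or None. A real driver would read a
        # completion-queue entry and check its phase bit instead.
        return self._pending.pop(0) if self._pending else None


def reactor_loop(device, budget=1000):
    """Bounded busy-poll loop: no blocking, no locks, short code path."""
    done = []
    for _ in range(budget):
        c = device.poll_completion()
        if c is not None:
            done.append(c)
    return done


dev = FakeDevice(["read-4k", "write-8k"])
print(reactor_loop(dev))  # ['read-4k', 'write-8k']
```

Because the queue is owned by exactly one thread, completions are harvested with plain loads and stores; the cost of a missed poll is bounded by the loop budget rather than by scheduler wakeup latency.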

NVMe performance features
- NVMe namespace state vs. SCSI LUN state in the data path (device server, task set, ordered commands, ACA, and so on)
- NVMe multi-queue model
- NVMe lends itself to a shorter code path
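The multi-queue model mentioned above can be sketched in a few lines: each CPU core owns its own submission/completion queue pair, so commands are issued and completed without cross-core locks or ordering against other cores' I/O. A toy model (class name, queue depth, and core count are illustrative, not from the slides):

```python
from collections import deque


class NvmeQueuePair:
    """One submission/completion queue pair, owned by a single core."""

    def __init__(self, qid, depth=1024):
        self.qid = qid
        self.sq = deque(maxlen=depth)  # submission queue
        self.cq = deque(maxlen=depth)  # completion queue

    def submit(self, command):
        self.sq.append(command)

    def process(self):
        # The device consumes submissions and posts completions to the
        # same pair, so completions return to the submitting core.
        while self.sq:
            self.cq.append(("done", self.sq.popleft()))


# One queue pair per core: no shared state between cores.
pairs = [NvmeQueuePair(qid) for qid in range(4)]
pairs[0].submit(("read", 0x1000))
pairs[3].submit(("write", 0x2000))
for p in pairs:
    p.process()

print([len(p.cq) for p in pairs])  # [1, 0, 0, 1]
```

Contrast this with a single SCSI LUN task set: every command from every core funnels through shared ordering state, which is exactly the locking and code-path overhead the slide is pointing at.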

FC-NVMe benefits
- Built from the ground up for low latency and high throughput
- Better latency and throughput over the existing 32G fabric and host HBA ecosystem with only a software upgrade
- Seamless fabric management: name services, zoning, and auto-discovery

The Needs of NVMe over Fabrics (NVMe-oF)
Dennis Martin, Founder, Demartek

Demartek NVMe-oF Rules of Thumb
To achieve maximum throughput in a storage target (without oversubscription):
- At least one 25Gb or faster network port for each NVMe drive (PCIe 3.0 x4) for large-block sequential I/O
- Dual-port 25GbE or 32GFC adapters need PCIe 3.0 x8
- For every two NVMe drives and network ports, 16 lanes of PCIe 3.0 are needed (FC has more headroom)
- Prospects are better with PCIe 4.0
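The rules of thumb above follow from nominal link rates, and a back-of-envelope check makes the arithmetic explicit. The figures below are raw line rates (PCIe 3.0 at 8 GT/s per lane with 128b/130b encoding), not measured throughput; the helper name is my own:

```python
PCIE3_LANE_GBPS = 8.0 * (128 / 130)  # effective Gb/s per PCIe 3.0 lane


def pcie3_gbytes_per_s(lanes):
    """Raw PCIe 3.0 bandwidth in GB/s for a given lane count."""
    return lanes * PCIE3_LANE_GBPS / 8


nvme_drive = pcie3_gbytes_per_s(4)  # one NVMe drive, PCIe 3.0 x4
port_25gbe = 25 / 8                 # one 25GbE port, ~3.13 GB/s raw
adapter_x8 = pcie3_gbytes_per_s(8)  # dual-port adapter in an x8 slot

print(f"NVMe x4 drive: {nvme_drive:.2f} GB/s")  # ~3.94 GB/s
print(f"25GbE port:    {port_25gbe:.2f} GB/s")  # ~3.13 GB/s
print(f"PCIe 3.0 x8:   {adapter_x8:.2f} GB/s")  # ~7.88 GB/s

# One 25Gb port per x4 drive: the port is the (slight) bottleneck, so a
# single drive cannot oversubscribe its network port.
assert port_25gbe < nvme_drive
# A dual-port 25GbE adapter fits in x8: two ports stay under the slot's
# bandwidth, which is why two drives + two ports need 16 lanes total.
assert 2 * port_25gbe < adapter_x8
```

The "FC has more headroom" remark fits the same picture: a 32GFC port's data rate sits a little above a 25GbE port's, so it tracks an x4 NVMe drive more closely.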

Benefits of FCP + FC-NVMe
Curt Beckmann, Principal Architect, Brocade

The Benefits of Dual-Protocol FCP / FC-NVMe SANs with Emulex HBAs
- Enhances performance of the existing SAN; no need for costly, disruptive infrastructure duplication or replacement
- Migrate application volumes one by one, with clean, easy rollback options
- Makes interesting dual-protocol use cases (DB mining) available
- Full fabric awareness, visibility, and manageability with familiar tools (+ 16GFC enh.)
[Slide diagram: SCSI and NVMe traffic sharing the same 16GFC fabric]

NVMe over Fabrics Comparisons
Rupin Mohan, Director R&D, HPE
August 2016

Shared Storage with NVMe

RDMA-based NVMe-oF:
- Completely new fabric protocol, still being developed and standardized; the standards group is dealing with the same type of challenges: shipping I/O commands/status and data over distance
- RDMA is available (as per protocol), with zero-copy capability; transport options are iWARP and RoCE (v1 or v2), and TCP/IP is even being considered
- Complex integrated fabric configuration; could be lower cost if onboard server NICs and inexpensive non-DCB switches are used
- New I/O protocol, new transport
- Lower latency if RDMA is enabled
- Timeframe: 2017 and beyond

NVMe over Fibre Channel:
- Uses Fibre Channel as a base: an existing, shipping fabric protocol standardized by T11; Fibre Channel solved these problems when the FCP protocol was developed to ship SCSI commands/status over distance on an FC network
- RDMA is not available; uses FCP, which also has zero-copy capability; transport is FC, and no changes to switching infrastructure / ASICs are required to support NVMe over FC
- FC fabrics are well understood; higher cost, especially for the newer generations of FC
- New I/O protocol, existing reliable transport
- Latency improvements with hardware assists on adapters; no RDMA option
- Timeframe: today

Latency: FC, FCoE, and iSCSI today sit at roughly 0.4-2.5 ms; NVMe is on the order of 50-150 us.

FC Goodness of Fit
Praveen Midha, Cavium

Is FC a Good Fit / Transport for NVMe?
- Purpose-built: a dedicated fabric; FC does storage only
- Extensible: add FC-NVMe to your existing FCP-SCSI SAN
- Fabric-resident discovery and state-change services
- Reliable: lossless, not best-effort
- Robust suite of provisioning, orchestration, and diagnostics tools
- Field-proven: millions of ports deployed, billions invested in FC, battle-hardened
- Extremely scalable: thousands of initiator and target ports
- Enterprise-grade: T10 DIF, MPIO
- Zero copy, fully offloaded: low latency, high IOPS, and deterministic performance
- Trusted and secure: trusted to carry the most mission-critical storage traffic in the data center
FC-NVMe is the perfect harmony between low-latency NVMe flash and a high-performance FC transport!