InfiniBand Networked Flash Storage

InfiniBand Networked Flash Storage: Superior Performance, Efficiency and Scalability
Motti Beck, Director of Enterprise Market Development, Mellanox Technologies
Flash Memory Summit 2016, Santa Clara, CA

17PB File System with Sustained 180GB/s
The parallel file system runs Lustre 2.8 with ZFS OSDs and multiple metadata servers. The Lustre file system contains 36 OSS nodes, each capable of 5GB/s of sustained data performance, and 16 metadata servers with 25TB of SSD storage capacity. The solution is anchored by enterprise 4U 84-bay 12Gb SAS JBODs, LSI/Avago 12Gb SAS adapters, Mellanox EDR InfiniBand, HGST 12Gb Enterprise SAS HDDs, and Intel server technologies. The file system incorporates six scalable storage units, each containing six Lustre OSS nodes and six 4U 84-bay JBODs with 480 8TB SAS drives. The solution employs ZFS on Linux with raidz2 data parity protection. Resiliency is provided by multipath and HA failover connectivity, intended to eliminate single points of failure. (Source: Raid Inc., May 31, 2016)
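As a quick sanity check, the headline figures follow from the component counts above; the raidz2 stripe width is not stated on the slide, so the usable-capacity step is only an approximation:

\[ 36\ \text{OSS} \times 5\ \tfrac{\text{GB}}{\text{s}} = 180\ \tfrac{\text{GB}}{\text{s}} \text{ sustained aggregate} \]
\[ 6\ \text{SSUs} \times 480\ \text{drives} \times 8\ \text{TB} \approx 23\ \text{PB raw, on the order of 17 PB usable after raidz2 parity and overhead} \]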

Why InfiniBand in Storage
An industry standard defined by the InfiniBand Trade Association (IBTA), it defines a System Area Network architecture with a comprehensive specification from the physical layer up to applications. Its simplicity drives higher efficiency and resiliency: a reliable, lossless, self-managed fabric; transport offload with Remote Direct Memory Access (RDMA); and centralized fabric management through the Subnet Manager (SM).
[Diagram: an InfiniBand subnet of switches connecting HCA-equipped processor nodes, consoles, and a RAID storage subsystem, managed by the Subnet Manager, with gateways to Ethernet and Fibre Channel.]

Reliable, Lossless, Self-Managed Fabric
End-to-end flow control, credit-based flow control on each link, end-to-end congestion management, and automatic path migration.
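The credit mechanism is simple to state in code. The following is a conceptual sketch only, not an InfiniBand API (link_state_t, try_send_packet, and on_credit_update are hypothetical names): a link transmits only while it holds receive-buffer credits advertised by its partner, so under back-pressure packets wait instead of being dropped.

```c
#include <stdbool.h>
#include <stdint.h>

/* Conceptual model of link-level, credit-based flow control. */
typedef struct {
    uint32_t credits;   /* receive buffers the link partner has advertised */
} link_state_t;

/* Transmit one packet only if a credit is available; otherwise the
 * packet waits (lossless back-pressure) rather than being dropped. */
static bool try_send_packet(link_state_t *link)
{
    if (link->credits == 0)
        return false;          /* no buffer at the receiver: hold the packet */
    link->credits--;           /* consume one receive-buffer credit */
    /* ... place the packet on the wire ... */
    return true;
}

/* Called when the receiver frees buffers and returns credits to the sender. */
static void on_credit_update(link_state_t *link, uint32_t returned)
{
    link->credits += returned;
}
```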

Remote Direct Memory Access (RDMA)
Transport offload, kernel bypass, and remote data transfer directly between application buffers.
[Diagram: application buffers on two hosts exchanging data directly, with kernel bypass and protocol offload.]
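A minimal sketch of what transport offload and kernel bypass mean at the verbs level, assuming the queue pair, memory registration, and exchange of the peer's buffer address and rkey have already been done out of band (post_rdma_write is a hypothetical helper, not part of the presentation): the application posts a one-sided RDMA WRITE from user space and the adapter moves the data without involving the remote CPU.

```c
#include <infiniband/verbs.h>
#include <stdint.h>
#include <string.h>

/* Post a one-sided RDMA WRITE from a locally registered buffer to a
 * remote registered buffer. Connection setup and completion polling
 * are omitted for brevity. */
int post_rdma_write(struct ibv_qp *qp, struct ibv_mr *mr,
                    void *local_buf, uint32_t len,
                    uint64_t remote_addr, uint32_t rkey)
{
    struct ibv_sge sge = {
        .addr   = (uintptr_t)local_buf,
        .length = len,
        .lkey   = mr->lkey,              /* local key from ibv_reg_mr() */
    };

    struct ibv_send_wr wr, *bad_wr = NULL;
    memset(&wr, 0, sizeof(wr));
    wr.opcode              = IBV_WR_RDMA_WRITE;   /* one-sided: no remote CPU */
    wr.sg_list             = &sge;
    wr.num_sge             = 1;
    wr.send_flags          = IBV_SEND_SIGNALED;   /* generate a completion */
    wr.wr.rdma.remote_addr = remote_addr;         /* peer's registered buffer */
    wr.wr.rdma.rkey        = rkey;                /* peer's memory key */

    /* Issued directly from user space; the HCA performs the transfer. */
    return ibv_post_send(qp, &wr, &bad_wr);
}
```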

InfiniBand Cuts SAN Cost by 50%
Delivers SAN-like functionality from the Windows stack using SMB Direct (SMB 3.0 over RDMA). Utilizes inexpensive, industry-standard commodity hardware, eliminating the cost of proprietary hardware and software found in SAN solutions. (Source: Microsoft)

RDMA Delivers Higher Cloud Efficiency
2X better performance with RDMA: 2X higher bandwidth and 2X better CPU efficiency. RDMA achieves full flash storage bandwidth, delivering remote storage without compromises.

RDMA Frees Up CPU for Application Processing
[Figure: 54Gb/s versus 90Gb/s throughput and CPU-core availability, contrasting users limited by CPU resources (CPU penalty without offload) with all cores available for application processing (more CPU means more users and apps).]
Regular network (without offload): unreliable, low-level I/O. Half of the CPU capacity is consumed moving data, even though only half the throughput is achieved, leaving cores unavailable for application processing.
Smart RoCE network (with offload): reliable network transport, with all CPU available to run applications. Better efficiency means more users; the smart network delivers better TCO.

RDMA Removes Storage Bottlenecks in the Enterprise
Stellar storage performance with 100Gb/s: Microsoft labs benchmarked Storage Spaces Direct over RoCE on a 4-node cluster with NVMe flash and ConnectX-4 100GbE adapters, achieving 480Gb/s (60GB/s) of throughput, enough to transmit the entire content of Wikipedia in 5 seconds. Storage Spaces Direct will be part of Azure Stack, the private cloud package for 2017. Source: https://blogs.technet.microsoft.com/filecab/2016/05/11/s2dthroughputtp5/
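The bandwidth conversion behind those numbers is simple arithmetic on the slide's own figures (the size assumed for Wikipedia is not stated, so only the implied data volume is shown):

\[ 480\ \text{Gb/s} \div 8\ \text{bits/byte} = 60\ \text{GB/s}, \qquad 60\ \text{GB/s} \times 5\ \text{s} = 300\ \text{GB} \]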

Storage Acceleration Running iSER (iSCSI Extensions for RDMA)
[Charts: throughput and IOPS, efficiency and CPU utilization, and scalability; higher is better.]
RDMA is superior across the board: a 10x bandwidth performance advantage versus TCP/IP and 2.5x IOPS performance with the iSER initiator. Test setup: ESXi 5.0, 2 VMs, 2 LUNs per VM.

InfiniBand Enables the Most Cost-Effective Database Storage (Source: Oracle)

Dominant in Storage and Database Platforms
Leading in Performance, Efficiency and Scalability