How to Network Flash Storage Efficiently at Hyperscale. Flash Memory Summit 2017, Santa Clara, CA

Transcription:

How to Network Flash Storage Efficiently at Hyperscale
Manoj Wadekar, Michael Kagan
Flash Memory Summit 2017, Santa Clara, CA

eBay Hyperscale Infrastructure: Search, Front-End & Product, Hadoop, Object Store, Deep Learning/AI, Databases

Typical hyperscale servers: flash performance needs across workloads, from In-Memory/Search and Databases down through Hadoop, Object Store, FE/Dev, and Archival/Cold

Typical hyperscale servers: design goals
- Efficiency: utilization, commonality
- Growth: performance, capacity

Converged Infrastructure: challenges
Mismatched app needs:
- Compute/storage needs can differ across clusters.
- Can result in underperformance or wastage.
Inefficiency:
- Spend, infrastructure space, power utilization.
- Harder to justify high-density/high-performance drives.
Scale challenge:
- "Cattle" use cases for data-heavy workloads may result in large data movement.
- A complicated storage scheduler leads to constrained scaling.
Server platform: shackled to local storage.

What's needed: Disaggregated Storage. Separate out storage and compute resources.

What's needed: Rack-As-A-Compute. Storage moves from node-local to rack-local, connected over Ethernet.

Why: Rack-As-A-Compute
Right sizing:
- Clusters can use an optimized ratio of compute and storage.
- Reduces wastage and improves performance.
Independent scaling:
- Compute and storage capacities can be scaled per need.

Disaggregated Storage: interconnect needs
Throughput:
- Sequential workloads are driven by throughput.
- Aggregated storage drives higher needs (see the sketch below).
Latency:
- IOPS-sensitive workloads.
- Appropriate deployment topologies.
Simplicity:
- A known, ubiquitous network.
Is Ethernet ready?
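
The throughput point is easy to quantify with a back-of-envelope sketch. The drive count, per-drive throughput, and link speeds below are assumed typical values for illustration, not figures from the talk.

```python
# Back-of-envelope check: can the network keep up with the flash in one storage node?
# All numbers below are illustrative assumptions, not measurements from the presentation.

DRIVES_PER_NODE = 24               # assumed NVMe SSDs in one disaggregated storage node
SEQ_READ_PER_DRIVE_GBS = 3.0       # assumed sequential read per drive, GB/s
LINK_SPEEDS_GBITS = [25, 50, 100]  # Ethernet link speeds to compare, Gb/s

aggregate_gbs = DRIVES_PER_NODE * SEQ_READ_PER_DRIVE_GBS  # GB/s of raw flash
aggregate_gbits = aggregate_gbs * 8                       # convert to Gb/s

print(f"Raw flash throughput per node: {aggregate_gbs:.0f} GB/s ({aggregate_gbits:.0f} Gb/s)")
for link in LINK_SPEEDS_GBITS:
    links_needed = aggregate_gbits / link
    print(f"{link} GbE links needed to match the flash: {links_needed:.1f}")
```

Even with conservative per-drive numbers, one node of aggregated flash demands multiple high-speed Ethernet links, which is why the interconnect question matters.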

The Paradigm Shift: Resource as a Service

RDMA: Data Center Infrastructure Foundation. Direct access to a remote resource: data moves NIC-to-NIC between application buffers, bypassing the OS kernel. RDMA over Converged Ethernet (RoCE).

RDMA-Enabled Cloud Infrastructure

Storage Media Technology. [Chart: storage media access time for Hard Disk vs. SSD vs. NVDIMM, showing roughly a 10,000X improvement.]
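
To make the 10,000X claim concrete, the sketch below uses rough order-of-magnitude access times; the exact values are assumptions, not numbers read from the chart.

```python
# Rough order-of-magnitude access times (assumed typical values, not from the slide).
access_time_us = {
    "Hard Disk": 10_000.0,  # ~10 ms seek plus rotational latency
    "SSD":          100.0,  # ~100 us NAND flash read
    "NVDIMM":         1.0,  # ~1 us persistent-memory access
}

baseline = access_time_us["Hard Disk"]
for media, t in access_time_us.items():
    print(f"{media:>9}: {t:>8.1f} us  ({baseline / t:>7.0f}x faster than hard disk)")
```

With these assumptions the jump from hard disk to NVDIMM is four orders of magnitude, which is why the network, not the media, becomes the bottleneck.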

Networked Storage Continues to Grow and Moves to Ethernet
Networked storage (SANs):
- Better utilization: capacity, rack space, power.
- Scalability, management, fault isolation.
Ethernet growing very rapidly, driven by:
- Cloud and Hyper-Converged Infrastructure (HCI) - no Fibre Channel in the cloud.
- NVMe over Fabrics.
- Software-Defined Storage.

Ethernet Storage Fabric: We've Got You Covered. Everything a traditional SAN offers, but faster, smarter, and less expensive: performance, efficiency, ubiquity.

NVMe-oF Performance with Open Source Linux. [Chart: NVMe-oF performance measured with the open source Linux stack.]
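
For readers who want to try the open source stack themselves, the sketch below shows one common way to expose a local NVMe namespace over RDMA with the upstream Linux nvmet target through configfs. The NQN, device path, IP address, and port are illustrative assumptions; the slide does not describe a specific configuration.

```python
# Minimal sketch: export /dev/nvme0n1 over NVMe-oF (RDMA) with the upstream Linux
# nvmet target via configfs. Run as root; the nvmet and nvmet-rdma modules must be loaded.
# The NQN, device, address, and port below are illustrative assumptions.
import os

CFG = "/sys/kernel/config/nvmet"
NQN = "nqn.2017-08.org.example:disaggregated-node1"   # hypothetical subsystem name
DEVICE = "/dev/nvme0n1"                               # local namespace to export
ADDR, SVCID = "192.168.1.10", "4420"                  # target IP and NVMe-oF service port

def write(path, value):
    with open(path, "w") as f:
        f.write(value)

# 1. Create the subsystem and allow any host to connect (fine for a lab, not production).
subsys = f"{CFG}/subsystems/{NQN}"
os.mkdir(subsys)
write(f"{subsys}/attr_allow_any_host", "1")

# 2. Back namespace 1 with the local NVMe device and enable it.
os.mkdir(f"{subsys}/namespaces/1")
write(f"{subsys}/namespaces/1/device_path", DEVICE)
write(f"{subsys}/namespaces/1/enable", "1")

# 3. Create an RDMA port and bind the subsystem to it.
port = f"{CFG}/ports/1"
os.mkdir(port)
write(f"{port}/addr_trtype", "rdma")
write(f"{port}/addr_adrfam", "ipv4")
write(f"{port}/addr_traddr", ADDR)
write(f"{port}/addr_trsvcid", SVCID)
os.symlink(subsys, f"{port}/subsystems/{NQN}")

# An initiator would then connect with nvme-cli, for example:
#   nvme connect -t rdma -n <NQN> -a 192.168.1.10 -s 4420
print("NVMe-oF target configured")
```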

BlueField: High-Performance Yet Cost-Effective Storage
- 200 Gb/s of throughput.
- More than 5 million IOPS.
- Combines industry-leading ConnectX intelligent offload with a tile multicore ARM architecture.
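
As a quick sanity check on how those two numbers relate, the arithmetic below assumes a 4 KiB I/O size; the slide does not state one.

```python
# Sanity check: do 5 million IOPS and 200 Gb/s fit together?
# 4 KiB per I/O is an assumed block size; the slide does not specify one.
iops = 5_000_000
io_size_bytes = 4 * 1024

gbits_per_sec = iops * io_size_bytes * 8 / 1e9
print(f"{iops:,} IOPS at 4 KiB = {gbits_per_sec:.0f} Gb/s")  # ~164 Gb/s, within 200 Gb/s
```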

BlueField Building Blocks for Storage Platforms
- ASIC, storage controller, and adapter: a family of products with 4, 8, or 16 cores for NVMe storage solutions, customized per OEM.
- SmartNIC: standard PCIe slot.
- Open platform software, standards based: BlueOS Linux built from kernel.org, with standard Linux development tools to build your application.

Storage Class Memory (SCM). [Timeline: 1980, 2010, 2017+.] SCM-over-Fabrics definition is work in progress in the IBTA.

Technology Leadership. [Timeline, 2005-2017: from RDMA silicon and boards (2005), to RDMA NAS (2008), to RDMA block storage with iSER (2010), expanding across silicon, boards, systems, cables, software, optics, and processing, and culminating in 2017 with BlueField, a storage-optimized NVMe-oF SoC with target offload, crypto, T10-DIF offload, and RAID in the network.]

The Network is a Computer. [Chart: a smart network increases the value a datacenter delivers per user.] Intelligent tasks and network functions are offloaded onto the network, leaving 100% of the servers for applications. Leadership in data center networking.

Questions? Manoj Wadekar, Michael Kagan. Flash Memory Summit 2017, Santa Clara, CA