The Comparison of Ceph and Commercial Server SAN Yuting Wu wuyuting@awcloud.com AWcloud

Agenda
- Introduction to AWcloud
- Introduction to Ceph Storage
- Introduction to ScaleIO and SolidFire
- Comparison of Ceph and Server SAN
- Performance test of Ceph and ScaleIO
- Summary

Who We Are
- A pure OpenStack player in China
- China's leading enterprise cloud service provider
- Broad deployment of production clouds across China
- A highly diverse set of workloads and use cases

Ceph Architecture
Ceph uniquely delivers object, block, and file storage in one unified system.

Ceph Components
- OSDs: An OSD stores data; handles data replication, recovery, backfilling, and rebalancing; and provides monitoring information to Ceph Monitors by checking other Ceph OSD daemons for a heartbeat.
- Monitors: A Ceph Monitor maintains maps of the cluster state, including the monitor map, the OSD map, the placement group (PG) map, and the CRUSH map.
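The interplay of PGs, the CRUSH map, and OSDs can be pictured as a two-step mapping: the object name hashes to a PG, and the PG maps deterministically to a set of OSDs. The following is a minimal, hypothetical Python sketch of that idea only; real Ceph uses the rjenkins hash and the CRUSH algorithm over a weighted device hierarchy, not the md5-based stand-ins below.

```python
# Simplified sketch of Ceph-style placement: object -> PG -> OSDs.
# NOT Ceph's real code: md5 stands in for rjenkins/CRUSH here.
import hashlib

def object_to_pg(obj_name: str, pg_num: int) -> int:
    # Ceph hashes the object name into one of pg_num placement groups.
    h = int(hashlib.md5(obj_name.encode()).hexdigest(), 16)
    return h % pg_num

def pg_to_osds(pg: int, osd_ids: list, replicas: int = 3) -> list:
    # Stand-in for CRUSH: deterministically pick `replicas` distinct
    # OSDs from the PG id, so every client computes the same placement
    # without asking a central metadata server.
    picked, i = [], 0
    while len(picked) < replicas:
        h = int(hashlib.md5(f"{pg}:{i}".encode()).hexdigest(), 16)
        osd = osd_ids[h % len(osd_ids)]
        if osd not in picked:
            picked.append(osd)
        i += 1
    return picked

pg = object_to_pg("rbd_data.1234.0000", pg_num=128)
osds = pg_to_osds(pg, osd_ids=list(range(8)), replicas=3)
# Same inputs always yield the same PG and the same OSD set.
```

Because the mapping is a pure function of the object name and the cluster map, any client can locate data directly; the Monitors only need to distribute map updates.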

ScaleIO Introduction
- A software-only solution
- Installed on common commodity servers
- Turns existing DAS storage into shared block storage

ScaleIO Architecture
- MDM (Meta Data Manager): configures and monitors the ScaleIO system; runs in cluster mode or single mode
- SDS (ScaleIO Data Server): manages the capacity of a single server and serves data access
- SDC (ScaleIO Data Client): exposes ScaleIO volumes as block devices
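The key idea behind the SDS layer is that every volume is cut into small chunks scattered across all SDS nodes, so every disk in the cluster serves every volume in parallel. The sketch below is purely illustrative (a round-robin layout with invented node names), not EMC's actual mesh/hash layout, which also keeps a mirror copy of each chunk.

```python
# Illustrative sketch of ScaleIO-style wide striping: a volume is
# split into small chunks spread over every SDS node. Round-robin
# here for clarity; the real layout is hash/mesh based and mirrored.
def chunk_layout(volume_size_mb: int, sds_nodes: list, chunk_mb: int = 1):
    # Map chunk index -> owning SDS node.
    return {i: sds_nodes[i % len(sds_nodes)]
            for i in range(volume_size_mb // chunk_mb)}

layout = chunk_layout(8, ["sds-1", "sds-2", "sds-3"])
# All three SDS nodes hold chunks of this one small volume,
# so reads/writes fan out across the whole cluster.
```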

SolidFire Introduction
- Node: a collection of solid-state drives
- Cluster: made up of a collection of nodes; at least four nodes per cluster (five or more are recommended)
- Nodes connect to each other over 10GbE: 1GbE for management, 10GbE for iSCSI storage traffic
- Scaled out by adding nodes

Deployment

Deployment: Ceph
- CeTune (Intel)
- ceph-deploy
- ceph-disk

Deployment: ScaleIO
- Installation Manager (Windows)
- Installation topology file

Deployment: SolidFire
SolidFire storage nodes are delivered as appliances with SolidFire Element OS installed and ready to be configured. Once configured, each node can be added to a SolidFire cluster.

Operations and Management

Operations and Management: Ceph vs. ScaleIO vs. SolidFire
[Comparison table of management interfaces: CLI, Web UI, REST gateway, and other tools (VSM, TUI); the per-product support marks did not survive the transcript.]

Features and Volume Methods

Features: Ceph vs. ScaleIO vs. SolidFire
[Comparison table covering: shared filesystem, object storage, block storage, deduplication, compression, thin provisioning, minimum IOPS, maximum IOPS, IOPS burst control, bandwidth control, replication, and integration with OpenStack, Hyper-V, VMware, and XenServer; the per-product support marks did not survive the transcript.]
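The per-volume QoS controls compared above (maximum IOPS plus a short-term burst allowance, as SolidFire offers) are commonly implemented as a token bucket. The class below is a toy model of that idea, not SolidFire code; note it does not model the minimum-IOPS guarantee, which is a scheduling promise across volumes rather than a per-volume limit.

```python
# Toy token-bucket model of per-volume max/burst IOPS QoS
# (the SolidFire-style controls in the feature table above).
# Hypothetical illustration, not vendor code: the bucket refills at
# max_iops tokens/sec and holds up to burst_iops tokens, so an idle
# volume can briefly exceed its sustained maximum.
class IopsBucket:
    def __init__(self, max_iops: int, burst_iops: int):
        self.max_iops = max_iops          # sustained refill rate
        self.capacity = burst_iops        # burst ceiling
        self.tokens = float(burst_iops)   # start full

    def refill(self, seconds: float) -> None:
        self.tokens = min(self.capacity,
                          self.tokens + self.max_iops * seconds)

    def try_io(self, n: int = 1) -> bool:
        # Admit n IOs if tokens are available, otherwise throttle.
        if self.tokens >= n:
            self.tokens -= n
            return True
        return False

bucket = IopsBucket(max_iops=1000, burst_iops=1500)
assert bucket.try_io(1500)   # an idle volume may burst past its max
assert not bucket.try_io(1)  # then it is throttled...
bucket.refill(1.0)           # ...until tokens refill at max_iops/sec
assert bucket.try_io(1000)
```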

Volume Methods: Ceph vs. ScaleIO vs. SolidFire
[Comparison table of volume operations: creating, resizing, modifying (QoS), and deleting volumes; viewing and restoring deleted volumes; cloning a volume; viewing running tasks; volume access groups; creating a volume snapshot; rolling back to a snapshot; backing up a snapshot; cloning a volume from a snapshot; backing up a volume to an object store; restoring a volume from an object store. The per-product support marks did not survive the transcript.]

Integrating with OpenStack

Integrating with OpenStack
[Diagram showing how ScaleIO, SolidFire, and Ceph integrate with OpenStack: Keystone, Swift, Cinder, Glance, and Nova connect to the Ceph Object Gateway and the Ceph Block Device, both backed by the Ceph Storage Cluster (RADOS).]
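For the Cinder path in the diagram, a typical cinder.conf fragment of that era (OpenStack with Ceph 0.9x) looks like the sketch below. The backend name, pool name, user, and secret UUID are site-specific examples, not fixed values.

```ini
[DEFAULT]
enabled_backends = ceph

[ceph]
# Ceph RBD backend for Cinder; pool/user names and the secret UUID
# are illustrative placeholders for a real deployment.
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = volumes
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = cinder
rbd_secret_uuid = <libvirt-secret-uuid>
rbd_flatten_volume_from_snapshot = false
```

ScaleIO and SolidFire plug into the same Cinder framework through their own volume drivers, which is why the comparison treats Cinder integration as a common denominator.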

Performance Test

Test Environment
Nodes 1-4, each with: 1 × 146GB HDD for the OS, 2 × 4TB 7200rpm HDDs, 2 × 800GB Intel DC S3500 SSDs

Node hardware:
- CPU(s): 2 × 2 × 10
- Memory: 128GB
- Storage: 2 × 4TB HDD, 2 × 800GB SSD
- Network: 4 × 10Gb NICs

Software:
- Ceph 0.94.4
- ScaleIO 1.32
- Kernel 3.14
- OS: CentOS 6.5
- Tools: fio, vdbench

IOPS and Latency (4K random write, ScaleIO vs. Ceph)
[Chart data: IOPS - ScaleIO SSD 86,854 vs. Ceph SSD 15,103; ScaleIO HDD 2,171 vs. Ceph HDD 1,496. Latency - ScaleIO SSD 2.947 ms, Ceph SSD 8.463 ms, ScaleIO HDD 29.454 ms, Ceph HDD 42.72 ms.]

fio command for ScaleIO (raw block device):
  fio -ioengine=libaio -bs=4k -direct=1 -thread -rw=randwrite -size=10g -filename=/dev/scinia -name=test -iodepth=64 -runtime=30

fio job file for Ceph (rbd engine):
  [global]
  ioengine=rbd
  clientname=admin
  pool=ssdpool
  rbdname=image1
  invalidate=0
  rw=randwrite
  bs=4k
  runtime=30
  [rbd_iodepth32]
  iodepth=64

Cluster Stability and Latency
vdbench configuration:
  SD: openflags=o_direct,threads=64
  WD: xfersize=4096,rdpct=0,seekpct=100
  RD: iorate=6000,elapsed=60,interval=1
[Charts: cluster latency (ms) at loads from 10% to 100% for Ceph-SSD, ScaleIO-SSD, Ceph-HDD, and ScaleIO-HDD; cluster IOPS over 60 seconds under a fixed 6,000 IOPS load for ScaleIO and Ceph.]

What Happens When Scaling Out/In
OSD recovery limits in ceph.conf:
  [osd]
  osd_max_backfills = 1
  osd_recovery_max_active = 1
[Charts: cluster IOPS over time when deleting and then adding an SDS (ScaleIO); cluster IOPS over time when adding/deleting OSDs, with and without the recovery limits above.]

Summary
- Ceph needs a more user-friendly deployment and management tool.
- Ceph lacks advanced storage features (QoS guarantees, deduplication, compression).
- Ceph offers the best integration with OpenStack.
- Ceph's performance is acceptable on HDDs but not good enough for high-performance disks.
- Ceph has many configuration parameters but lacks relevant documentation and visualization tools.

Thank you for listening. wuyuting@awcloud.com