OpenIO SDS on ARM

A practical and cost-effective object storage infrastructure based on SoYouStart dedicated ARM servers.


Copyright 2017 OpenIO SAS. All Rights Reserved. Restriction on Disclosure and Use of Data. 3 September 2017.

Table of Contents

Introduction
Benchmark Description
  1. Architecture
  2. Methodology
  3. Benchmark Tool
Results
  1. 128KB objects
     Disk and CPU metrics (on 48 nodes)
  2. 1MB objects
     Disk and CPU metrics (on 48 nodes)
  3. 10MB objects
     Disk and CPU metrics (on 48 nodes)
Cluster Scalability
  Total disk IOps
Conclusion

Introduction

In this white paper, OpenIO demonstrates how to use its SDS object storage platform with dedicated SoYouStart ARM servers to build a flexible private cloud. This S3-compatible storage infrastructure is suited to a wide range of use cases, offering full control over data without the complexity found in other solutions.

OpenIO SDS is a next-generation object storage solution with a modern, lightweight design that combines flexibility, efficiency, and ease of use. It is open source software, and it can be installed on ARM and x86 servers, making it possible to build a hyper-scalable storage and compute platform without the risk of lock-in. It offers excellent TCO and a fast ROI.

Object storage is generally associated with large capacities, and its benefits are usually only visible in large installations. But thanks to the characteristics of OpenIO SDS, this next-generation object storage solution can be cost effective even in the smallest installations. It can easily grow from a simple setup to a full-sized data center, depending on the user's storage needs: one node at a time, with a linear increase in performance and capacity.

OpenIO SDS's simple, lightweight design allows it to be installed on very small nodes. The minimum requirements for an ARM node are 512MB of RAM and one CPU core, with packages available for Raspbian and Ubuntu Linux (supporting Raspberry Pi computers). The software's flexibility is powered by Conscience technology, a set of algorithms and mechanisms that continuously monitors the cluster, computes a quality score for every node, and chooses the best node for each operation. Thanks to Conscience, all operations are dynamically distributed, and there is no need to rebalance the cluster when new resources are added.

By partnering with SoYouStart, a dedicated server provider built on OVH's worldwide infrastructure, we were able to build a cluster and run a complete set of benchmarks to demonstrate the benefits of an ARM-based storage solution that can quickly scale from three nodes to an unlimited number of nodes at a reasonable cost. Thanks to its very competitive offer, SoYouStart enables small and medium-sized organizations to build private infrastructures and make them available on high-speed public networks, without the hassle of purchasing and maintaining physical hardware.

For this benchmark, the OpenIO team worked on a 48-node configuration to take advantage of erasure coding and parallelism, but the minimal configuration supported in production starts at three nodes, allowing end users with the smallest storage needs to take advantage of object storage at a reasonable price. SoYouStart offers ARM-based servers in European and North American datacenters, and the configuration described in the following pages can be replicated in those countries, allowing end users to comply with local laws and regulations. (https://www.soyoustart.com/en/server-storage/)
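As a quick illustration of the S3 compatibility mentioned above, the sketch below stores and reads back one object through an S3 gateway using boto3. This is not part of the benchmark setup; the endpoint URL and credentials are placeholders.

```python
# Minimal sketch: store and fetch one object through an OpenIO SDS
# S3-compatible gateway. Endpoint and credentials are hypothetical.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="http://s3-gateway.example.com:6007",  # placeholder gateway address
    aws_access_key_id="demo:demo",                      # placeholder credentials
    aws_secret_access_key="DEMO_PASS",
)

s3.create_bucket(Bucket="bench")
s3.put_object(Bucket="bench", Key="hello.txt", Body=b"hello from OpenIO SDS")
body = s3.get_object(Bucket="bench", Key="hello.txt")["Body"].read()
print(body)
```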

Benchmark Description

1. Architecture

For our tests, we chose a two-tier architecture. This allowed us to install the benchmark testing suite on two x86 nodes, which also acted as Swift gateways, while the storage layer was built from 48 ARM servers (2 CPU cores, 2GB of RAM, a 2TB HDD, unlimited traffic, and a public IP address: https://www.soyoustart.com/fr/offres/162armada1.xml). Each node was configured to reflect the minimum requirements for OpenIO SDS.

Filesystems:
- Root volume (/): ext4, 8 GB
- Data volume for OpenIO SDS data (/var/lib/oio): XFS, 1.9 TB

The object store was configured with a dynamic data protection policy: erasure coding for objects larger than 25 KB, and three-way replication for smaller ones.

OpenIO SDS is easy to deploy and scale thanks to the available Ansible role (https://github.com/open-io/ansible-role-openio-sds) and the OVH/SoYouStart APIs.
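To give a rough sense of what this dynamic policy means for usable capacity, here is a back-of-the-envelope sketch. The erasure-coding parameters (6 data + 3 parity) are an assumption made for illustration; the paper does not state which EC scheme was used.

```python
# Usable-capacity estimate for the 48-node cluster under the dynamic
# protection policy. EC_DATA/EC_PARITY are assumed values, not the
# benchmark's actual scheme.
NODES = 48
DATA_VOL_TB = 1.9          # XFS data volume per node (/var/lib/oio)
EC_DATA, EC_PARITY = 6, 3  # assumed erasure-coding layout
REPLICAS = 3               # three-way replication for small objects

raw_tb = NODES * DATA_VOL_TB
ec_overhead = (EC_DATA + EC_PARITY) / EC_DATA  # 1.5x bytes stored per byte written

print(f"raw capacity:              {raw_tb:.1f} TB")
print(f"usable, all erasure-coded: {raw_tb / ec_overhead:.1f} TB")
print(f"usable, all replicated:    {raw_tb / REPLICAS:.1f} TB")
```

Under this assumed EC layout, large objects cost half as much raw space as they would under three-way replication, which is why the policy switches to erasure coding above the size threshold.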

2. Methodology

For the benchmark, the platform was populated with 5 containers and 5 objects in each container. Object sizes were 128KB, 1MB, or 10MB, distributed in equal parts.

A first type of test (80% read / 20% write), which is close to a real-world use case, was performed for each object size, and we launched seven different runs at different levels of parallelism (5, 10, 20, 40, 80, 160, and 320 workers).

A second test was designed to verify the linear scalability of the solution. In this case, a 100% read run was launched against 1MB objects on three different cluster configurations: 12, 24, and 48 nodes.

Each run lasted 5 minutes (long enough to reveal any performance issues).

3. Benchmark Tool

We ran the tests using COSbench, a tool developed by Intel to test object storage solutions. It is open source, so results can easily be compared and verified. In this case, we chose to use the Swift API, but OpenIO SDS is also compatible with the S3 API.

COSbench (https://github.com/intel-cloud/cosbench) features include:
- Easy to use via a web interface or on the command line
- Exports significant metrics for comparative use
- All metrics are saved in CSV format
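When a run finishes, the exported CSV metrics can be post-processed with a few lines of Python. The file name and column labels below are assumptions; check the headers in your own run's archive before relying on them.

```python
# Sketch: print per-operation bandwidth from a COSbench CSV export.
# The file name and column names are assumptions about the export format.
import csv

with open("w1-main-worker.csv", newline="") as f:  # hypothetical export file
    for row in csv.DictReader(f):
        op = row.get("Op-Type", "?")
        bandwidth = float(row.get("Bandwidth") or 0)  # bytes per second
        print(f"{op:>8}: {bandwidth / 1e6:8.2f} MB/s")
```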

The benchmark was organized in five phases:
- init: container creation
- prepare: container population with objects
- main: the benchmark scenario (with all possible read/write/delete combinations)
- cleanup: object deletion
- dispose: container deletion

[Figure: an example of a COSbench result page]
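These phases map one-to-one onto COSbench workstages. The sketch below writes a workload definition along those lines for the 128KB, 80/20 mix, using the Swift storage driver; the container and object counts, auth settings, and endpoint are placeholder values, not the exact configuration used for this benchmark.

```python
# Write a COSbench workload file whose workstages mirror the five phases
# above (init/prepare/main/cleanup/dispose). All concrete values are
# placeholders for illustration.
WORKLOAD = """<?xml version="1.0" encoding="UTF-8"?>
<workload name="openio-arm-128k" config="">
  <storage type="swift" config="timeout=30000"/>
  <auth type="swauth"
        config="username=demo:demo;password=DEMO_PASS;auth_url=http://gateway.example.com:8080/auth/v1.0"/>
  <workflow>
    <workstage name="init">
      <work type="init" workers="1" config="containers=r(1,5)"/>
    </workstage>
    <workstage name="prepare">
      <work type="prepare" workers="1"
            config="containers=r(1,5);objects=r(1,100);sizes=c(128)KB"/>
    </workstage>
    <workstage name="main">
      <work name="mix-80-20" workers="80" runtime="300">
        <operation type="read" ratio="80" config="containers=u(1,5);objects=u(1,100)"/>
        <operation type="write" ratio="20"
                   config="containers=u(1,5);objects=u(101,200);sizes=c(128)KB"/>
      </work>
    </workstage>
    <workstage name="cleanup">
      <work type="cleanup" workers="1" config="containers=r(1,5);objects=r(1,200)"/>
    </workstage>
    <workstage name="dispose">
      <work type="dispose" workers="1" config="containers=r(1,5)"/>
    </workstage>
  </workflow>
</workload>
"""

with open("openio-arm-128k.xml", "w") as f:
    f.write(WORKLOAD)
```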

Results

1. 128KB objects

[Chart: 128KB objects, 80% read. Performance vs. number of workers (5 to 320).]
[Chart: 128KB objects, 20% write. Performance vs. number of workers (5 to 320).]

Disk and CPU metrics (on 48 nodes)

[Charts: disk and CPU utilization across the 48 nodes during the 128KB runs]

2. 1MB objects

[Chart: 1MB objects, 80% read. Performance vs. number of workers (5 to 320).]
[Chart: 1MB objects, 20% write. Performance vs. number of workers (5 to 320).]

Disk and CPU metrics (on 48 nodes)

[Charts: disk and CPU utilization across the 48 nodes during the 1MB runs]

3. 10MB objects

[Chart: 10MB objects, 80% read. Performance vs. number of workers (5 to 320).]
[Chart: 10MB objects, 20% write. Performance vs. number of workers (5 to 320).]

Disk and CPU metrics (on 48 nodes)

[Charts: disk and CPU utilization across the 48 nodes during the 10MB runs]

Cluster Scalability

To demonstrate the linear scalability of OpenIO SDS, as well as its ability to scale quickly when needed, we started with a cluster of 12 nodes and expanded it to 24, then 48 nodes. We ran three benchmarks using 80 workers configured to perform 100% read operations on 1MB objects against these three configurations.

[Chart: 1MB objects, 100% read. Bandwidth (MB/s) vs. number of nodes, at 80 workers:]
- 12 nodes: 235.15 MB/s (2.5 KIOps)
- 24 nodes: 553.45 MB/s (5 KIOps)
- 48 nodes: 1,044.48 MB/s (8.7 KIOps)

Total disk IOps

Each disk delivers between 180 and 210 IOps, which is very good for SATA disks (and is also the limit, as we reach 100% disk utilization).
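Dividing the aggregate IOps by the node count (each node carries a single disk) shows the per-disk load staying roughly flat as the cluster grows, which is what linear scaling looks like. A minimal sketch using the figures above:

```python
# Per-disk load from the scalability runs: aggregate IOps divided by the
# number of single-disk nodes should stay within the 180-210 IOps range
# quoted for the SATA drives.
results = {12: 2_500, 24: 5_000, 48: 8_700}  # nodes -> aggregate IOps

for nodes, iops in results.items():
    print(f"{nodes:2d} nodes: {iops:5d} IOps total, {iops / nodes:5.1f} IOps per disk")
# -> roughly 208, 208, and 181 IOps per disk
```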

On the 48-node cluster, we configured the number of workers to increase progressively from 20 to 40, then 80, 160, and 320. After the COSbench preparation phase, we found that the highest bandwidth was achieved with 80 workers; beyond that, the disks reached saturation and performance decreased. Maximum bandwidth was 1.02 GB/s, or 8.16 Gbps of data delivered to the client application.
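As a quick unit check on that figure (decimal units, 1 GB/s = 8 Gbps):

```python
# Peak bandwidth expressed in gigabits per second.
peak_gb_per_s = 1.02
print(f"{peak_gb_per_s * 8:.2f} Gbps")  # -> 8.16 Gbps
```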

Conclusion

The limitation of this infrastructure comes from its SATA drives, with the exception of the first benchmark run with small 128KB objects. With OpenIO SDS, SoYouStart ARM servers can achieve the full performance of the single drive each one hosts, without being limited by their ARM CPUs. Using the default configuration, cluster performance grows linearly as the number of nodes increases: the more nodes there are in a cluster, the better the performance. At peak, using a 100% read benchmark scenario with 1MB objects, we achieved 8.16 Gbps (1 GB/s) on the 48-node cluster. This is the performance that could be expected from the SATA drives, which handle both data (sequential IOps) and metadata (random IOps).

The principle of a single-drive ARM server is very appealing, as long as the server is able to deliver the full performance of its drive. It reduces the failure domain to its smallest form: one drive. Nowadays, as x86 storage boxes get larger and larger, a failed server can mean that up to 90 disks (900 TB raw) are taken offline simultaneously. By contrast, with this type of ARM node, only one drive is affected by a server failure.

It also simplifies maintenance, as drives are not easy to replace within a large x86 server; that operation requires a lot of care, considering the risk of losing the rest of the server (or swapping the wrong drive). In the case of a single-drive server, the drive is replaced along with the rest of the server components, an operation equivalent to the simple task of adding a drive (and its server) to the cluster, very much like adding capacity when needed.

Even though this test was meant to demonstrate OpenIO SDS's capabilities, it also highlights the fact that SoYouStart ARM servers are inexpensive and can form the basis of an interesting hardware infrastructure for building next-generation private cloud services. These services allow end users to maintain full control of their data while avoiding lock-in, at a reasonable price. This is a compelling solution, especially for development and testing scenarios.

Next-generation Object Storage and Serverless Computing
openio.io

FR: 2 bis avenue Antoine Pinay, Parc d'Activité des 4 Vents, 59510 Hem
US: 180 Sansome Street, Fl 4, San Francisco, CA 94104
JP: 1-35-2 Grains Bldg. #610, Nihonbashi-Kakigara-cho, Chuo-ku, Tokyo 103-0014, Japan