Ceph Installation and Configuration for Building Cloud Storage


Ceph Installation and Configuration for Building Cloud Storage. Ph.D. Sun Park, GIST, NetCS Lab. 2015-07-15. 1

Table of Contents
Cloud Storage Services?
Open Source Cloud Storage Software
Introducing Ceph Storage
Ceph Installation & Configuration
Automatic Ceph Installation & Configuration with Chef
Integrating Ceph with OpenStack
Cloud Storage Semi-Production: Ceph Configuration on SmartX Boxes (Type D)
Ceph References
2

Cloud Storage Services? Wikipedia: A file hosting service, cloud storage service, online file storage provider, or cyberlocker is an Internet hosting service specifically designed to host user files. 3

Open Source Cloud Storage Software. One option is a distributed object storage system for volume (QEMU VM) and container (Swift/Amazon S3) services that manages the disks and nodes intelligently; another is a scalable network filesystem. 4

Introducing Ceph Storage 5

History of Ceph. Initially created by Sage Weil for his doctoral dissertation (University of California, Santa Cruz). After his graduation in fall 2007, Weil continued to work on Ceph full-time. In 2012, Weil created Inktank Storage, and in April 2014 Red Hat purchased Inktank. Versions: Argonaut (July 3, 2012); Bobtail (v0.56, January 1, 2013); Cuttlefish (v0.61, May 7, 2013); Dumpling (v0.67, August 14, 2013); Emperor (v0.72, November 9, 2013); Firefly (v0.80, May 7, 2014); Giant (v0.87, October 29, 2014); Hammer (v0.94, April 7, 2015). 6

Why Ceph? A free software storage platform that presents object, block, and file storage from a single distributed computer cluster. The data is replicated, making it fault tolerant, and the cluster runs on commodity hardware. Ceph is designed to be both self-healing (recovery) and self-managing (rebalancing), using CRUSH (Controlled Replication Under Scalable Hashing). A high-level overview of Ceph's internal organization. 7

New Version: 2015-04-05. Ceph Storage System Architecture & Components. Ceph as a cloud storage solution. RADOS: Reliable Autonomic Distributed Object Store. 8
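
As a quick illustration of RADOS as the underlying object store, a minimal sketch using the rados CLI; the pool name data (the default pool also used in the later benchmark slides) and the object/file names are assumptions:
$ echo "hello rados" > hello.txt
$ rados -p data put hello-object hello.txt    # store the file as an object in the 'data' pool
$ rados -p data ls                            # list objects in the pool
$ rados -p data get hello-object out.txt      # read the object back
$ rados -p data rm hello-object               # remove the object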

Ceph API Architecture 9

New Version: 2015-04-05. Ceph File Storage. The Ceph filesystem (CephFS) is supported by the native Linux kernel driver and by the Ceph filesystem library (libcephfs); both use the Ceph storage cluster protocol to store user data in a reliable, distributed Ceph storage cluster. To use CephFS, at least one Ceph metadata server (MDS) must be configured on any of your cluster nodes (see the sketch below). 10
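
A minimal sketch of the two steps just described; the node name node1, the monitor address (one of the cluster nodes from the PC environment), and the admin secret file path are assumptions:
$ ceph-deploy mds create node1       # configure a metadata server on one cluster node
# mkdir -p /mnt/cephfs
# mount -t ceph 203.237.53.92:6789:/ /mnt/cephfs -o name=admin,secretfile=/etc/ceph/admin.secret   # mount CephFS via the native kernel driver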

New Version: 2015-04-05. Ceph Block Storage. RBD (Ceph Block Device) is a protocol with native support in the Linux kernel. Proprietary hypervisors such as VMware and Microsoft Hyper-V are expected to be supported very soon. OpenStack and CloudStack are fully supported through the cinder (block) and glance (imaging) components, allowing thousands of VMs to be provisioned in very little time. 11
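
A minimal sketch of creating and mapping an RBD image with the rbd CLI; the pool name rbd, the image name, and the 1 GB size are assumptions:
$ rbd create test-image --size 1024 --pool rbd   # create a 1 GB image in the 'rbd' pool (size in MB)
$ rbd ls rbd                                     # list images in the pool
# rbd map test-image --pool rbd                  # map it as a kernel block device
# mkfs.ext4 /dev/rbd/rbd/test-image              # format and use it like any other block device
# rbd unmap /dev/rbd/rbd/test-image              # unmap when done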

New Version: 2015-04-05. Ceph Object Storage. Ceph provides an object storage interface via its object gateway (RADOS gateway: radosgw). The RADOS gateway uses librgw (the RADOS gateway library) and librados, allowing applications to establish a connection with the Ceph object store. It is one of the most stable multi-tenant object storage solutions, accessible via a RESTful API. 12
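
A small sketch of creating a gateway user for the RESTful (S3-style) API once radosgw is running; the uid and display name are assumptions:
# radosgw-admin user create --uid=testuser --display-name="Test User"   # prints the S3 access and secret keys
# radosgw-admin user info --uid=testuser                                # show the user's keys again later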

Ceph Installation & Configuration 13

Ceph Installation & Configuration on PCs Environment
Topology (1G network, node IPs 203.237.53.91-95): [admin node] ceph-deploy, mon, osd.2, metadata server; [ceph node #1] mon, osd.3, osd.4; [ceph node #2] mon, osd.0, osd.5; [ceph node #3] mon, osd.1, osd.6; [client PC].
Installation procedure (Start -> End; see the ceph-deploy sketch below):
1. Add the Ceph repository
2. Install ceph-deploy on the master node
3. Create a ceph account on every node, including the master node
4. Allow the ceph account to use root privileges
5. On the master (admin) node, generate an SSH key for passwordless login to the other nodes, and copy the SSH key to each node
6. Edit ~/.ssh/config on the master node
7. As the ceph account, create a Ceph cluster directory on the master node
8. Create the deployment on the master node
9. Install Ceph on each node
10. Install the Ceph monitors
11. Create/add/activate OSDs on each node
12. Distribute the master node's configuration file to each node
13. Add permissions on the keyring file
14. Install the Ceph MDS
Then: 1. configure the file system; 2. configure the block device; 3. configure object storage.
Hardware (CPU / RAM / disk / OS / software):
Ceph Admin: i7 / 16G / SSD 120GB*1, HDD 1TB*1 / Ubuntu Server 14.04 LTS / ceph-deploy, ceph 0.80.1
Node #1:    i3 / 16G / SSD 120GB*1, HDD 1TB*2 / Ubuntu Server 14.04 LTS / ceph 0.80.1
Node #2:    i3 / 16G / SSD 120GB*1, HDD 1TB*2 / Ubuntu Server 14.04 LTS / ceph 0.80.1
Node #3:    i3 / 16G / SSD 120GB*1, HDD 1TB*2 / Ubuntu Server 14.04 LTS / ceph 0.80.1
Client PC:  i7 / 8G  / SSD 128G / Ubuntu Desktop 14.04 LTS / ceph 0.80.1
14
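
A minimal ceph-deploy command sketch for steps 8-14 above; the hostnames (admin, node1, node2, node3), the disk device /dev/sdb, and the cluster directory name are assumptions, not values taken from this deployment:
$ mkdir my-cluster && cd my-cluster               # step 7: cluster directory on the master node
$ ceph-deploy new admin node1 node2 node3         # step 8: create the cluster definition (ceph.conf, keyrings)
$ ceph-deploy install admin node1 node2 node3     # step 9: install Ceph packages on each node
$ ceph-deploy mon create-initial                  # step 10: deploy the initial monitors
$ ceph-deploy osd prepare node1:/dev/sdb          # step 11: prepare an OSD (repeat per disk and node)
$ ceph-deploy osd activate node1:/dev/sdb1        # step 11: activate the prepared OSD
$ ceph-deploy admin admin node1 node2 node3       # step 12: push ceph.conf and the admin keyring to each node
$ sudo chmod +r /etc/ceph/ceph.client.admin.keyring   # step 13: keyring file permission
$ ceph-deploy mds create admin                    # step 14: deploy a metadata server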

New Version: 2015-05-04. Ceph p+vbox Cluster Management Configuration. Diagram: an IBM M4 provisioning center node and a coordinator box connected to the Internet; pboxes (physical boxes) host osd1-osd4 and vboxes (virtual boxes) host vosd5-vosd8, together forming the Ceph p+vbox cluster. Network legend: Power/Management (P/M), Control (C), Data (D). Ceph: H/W Recommendations - Networks. 15

New Version: 2015-04-09. Screenshots: ceph osd tree output showing the OSD nodes in the pbox and in the vbox, and the Ceph Dashboard. 16

Rados Bench Testing
Two configurations of the PC cluster were compared: a public network only (1G) and a public network (1G) plus a separate cluster network (1G). Commands:
# rados bench -p data 10 write --no-cleanup
# rados bench -p data 10 seq
# rados bench -p data 10 rand
Summary of results (bandwidth, MB/s):
Network                    | write  | seq read | rand read
Public (1G)                | 53.447 | 151.694  | 146.078
Public (1G) + Cluster (1G) | 68.371 | 164.172  | 162.226
Write average latency: 1.19 s (public only) vs 0.91 s (public + cluster).
17

Automatic Ceph Installation & Configuration with Chef 18

New Version: 2015-05-04. Ceph pbox Cluster with Chef. Diagram: four pboxes (physical boxes) running Ubuntu 14.04 LTS, connected to the Internet through a Netgear GS608 V3 8-port switch and a ZyXEL 16-port switch. Node01-Node03 are Chef client nodes (node1-node3); Node04 is the Chef server & workstation. Network legend: Power/Management (P/M), Control (C), Data (D). 19

Open Source Chef Server Installation
Chef version 11.1.6-1, installed on Ubuntu 14.04; Nginx: manual configuration for the certificate.
Chef Client Node Creation:
$ sudo knife bootstrap node01 -u chef -P chef --sudo
$ sudo knife bootstrap node02 -u chef -P chef --sudo
$ sudo knife bootstrap node03 -u chef -P chef --sudo
20
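
A small sketch, assuming the bootstraps succeeded, for verifying that the three nodes registered with the Chef server:
$ knife node list          # should list node01, node02, node03
$ knife client list        # each bootstrapped node also gets an API client
$ knife node show node01   # inspect a node's platform, run_list and attributes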

21

Ceph Installation with Chef
Upload the Ceph cookbook to the Chef server:
$ knife cookbook upload ceph
Ceph environment creation & configuration (secrets, monitor, OSD, IP, ...), then edit the Ceph environment file (see the sketch after this slide):
$ knife environment create Ceph
Uploading roles:
$ knife role from file ceph-mds.json
$ knife role from file ceph-mon.json
$ knife role from file ceph-osd.json
$ knife role from file ceph-radosgw.json
Assigning roles to nodes:
$ knife node run_list add node01 'role[ceph-mon],role[ceph-osd]'
$ knife node run_list add node02 'role[ceph-mon],role[ceph-osd]'
$ knife node run_list add node03 'role[ceph-mon],role[ceph-osd]'
Ceph node environment configuration (fsid, monitor-secret, osd_devices, ...):
$ knife node edit node01
$ knife node edit node02
$ knife node edit node03
Running chef-client:
$ sudo knife ssh name:node01 -x root chef-client
22
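
A hedged sketch of what the Ceph environment file might look like when edited; the exact attribute nesting depends on the ceph cookbook version, and the fsid, secret, network and device values below are placeholders rather than the ones used in this deployment (only the attribute names fsid, monitor-secret and osd_devices come from the slide):
{
  "name": "Ceph",
  "default_attributes": {
    "ceph": {
      "config": {
        "fsid": "00000000-0000-0000-0000-000000000000",
        "global": {
          "public network": "203.237.53.0/24"
        }
      },
      "monitor-secret": "AQxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx==",
      "osd_devices": [
        { "device": "/dev/sdb" },
        { "device": "/dev/sdc" }
      ]
    }
  }
}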

Screenshots: Ceph environment creation & configuration, Ceph installation with Chef, and Ceph node environment configuration. 23

Ceph-cookbook updated libraries. Issue: the installation freezes at the OSD bootstrap step. 24

Integrating Ceph with OpenStack 25

PCs Environment for Integrating Ceph with OpenStack
Topology (1G network, node IPs 203.237.53.91-95): [OpenStack node] RabbitMQ/MySQL, Keystone, Glance, Nova, Neutron, Cinder, Horizon; [Ceph cluster nodes] ceph node #1 (mon, osd.3, osd.4), ceph node #2 (mon, osd.0, osd.5), ceph node #3 (mon, osd.1, osd.6).
Hardware (CPU / RAM / disk / OS / platform):
OpenStack:    i7 / 16G / SSD 120GB*1, HDD 1TB*1 / Ubuntu Server 14.04 LTS / OpenStack Kilo
Node #1:      i3 / 16G / SSD 120GB*1, HDD 1TB*2 / Ubuntu Server 14.04 LTS / ceph 0.80.1
Node #2:      i3 / 16G / SSD 120GB*1, HDD 1TB*2 / Ubuntu Server 14.04 LTS / ceph 0.80.1
Node #3:      i3 / 16G / SSD 120GB*1, HDD 1TB*2 / Ubuntu Server 14.04 LTS / ceph 0.80.1
Ceph Cluster: i7 / 8G  / SSD 128G / Ubuntu Desktop 14.04 LTS / ceph 0.80.1
26

OpenStack Network Design using Neutron 27

Manual OpenStack Kilo Installation with Ceph 28

Integrating Ceph with OpenStack Services: Ceph with Nova, Ceph with Glance, Ceph with Cinder (see the configuration sketch below). 29
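
A minimal sketch of the usual RBD wiring for these three services on Kilo; the pool names (images, volumes, vms), the client.glance/client.cinder user names, and the libvirt secret UUID are assumptions, not values taken from this deployment.
On a Ceph monitor node, create the pools and client keys:
# ceph osd pool create images 128
# ceph osd pool create volumes 128
# ceph osd pool create vms 128
# ceph auth get-or-create client.glance mon 'allow r' osd 'allow rwx pool=images'
# ceph auth get-or-create client.cinder mon 'allow r' osd 'allow rwx pool=volumes, allow rwx pool=vms, allow rx pool=images'
In /etc/glance/glance-api.conf:
[glance_store]
default_store = rbd
stores = rbd
rbd_store_pool = images
rbd_store_user = glance
rbd_store_ceph_conf = /etc/ceph/ceph.conf
In /etc/cinder/cinder.conf:
[DEFAULT]
enabled_backends = ceph
[ceph]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = volumes
rbd_user = cinder
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_secret_uuid = <libvirt secret uuid>
In /etc/nova/nova.conf:
[libvirt]
images_type = rbd
images_rbd_pool = vms
images_rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = cinder
rbd_secret_uuid = <libvirt secret uuid>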

Cloud Storage Semi-Production: Ceph Configuration on SmartX Boxes (Type D) 30

Ceph Installation & Configuration on SmartX Boxes (Type D)
GIST Availability Zone 3 (Type D): SmartX Box M11-M16, 24TB (76TB); VLAN ID 601 = Power/Management, VLAN ID 602 = Control, VLAN ID 603 = Data (P/M/C/D).
Ceph cluster on four boxes, all running Ubuntu 14.04:
sm-box-m3: ceph-deploy, mon, MDS (metadata server), osd0 & osd1 (6TB) - SSD (512GB or 1TB) * 1, HDD (3TB) * 4
sm-box-m4: mon, osd2 & osd3 (6TB) - SSD (512GB or 1TB) * 1, HDD (3TB) * 4
sm-box-m5: mon, osd4 & osd5 (6TB) - SSD (512GB or 1TB) * 1, HDD (3TB) * 4
sm-box-m6: osd6 & osd7 (6TB) - SSD (512GB or 1TB) * 1, HDD (3TB) * 4
Use cases: mounting CephFS with the ceph-fuse tools; integrating Ceph with OpenStack Nova & Glance & Cinder & Swift.
Performance tuning: using an SSD for the journal partition; network separation: client network (D) & cluster network (D) (see the sketch below).
31
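
A minimal sketch of how the two tuning items above are commonly expressed; the subnets and device names are placeholders, not the actual SmartX Box addressing:
In /etc/ceph/ceph.conf on every node:
[global]
# client-facing (D) network - placeholder subnet
public network = 10.10.1.0/24
# replication/recovery (D) network - placeholder subnet
cluster network = 10.10.2.0/24
Placing the OSD journal on an SSD partition with ceph-deploy (host:data-disk:journal-device):
$ ceph-deploy osd prepare sm-box-m3:/dev/sdb:/dev/sda5
$ ceph-deploy osd activate sm-box-m3:/dev/sdb1:/dev/sda5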

Use Case: CephFS by ceph-fuse
Ceph OSD tree (# ceph osd tree):
- sbox m3: osd.0 (3TB) / osd.1 (3TB)
- sbox m4: osd.2 (3TB) / osd.3 (3TB)
- sbox m5: osd.4 (3TB) / osd.5 (3TB)
- sbox m6: osd.6 (3TB) / osd.7 (3TB)
# ceph-deploy mds install sm-box-m3
# apt-get install ceph-fuse
# ceph-fuse -m 210.114.90.213:6789 /mnt/cephfs
Client 2000 MB saving test (see the sketch below).
32
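
A small sketch of how such a 2000 MB saving test could be run from the client once the mount succeeds; the test file name and the use of dd are assumptions about the test, not taken from the slide:
# mkdir -p /mnt/cephfs
# ceph-fuse -m 210.114.90.213:6789 /mnt/cephfs
# dd if=/dev/zero of=/mnt/cephfs/test2000mb bs=1M count=2000 conv=fdatasync   # write 2000 MB, flush, and report throughput
# ls -lh /mnt/cephfs/test2000mb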

Ceph Benchmarking using RADOS bench
Commands:
# rados bench -p data 10 write --no-cleanup
# rados bench -p data 10 seq
# rados bench -p data 10 rand
Public network (1G) results: write 96.858 MB/s (258 writes of 4 MB in 10.65 s), seq read 111.669 MB/s (9.24 s), rand read 111.74 MB/s (293 reads in 10.49 s).
The slide's chart also compares the pub(1g), pub(10g), pub(10g)+cluster(10g/jp), and pub(10g/jp)+cluster(10g/jp) configurations for write, seq read, and rand read bandwidth (MB/s).
33

Ceph Cache Tiering for Performance Tuning (Ceph Firefly release). Creating a pool; CRUSH map: decompile & compile. Creating a cache tier: an EC-pool (erasure-coded) plus a cache-pool. Configuring the cache tier: setting the cache policies (see the sketch below). 34
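
A minimal sketch of the Firefly-era commands behind these steps; the pool names (ec-pool, cache-pool), PG counts, and policy values are assumptions:
$ ceph osd getcrushmap -o crushmap.bin           # dump the CRUSH map
$ crushtool -d crushmap.bin -o crushmap.txt      # decompile, edit as needed, then recompile
$ crushtool -c crushmap.txt -o crushmap.new
$ ceph osd setcrushmap -i crushmap.new
$ ceph osd pool create ec-pool 128 128 erasure   # erasure-coded base pool
$ ceph osd pool create cache-pool 128            # replicated cache pool (ideally on SSD OSDs)
$ ceph osd tier add ec-pool cache-pool           # attach the cache tier to the base pool
$ ceph osd tier cache-mode cache-pool writeback
$ ceph osd tier set-overlay ec-pool cache-pool   # route client I/O through the cache tier
$ ceph osd pool set cache-pool hit_set_type bloom            # example cache policies
$ ceph osd pool set cache-pool target_max_bytes 1000000000000
$ ceph osd pool set cache-pool cache_target_dirty_ratio 0.4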

Ceph References
- http://ceph.com/docs/master/
- Googling: Ceph installation guide blogs
- Ceph book by Karan Singh, published February 25, 2015
35