Decentralized Distributed Storage System for Big Data
Presenter: Wei Xie
Data-Intensive Scalable Computing Laboratory (DISCL), Computer Science Department, Texas Tech University
Outline
- Trends in big data and cloud storage
- Decentralized storage techniques
- UniStore project at Texas Tech
Big Data Storage Requirements
- Large capacity: 100s of terabytes of data and more
- Performance-intensive: demanding big data analytics applications, real-time response
- Data protection: protect 100s of terabytes of data from loss
Why Data Warehousing Fails in Big Data
- Data warehousing has been used to process very large data sets for decades
- A core component of Business Intelligence
- Not designed to handle unstructured data (emails, log files, social media, etc.)
- Not designed for real-time, fast response
Comparison
- Traditional data warehousing problem: retrieve the sales figures of a particular item in a chain of retail stores from records that exist in a database
- Big data problem: cross-reference sales of a particular item with weather conditions at the time of sale, or with various customer details, and retrieve that information quickly
Big Data Storage Trends: Scale-out Storage
- A number of compute/storage elements connected via a network
- Capacity and performance can be added incrementally
- Not limited by a single RAID controller
Big Data Storage Trends: Scale-out NAS
- NAS: network-attached storage
- Scale-out offers more flexible capacity/performance expansion (add NAS nodes instead of disks in the slots of a NAS)
- Parallel/distributed file systems (e.g., Hadoop) handle scale-out NAS
- Examples: EMC Isilon, Hitachi Data Systems, DataDirect Networks hScaler, IBM SONAS, HP X9000, and NetApp Data ONTAP
Big Data Storage Trends: Object Storage
- Flat namespace instead of the hierarchical namespace of a file system
- Objects are identified by IDs
- Better scalability and performance for very large numbers of objects
- Example: Amazon S3

Big Data Storage Trends: Hyperscale Architecture
- Mainly used at large infrastructure sites by Facebook, Google, Microsoft and Amazon
- Scale-out DAS: commodity enterprise servers directly attached with storage devices
- Redundancy: fail over an entire server instead of its components
- Hadoop runs on top of a cluster of DAS to support big data analytics
- Part of the Software-Defined Storage platform
Hyper-converged Storage
- Compute, network, storage and virtualization tightly integrated
- Buy a hardware box and get all you need
- Examples: VMware, Nutanix, Nimboxx
Scale-out Storage: Centralized vs. Decentralized
- Centralized storage cluster: metadata server, storage servers and interconnections
  - Scalability is bounded by the metadata server
  - Difficult to extend to multi-site distributed storage
  - Redundancy achieved by RAID
- Decentralized storage cluster
  - No metadata server to limit scalability
  - Multi-site, geographically distributed
  - Data replicated across servers, racks or sites
Decentralized Storage
- How to distribute data across nodes/servers/disks?
  - P2P-based protocols
  - Distributed hash tables
- Advantages
  - Incremental scalability: build a small cluster and expand in the future
  - Self-organizing
  - Redundancy
- Issues
  - Data migration upon data center expansion and failures
  - Handling heterogeneous servers
Decentralized Storage: Consistent Hashing
[Diagram: data items D1-D3 and servers 1-3 are mapped onto a hash ring with the SHA-1 function; initially server 1 holds D1, server 2 holds D2, and server 3 holds D3. After server 4 joins, it takes over D1 from server 1, while servers 2 and 3 still hold D2 and D3 and server 1 holds nothing.]
Properties of Consistent Hashing
- Balance: each server owns an equal portion of the keys
- Smoothness: to add the k-th server, only the 1/k fraction of keys located between it and its predecessor server needs to be migrated
- Fault tolerance: multiple copies of each key; if one server goes down, the next successor is found with only a small change to the cluster view, and balance still holds
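The smoothness property can be illustrated with a minimal hash ring. This is a generic sketch, not Sheepdog's implementation; the server and object names are made up:

```python
import bisect
import hashlib

def sha1_int(key: str) -> int:
    """Map a string to a point on the hash ring using SHA-1."""
    return int.from_bytes(hashlib.sha1(key.encode()).digest(), "big")

class ConsistentHashRing:
    def __init__(self, servers=()):
        self._points = []   # sorted hash points on the ring
        self._owner = {}    # hash point -> server name
        for s in servers:
            self.add_server(s)

    def add_server(self, server: str) -> None:
        point = sha1_int(server)
        bisect.insort(self._points, point)
        self._owner[point] = server

    def lookup(self, key: str) -> str:
        """A key is owned by the first server clockwise from its hash."""
        h = sha1_int(key)
        idx = bisect.bisect_right(self._points, h) % len(self._points)
        return self._owner[self._points[idx]]

# Smoothness: adding a server moves only the keys in its new arc.
ring = ConsistentHashRing(["server1", "server2", "server3"])
before = {f"obj{i}": ring.lookup(f"obj{i}") for i in range(1000)}
ring.add_server("server4")
after = {k: ring.lookup(k) for k in before}
moved = sum(before[k] != after[k] for k in before)
# Every key that moved must have moved to the new server; all others stay put.
assert all(after[k] == "server4" for k in before if before[k] != after[k])
```

Because only the arc between the new server and its predecessor changes owner, roughly 1/k of the keys migrate when the k-th server joins.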
UniStore Overview
- Goal: build a unified storage architecture (UniStore) for Cloud storage systems with the co-existence and efficient integration of heterogeneous HDDs and SCM (Storage Class Memory) devices
- Based on a decentralized, consistent-hashing-based storage system: Sheepdog
[Architecture diagram: a Characterization component profiles workloads (access patterns) and devices (bandwidth, throughput, block erasure, concurrency, wear-leveling) to guide a Placement component, which classifies I/O patterns (random/sequential, read/write, hot/cold), provides I/O functions (Write_to_SSD, Read_from_SSD, Write_to_HDD), and uses a modified consistent hashing placement algorithm.]
Background: Heterogeneous Storage
- Heterogeneous storage environments have devices with distinct throughput
  - NVMe SSD: 2000 or more MB/s
  - SATA SSD: ~500 MB/s
  - Enterprise HDD: ~150 MB/s
- Large SSDs are becoming available, but are still expensive
  - 1.2TB NVMe Intel 750 costs ~$1000
  - 1TB SATA Samsung 840 EVO costs ~$500
  - 10x or more costly than HDDs
- SSDs still co-exist with HDDs as accelerators instead of replacing them
Background: How to Use SSDs in Cloud-scale Storage
- Traditional way of using SCMs (i.e., SSDs) in cloud-scale distributed storage: as a cache layer
  - Caching/buffering generates extensive writes to the SSD, which wears out the device
  - Needs a fine-tuned caching/buffering scheme
  - Does not fully utilize the capacity of SSDs, which is growing fast
- Tiered storage: data placed on SSD or HDD servers according to requirements
  - Throughput, latency, access frequency
  - Data transferred between tiers when the requirements change
Tiered-CRUSH
- CRUSH ensures data is placed across multiple independent locations to improve data availability
- Tiered-CRUSH integrates storage tiering into the CRUSH data placement
Tiered-CRUSH
- Virtualized volumes have different access patterns
- Access frequency of objects is recorded per volume; hotter data is more likely to be placed on faster tiers
- Fair storage utilization is maintained
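One way to realize such a heat-biased, decentralized placement is weighted rendezvous hashing, where each object's per-device score is scaled by a capacity weight and a heat-dependent tier affinity. This is a hypothetical sketch of the idea, not the actual Tiered-CRUSH code; the device table and the affinity formula are made up:

```python
import hashlib
import math

# Hypothetical device table: (name, tier, capacity weight); tier 0 is fastest.
DEVICES = [
    ("nvme0", 0, 128),
    ("sata0", 1, 256), ("sata1", 1, 256),
    ("hdd0", 2, 1000), ("hdd1", 2, 1000), ("hdd2", 2, 1000),
]

def _draw(obj_id: str, dev: str) -> float:
    """Deterministic pseudo-random draw in (0, 1) per (object, device) pair."""
    h = hashlib.sha1(f"{obj_id}:{dev}".encode()).digest()
    return (int.from_bytes(h[:8], "big") + 1) / (2**64 + 2)

def place(obj_id: str, hotness: float) -> str:
    """Weighted rendezvous hashing: the device with the highest weighted score
    wins, so placement is deterministic per object and needs no central map.
    Hotter objects (hotness near 1) get extra affinity for faster tiers."""
    best, best_score = None, -math.inf
    for name, tier, weight in DEVICES:
        affinity = 1.0 + hotness * (2 - tier)      # assumed heat-to-tier bias
        score = -(weight * affinity) / math.log(_draw(obj_id, name))
        if score > best_score:
            best, best_score = name, score
    return best
```

Because every client computes the same score function, any node can locate an object without a metadata server, and capacity weights keep overall utilization roughly fair.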
Tiered-CRUSH: Evaluation
- Implemented in a benchmark tool compiled with the CRUSH library functions
- Simulation showed that data distribution uniformity can be maintained
- Simulation shows a 1.5-2x improvement in overall bandwidth in our experimental settings

Device name      | Number | Capacity (GB) | Read bandwidth (MB/s)
Samsung NVMe SSD | 1      | 128           | 2000
Samsung SATA SSD | 2      | 256           | 540
Seagate HDD      | 3      | 1000          | 156
Pattern-directed Replication
- Trace object I/O requests the first time applications execute
- Trace analysis, correlation finding and object grouping
- Reorganize objects for replication in the background
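The grouping step could, for instance, count which objects co-occur within a sliding window of the trace and merge strongly correlated pairs. This is a simplified sketch of the general technique; the project's actual analysis may differ, and the window and threshold parameters are made up:

```python
from collections import Counter
from itertools import combinations

def co_access_groups(trace, window=4, threshold=2):
    """Group objects that frequently appear in the same access window,
    so their replicas can later be reorganized to be placed together."""
    # Count co-occurrences of object pairs inside each sliding window.
    pair_counts = Counter()
    for i in range(len(trace) - window + 1):
        seen = sorted(set(trace[i:i + window]))
        pair_counts.update(combinations(seen, 2))

    # Union-find: merge objects connected by strongly correlated pairs.
    parent = {}
    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x
    for (a, b), n in pair_counts.items():
        if n >= threshold:
            parent[find(a)] = find(b)

    groups = {}
    for obj in set(trace):
        groups.setdefault(find(obj), set()).add(obj)
    return list(groups.values())
```

Objects that end up in the same group are candidates to have their replicas co-located, so a single sequential read can serve the whole access pattern.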
Version Consistent Hashing Scheme
- Build versions into the consistent hashing
- Avoid data migration when nodes are added or fail
- Maintain efficient data lookup
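The slides do not spell out the scheme, but one way versioning can avoid migration is to make each membership change create a new immutable ring version: objects record the version they were written under, and lookups consult that version, so existing data never has to move. A hypothetical sketch of that idea, not the actual design:

```python
import bisect
import hashlib

def _h(s: str) -> int:
    """Hash a string to a point on the ring (truncated SHA-1)."""
    return int.from_bytes(hashlib.sha1(s.encode()).digest()[:8], "big")

class VersionedRing:
    """Each membership change appends a new ring version. Old data is located
    via the version it was written under, so adding a node migrates nothing."""
    def __init__(self, servers):
        self.versions = [sorted((_h(s), s) for s in servers)]

    def add_server(self, server: str) -> int:
        ring = list(self.versions[-1])
        bisect.insort(ring, (_h(server), server))
        self.versions.append(ring)
        return len(self.versions) - 1     # new version number for new writes

    def locate(self, key: str, version: int) -> str:
        """Successor lookup on the ring snapshot for the given version."""
        ring = self.versions[version]
        i = bisect.bisect_right(ring, (_h(key), "")) % len(ring)
        return ring[i][1]
```

In a real system the version number would live in per-object metadata; the trade-off is that lookups need that small piece of history instead of a single global ring view.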
Conclusions
- Decentralized storage is becoming the standard in cloud storage
- The Tiered-CRUSH algorithm achieves better I/O performance and higher data availability at the same time for heterogeneous storage systems
- The version consistent hashing scheme improves the manageability of the data center
- The pattern-directed replication scheme (PRS) achieves high-performance data replication by reorganizing the placement of data replicas
Thank you! Questions? Visit: discl.cs.ttu.edu for more details