RED HAT CEPH STORAGE ROADMAP
Cesar Pinto, Account Manager, Red Hat Norway
cpinto@redhat.com
THE RED HAT STORAGE MISSION
To offer a unified, open, software-defined storage portfolio that delivers a range of data services for next-generation workloads, thereby accelerating the transition to modern IT infrastructures.
THE RED HAT STORAGE PORTFOLIO
TODAY'S PORTFOLIO: OPTIMIZED POINT SOLUTIONS
[Diagram: proprietary software vs. open source software stacks — Ceph management, Gluster management, Ceph data services, Gluster data services]
- Share-nothing, scale-out architecture provides durability and adapts to changing demands
- Self-managing and self-healing features reduce operational overhead
- Standards-based interfaces and full APIs ease integration with applications and systems
- Supported by the experts at Red Hat
OVERVIEW: RED HAT CEPH STORAGE
Powerful distributed storage for the cloud and beyond
- Built from the ground up as a next-generation storage system, based on years of research and suitable for powering infrastructure platforms
- Highly tunable, extensible, and configurable
- Offers mature interfaces for block and object storage for the enterprise

TARGET USE CASES
- OpenStack: Cinder, Glance, and Nova; object storage for tenant apps
- Object storage for applications: S3-compatible API

Customer highlight: Cisco uses Red Hat Ceph Storage to deliver storage for next-generation cloud services.
RED HAT STORAGE FUTURE WORKLOADS
USE CASES: TODAY AND FUTURE (ANALYTICS)
CURRENT USE CASES
- Big Data analytics: storage plug-in for Hortonworks Data Platform
- Machine data analytics: online cold storage for IT operations data with Splunk
TARGET USE CASES
- Big Data analytics: persistent back end for Spark
- Machine data analytics: storage for ELK, Solr
USE CASES: TODAY AND FUTURE (OPENSTACK)
CURRENT USE CASES
- Virtual machine storage: VM volume storage with Cinder, Nova, and Glance
- Object storage for tenant applications: Swift-compatible storage for cloud applications
TARGET USE CASES
- Database storage: storage for relational databases with Trove
- Storage back end for Manila: shared file-system-as-a-service for tenants
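As a sketch of how the Cinder/Glance integration above is typically wired up on the Ceph side, an administrator creates pools for volumes and images and a keyring the OpenStack services can use. Pool names, PG counts, and the client ID below are illustrative assumptions, not mandated by this roadmap:

```shell
# Pools backing Cinder volumes and Glance images (names are examples)
ceph osd pool create volumes 128
ceph osd pool create images 128

# Keyring for the Cinder service: read/write on volumes, read on images
ceph auth get-or-create client.cinder mon 'allow r' \
    osd 'allow rwx pool=volumes, allow rx pool=images'

# cinder.conf would then point at the RBD driver, e.g.:
#   volume_driver = cinder.volume.drivers.rbd.RBDDriver
#   rbd_pool = volumes
```

These commands run against a live cluster, so they are shown as a configuration fragment rather than a runnable script.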
USE CASES: TODAY AND FUTURE (ENTERPRISE SHARING)
CURRENT USE CASES
- Scale-out file store: storage for active archives, media streaming, content repositories, VM images, and general-purpose file shares
TARGET USE CASES
- Enterprise file sync and share: storage for Dropbox-style enterprise shared folders
- Compliant archives: scalable, cost-effective storage for compliance and regulatory needs
- File services for containers: file storage services for containers and pods
USE CASES: TODAY AND FUTURE (CLOUD STORAGE)
CURRENT USE CASES
- S3-based object storage for apps: cost-effective, S3-compatible, on-premise object store
TARGET USE CASES
- Enterprise sync and share: storage for shared folders (object backend)
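To illustrate the S3-compatible object store use case above: a gateway user is created with radosgw-admin, after which any standard S3 client can talk to the endpoint. The hostname, user ID, and bucket name below are placeholders:

```shell
# Create a Ceph Object Gateway user (keys are printed on creation)
radosgw-admin user create --uid=appuser --display-name="App User"

# Use any S3 client against the gateway endpoint, e.g. s3cmd
s3cmd --host=rgw.example.com --host-bucket="%(bucket)s.rgw.example.com" \
    mb s3://app-data
s3cmd --host=rgw.example.com put report.csv s3://app-data/
```

This is a sketch assuming a reachable gateway at rgw.example.com; it cannot run without a deployed cluster.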
USE CASES: TODAY AND FUTURE (ENTERPRISE VIRTUALIZATION)
CURRENT USE CASES
- Conventional virtualization storage: integrated storage for Red Hat Enterprise Virtualization (with separate compute and storage clusters)
TARGET USE CASES
- Hyper-converged architectures
RED HAT CEPH STORAGE ROADMAP DETAIL
ROADMAP: RED HAT CEPH STORAGE

TODAY (v1.2, Ceph Firefly)
- Erasure coding
- Cache tiering
- RADOS read-affinity
- Off-line installer
- GUI management
- OBJECT: user and bucket quotas

6-9 MONTHS (Stockwell, Ceph Hammer)
- Foreman/Puppet installer
- CLI :: Calamari API parity
- Multi-user and multi-cluster
- OSD with SSD optimization
- More robust rebalancing
- Improved repair process
- Local and pyramid erasure codes
- BLOCK: improved read IOPS, faster booting from clones
- OBJECT: S3 object versioning, bucket sharding

FUTURE (Tufnell and beyond)
- New UI, alerting
- Performance consistency, guided repair, new backing store
- BLOCK: iSCSI, mirroring
- OBJECT: NFS, active/active multi-site
DETAIL: RED HAT CEPH STORAGE V1.2
These features were introduced in the most recent release of Red Hat Ceph Storage and are now supported by Red Hat.
- Off-line installer: All required dependencies are now included within a local package repository, allowing deployment to non-internet-connected storage nodes.
- GUI management: Administrators can now perform basic cluster administration tasks through Calamari, the Ceph visual interface.
- Erasure coding: Erasure-coded storage back ends are now available, providing durability with lower capacity requirements than traditional, replicated back ends.
- Cache tiering: A cache tier pool can now be designated as a writeback or read cache for an underlying storage pool in order to provide cost-effective performance.
- RADOS read-affinity: Clients can be configured to read objects from the closest replica, increasing performance and reducing network strain.
- User and bucket quotas (OBJECT): The Ceph Object Gateway now supports and enforces quotas for users and buckets.
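The erasure coding, cache tiering, and quota features above can be sketched with the standard Ceph CLI. Pool names, PG counts, the user ID, and the quota size are illustrative assumptions:

```shell
# Erasure-coded pool with a replicated pool acting as a writeback
# cache tier in front of it
ceph osd pool create ecpool 128 128 erasure
ceph osd pool create cachepool 128
ceph osd tier add ecpool cachepool
ceph osd tier cache-mode cachepool writeback
ceph osd tier set-overlay ecpool cachepool

# Per-user quota on the Ceph Object Gateway (1 GB example)
radosgw-admin quota set --quota-scope=user --uid=exampleuser --max-size-kb=1048576
radosgw-admin quota enable --quota-scope=user --uid=exampleuser
```

Shown as a configuration fragment; the commands require a running cluster and exact flags can vary between releases.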
DETAIL: RED HAT CEPH STORAGE STOCKWELL
These projects are currently active in the Ceph development community. They may be available and supported by Red Hat once they reach the necessary level of maturity.
- Foreman/Puppet installer: Support for deployment of new Ceph clusters using Foreman and provided Puppet modules.
- CLI :: Calamari API parity: Improvements to the Calamari API and command-line interface that enable administrators to perform the same set of operations through each.
- Multi-user and multi-cluster: Support in the Calamari interface for multiple administrator accounts and multiple deployed clusters.
- OSD with SSD optimization: Performance improvements for both read and write operations, especially applicable to configurations that include all-flash cache tiers.
- More robust rebalancing: Improved rebalancing that prioritizes repair of degraded data over rebalancing of sub-optimally placed data, plus optimized data placement and improved utilization reporting and management that deliver better distribution of data.
- Local/pyramid erasure codes: Inclusion of locally stored parity (within a rack or data center) that reduces the network bandwidth required to repair degraded data.
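For the local/pyramid erasure codes above, the upstream LRC erasure-code plugin expresses local parity through an extra locality parameter. The profile name and parameter values below are illustrative assumptions:

```shell
# Locally repairable code: k=4 data chunks, m=2 global parity chunks,
# l=3 adds one local parity chunk per group of three chunks, so most
# single-disk repairs read only from the local group
ceph osd erasure-code-profile set lrcprofile plugin=lrc k=4 m=2 l=3
ceph osd pool create lrcpool 128 128 erasure lrcprofile
```

A configuration sketch only; profile parameters depend on the plugin version that eventually ships.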
DETAIL: RED HAT CEPH STORAGE STOCKWELL
These projects are currently active in the Ceph development community. They may be available and supported by Red Hat once they reach the necessary level of maturity.
- Improved read IOPS (BLOCK): Introduction of allocation hints, which reduce file system fragmentation over time and ensure IOPS performance throughout the life of a block volume.
- Faster booting from clones (BLOCK): Addition of copy-on-read functionality to improve initial and subsequent read performance for cloned volumes.
- S3 object versioning (OBJECT): Versioning of objects that helps users avoid unintended overwrites and deletions and allows them to archive objects and retrieve previous versions.
- Bucket sharding (OBJECT): Sharding of buckets in the Ceph Object Gateway to improve metadata operations on buckets with a large number of objects.
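The cloning and S3 versioning features above can be sketched as follows; image, pool, and bucket names are illustrative, and the copy-on-read option is shown as the upstream ceph.conf setting:

```shell
# Cloning workflow: snapshot a golden image, protect it, clone it
rbd snap create rbd/golden-image@base
rbd snap protect rbd/golden-image@base
rbd clone rbd/golden-image@base rbd/vm-disk-01
# Upstream client option enabling copy-on-read for clones (ceph.conf):
#   rbd_clone_copy_on_read = true

# Enable S3 object versioning on a bucket via the S3 API, here with the
# AWS CLI pointed at a gateway endpoint (placeholder hostname)
aws s3api put-bucket-versioning --bucket app-data \
    --versioning-configuration Status=Enabled \
    --endpoint-url http://rgw.example.com
```

Again a configuration fragment: both halves assume a reachable cluster and gateway.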
DETAIL: RED HAT CEPH STORAGE TUFNELL
These projects are currently active in the Ceph development community. They may be available and supported by Red Hat once they reach the necessary level of maturity.
- New UI: A new user interface with improved sorting and visibility of critical data.
- Alerting: Introduction of alerting features that notify administrators of critical issues via email or SMS.
- Performance consistency: More intelligent scrubbing policies and improved peering logic to reduce the impact of common operations on overall cluster performance.
- Guided repair: More information about objects will be provided to help administrators perform repair operations on corrupted data.
- New backing store (Tech Preview): New back end for OSDs to provide performance benefits on existing and modern drives (SSD, K/V).
DETAIL: RED HAT CEPH STORAGE TUFNELL
These projects are currently active in the Ceph development community. They may be available and supported by Red Hat once they reach the necessary level of maturity.
- iSCSI (BLOCK): Introduction of a highly available iSCSI interface for the Ceph Block Device, allowing integration with legacy systems.
- Mirroring (BLOCK): Capabilities for managing virtual block devices in multiple regions, maintaining consistency through automated mirroring of incremental changes.
- NFS (OBJECT): Access to objects stored in the Ceph Object Gateway via standard Network File System (NFS) endpoints, providing storage for legacy systems and applications.
- Active/active multi-site (OBJECT): Support for deployment of the Ceph Object Gateway across multiple sites in an active/active configuration (in addition to the currently available active/passive configuration).
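For the block mirroring item above, the upstream rbd-mirror work exposes mirroring through the rbd tool. Since this feature is still in development, the commands below are an assumption about the eventual interface, with placeholder pool, image, and cluster names:

```shell
# Enable mirroring for every image in a pool, then register the
# remote peer cluster that the rbd-mirror daemon replicates to
rbd mirror pool enable rbd pool
rbd mirror pool peer add rbd client.remote@backup-cluster

# Alternatively, per-image mirroring when the pool mode is "image"
rbd mirror image enable rbd/vm-disk-01
```

A sketch of a not-yet-released workflow, not a supported procedure; syntax may change before the feature ships.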
THANK YOU