
RED HAT GLUSTER STORAGE TECHNICAL PRESENTATION MARCEL HERGAARDEN Sr. Solution Architect Benelux APRIL, 2017

INTRODUCTION

RED HAT GLUSTER STORAGE
Open source, software-defined storage for unstructured file data at petabyte scale.
Typical workloads: media and video, machine and log data, geospatial data, persistent storage, documents.

RED HAT GLUSTER STORAGE ADVANTAGES
OPEN: Open, software-defined distributed file and object storage system. Based on the GlusterFS open source community project. Uses a proven local file system (XFS); data is stored in native format.
SCALABLE: No metadata server. Uses an elastic hashing algorithm for data placement and the local filesystem's xattrs to store metadata. Shared-nothing scale-out architecture.
ACCESSIBLE: Multi-protocol access to the same data through a global namespace: NFS, SMB, object (Swift + S3), and the Gluster native protocol. POSIX compliant.
MODULAR: No kernel dependencies. GlusterFS is based on Filesystem in Userspace (FUSE); its modular, stackable architecture allows easy addition of features without being tied to any kernel version.
ALWAYS-ON: High availability across data, systems, and applications. Synchronous replication with self-healing for server failure; asynchronous geo-replication for site failure.

TERMINOLOGY
Brick: basic unit of storage, realized as a mount point
Subvolume: a brick after being processed by at least one translator
Volume: logical collection of bricks / subvolumes
glusterd: process that manages the glusterfsd processes on a server
glusterfsd: process that manages one specific brick
GFID: 128-bit identifier of a file in GlusterFS
(Trusted) Storage Pool: group of GlusterFS servers that know and trust each other

GLUSTER ARCHITECTURE
Distributed scale-out storage using industry-standard hardware: servers with local disks are aggregated into one cohesive unit and presented using common protocols (NFS, CIFS, FUSE native client, object).

WHAT IS A SYSTEM?
A system (a server with CPU and memory) can be physical, virtual, or cloud-based.

ANATOMY OF A SYSTEM
RHEL and Gluster make disk resources clustered and available as bricks using proven technology such as LVM and XFS: local disks or RAID LUNs are aggregated with LVM2 and formatted with XFS to become bricks.
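
As an illustration of the layers above, a minimal sketch of how a brick might be prepared on RHEL; the device, volume group, pool, and mount-point names are hypothetical, and sizes/options for a real deployment should follow the RHGS administration guide:
# pvcreate /dev/sdb                                  (register the local disk or RAID LUN with LVM)
# vgcreate vg_bricks /dev/sdb
# lvcreate -L 500G -T vg_bricks/brickpool            (thin pool; thin provisioning is needed later for snapshots)
# lvcreate -V 500G -T vg_bricks/brickpool -n brick1
# mkfs.xfs -i size=512 /dev/vg_bricks/brick1         (XFS with 512-byte inodes, recommended for Gluster xattrs)
# mkdir -p /bricks/brick1 && mount /dev/vg_bricks/brick1 /bricks/brick1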

GLUSTER VOLUME
Bricks coming from multiple hosts become one addressable unit, managed by Gluster: data is load-balanced across bricks, with high availability as needed, and presented to clients as a single volume.

RED HAT GLUSTER STORAGE ARCHITECTURE
(Diagram: a Red Hat Gluster Storage pool of servers, each with its own disks and volume manager, on premise or in the cloud; administrators manage the pool through the Red Hat Gluster Storage CLI, while users access data over NFS, CIFS, FUSE, and object (Swift & S3).)

TIERING
Multiple volumes with different storage profiles: an NVMe/SSD-based fast storage tier is placed in front of the distributed and replicated subvolumes that hold the bulk of the data.

DATA PLACEMENT

BASIC COMPONENT CONCEPTS
Server/node: contains the bricks
Brick: the basic unit of storage
Volume: a namespace presented as a POSIX mount point

BRICKS
A brick is the combination of a node and a file system (hostname:/dir), for example server1:/export1.
Each brick inherits the limits of the underlying file system (XFS).
Red Hat Gluster Storage operates at the brick level, not the node level.
Ideally, each brick in a volume should be the same size.

ELASTIC HASH ALGORITHM
No central metadata server: no single point of failure; suitable for unstructured data storage.
Elastic hashing: files are assigned to virtual volumes, virtual volumes are assigned to multiple bricks, and volumes are easily reassigned on the fly.
Location is hashed on the filename: no performance bottleneck and no central-lookup risk scenarios.

GLUSTER VOLUMES
A volume is a set of bricks (>1) exported by Red Hat Gluster Storage.
Volumes have administrator-assigned names (for example, scratchspace or homeshare).
A brick can be a member of only one volume.
Data in different volumes physically exists on different bricks.
Volumes can be mounted on clients.
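
To make the brick and volume concepts concrete, a minimal sketch of forming a trusted storage pool and exporting a volume; the server names, brick paths, and volume name are hypothetical:
# gluster peer probe server2                    (run on server1; builds the trusted storage pool)
# gluster peer probe server3
# gluster volume create scratchspace server1:/export1 server2:/export6 server3:/export11
# gluster volume start scratchspace
# gluster volume info scratchspace              (shows the volume type, bricks, and options)
On a client:
# mount -t glusterfs server1:/scratchspace /mnt/scratchspace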

DATA PLACEMENT STRATEGIES
Volume type: characteristics
Distributed: distributes files across bricks in the volume. Used where the requirement is to scale storage and redundancy is either not important or is provided by other hardware or software layers.
Replicated: replicates files across bricks in the volume. Used in environments where high availability and high reliability are critical; protection is provided in software.
Distributed-Replicated: distributes files across replicated sets of bricks. Offers improved read performance in most environments. Used in environments where high reliability and scalability are critical.
Erasure Coded (sharded volume type): protection without dual or triple replication. An economical alternative, very suitable for archive-like workload types.
Tiered (NVMe/SSD volume tier): performance enhancement for workloads with frequently requested files and small files.

DEFAULT DATA PLACEMENT: Distributed Volume
Files are uniformly distributed across all bricks in the namespace (for example, across server1:/exp1 and server2:/exp2); each file is stored in full on exactly one brick.

DEFAULT DATA PLACEMENT: Replicated Volume
Every file is written to every brick in the volume: FILE 1 and FILE 2 exist on both server1:/exp1 and server2:/exp2.

FAULT-TOLERANT DATA PLACEMENT: Distributed-Replicated Volume
Creates a fault-tolerant distributed volume by mirroring the same file across two bricks: FILE 1 is replicated on server1:/exp1 and server2:/exp2 (Replicated Vol 0), while FILE 2 is replicated on server3:/exp3 and server4:/exp4 (Replicated Vol 1).
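
A minimal sketch of creating the layout shown above; the hostnames, brick paths, and volume name are hypothetical, and brick order matters because consecutive bricks form a replica pair:
# gluster volume create distrepvol replica 2 \
    server1:/exp1 server2:/exp2 server3:/exp3 server4:/exp4
# gluster volume start distrepvol
(With four bricks and replica 2, server1/server2 form Replicated Vol 0 and server3/server4 form Replicated Vol 1; files are then distributed across the two replica sets.)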

ERASURE CODING: Storing more data with less hardware
Reconstructs corrupted or lost data from the remaining fragments.
Eliminates the need for RAID.
Consumes far less space than replication.
Appropriate for capacity-optimized use cases.
(A file is split into data fragments plus redundancy fragments, e.g. 1 2 3 4 + x y, spread across the bricks of the erasure-coded volume.)
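
A minimal sketch of creating a dispersed (erasure-coded) volume matching the 4+2 layout in the diagram; hostnames, paths, and the volume name are hypothetical:
# gluster volume create ecvol disperse 6 redundancy 2 \
    server1:/bricks/ec1 server2:/bricks/ec1 server3:/bricks/ec1 \
    server4:/bricks/ec1 server5:/bricks/ec1 server6:/bricks/ec1
# gluster volume start ecvol
(Each file is encoded into 4 data fragments plus 2 redundancy fragments, so any two bricks can be lost without data loss.)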

TIERING: Cost-effective flash acceleration
Automated promotion and demotion of data between the hot subvolume (faster flash) and the cold subvolume (slower spinning disk), based on frequency of access.
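
A minimal sketch of attaching a hot tier to an existing volume, as supported by the tiering feature in RHGS 3.x; the hostnames, SSD brick paths, and volume name are hypothetical:
# gluster volume tier coldvol attach replica 2 \
    server1:/bricks/ssd1 server2:/bricks/ssd1
# gluster volume tier coldvol status                (shows promotion/demotion activity)
# gluster volume tier coldvol detach start          (begin draining the hot tier when no longer needed)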

DATA ACCESSIBILITY

MULTI-PROTOCOL ACCESS
Primarily accessed as scale-out file storage (FUSE, NFS, SMB), with optional API and object access (Swift, S3). A Gluster volume can span hundreds of server nodes, on premise or in the cloud.

GlusterFS NATIVE CLIENT
Based on the FUSE kernel module, which allows the file system to operate entirely in userspace.
The mount can be specified against any GlusterFS server: the native client fetches the volfile from the mount server, then communicates directly with all other nodes to access data.
- Load is inherently balanced across distributed volumes
- Recommended for high concurrency and high write performance
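
A minimal sketch of mounting a volume with the native client; the hostnames and volume name are hypothetical, and backup-volfile-servers only affects which server can supply the volfile at mount time:
# mount -t glusterfs -o backup-volfile-servers=server2:server3 \
    server1:/distrepvol /mnt/gluster
Equivalent /etc/fstab entry:
server1:/distrepvol  /mnt/gluster  glusterfs  defaults,_netdev,backup-volfile-servers=server2:server3  0 0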

GlusterFS NATIVE CLIENT: Data Flow
The application server running the GlusterFS client calculates the hash value itself and talks directly to the appropriate data bricks (glusterfsd brick server processes, arranged here as mirrored pairs).

NFS: Accessibility from UNIX and Linux systems
Standard NFS clients connect to the NFS-Ganesha process on a storage node.
A GlusterFS volume can be mounted from any storage node.
NFS-Ganesha includes a network lock manager to synchronize locks.
Load balancing is managed externally; the standard automounter is supported.
Supported features: ACLs, NFSv4, Kerberos authentication.
Offers better performance when reading many small files from a single client.
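
A minimal sketch of exporting and mounting a volume over NFS-Ganesha, assuming the Ganesha HA cluster has already been configured per the RHGS guide; the hostnames, virtual IP, and volume name are hypothetical:
# gluster nfs-ganesha enable                        (turn on NFS-Ganesha for the trusted storage pool)
# gluster volume set distrepvol ganesha.enable on   (export this volume through Ganesha)
On the NFS client:
# mount -t nfs -o vers=4 ganesha-vip:/distrepvol /mnt/nfs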

NFS-GANESHA: Scalable and secure NFSv4 client support
Provides client access with simplified failover and failback in the case of a node or network failure.
Introduces ACLs for additional security, Kerberos authentication, and dynamic export management.

SMB/CIFS: Accessibility from Windows systems
Storage nodes use Samba with winbind to connect to Active Directory.
SMB clients can connect to any storage node running Samba; SMB version 3 is supported.
Load balancing is managed externally; CTDB is required for Samba clustering.
Samba uses the RHGS gfapi library to communicate directly with the GlusterFS server process, without going through FUSE.
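
A minimal sketch of SMB access, assuming Samba and CTDB have been set up per the RHGS guide; the share name follows the default gluster-<volname> convention created by the Samba hook scripts, and hostnames and credentials are hypothetical:
# gluster volume set distrepvol user.cifs enable    (allow SMB export of the volume)
From a Linux SMB client:
# mount -t cifs -o username=aduser //server1/gluster-distrepvol /mnt/smb
Windows clients map the same share as \\server1\gluster-distrepvol.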

NFS & CIFS DATA FLOW
Clients talk to the storage node they mounted and are then directed to the data bricks. IP load balancing and high availability are provided externally, for example with CTDB, round-robin DNS, ucarp, haproxy, or hardware load balancers (F5, ACE, etc.).

OBJECT ACCESS of GlusterFS Volumes
Built upon OpenStack's Swift object storage system; S3 access is also available.
GlusterFS volumes serve as the back-end file system for OpenStack Swift, with each volume acting as a Swift account.
Store and retrieve files using the REST interface.
Supports integration with the SWAuth and Keystone authentication services.
Implements objects as files and directories under the container ("Swift/S3 on File").
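
A minimal sketch of object access through the Swift REST interface; the endpoint, port, credentials, token, and names are hypothetical and depend on how the object service and its authentication backend were deployed:
Get a token (tempauth/swauth-style credentials):
# curl -i -H 'X-Storage-User: distrepvol:admin' -H 'X-Storage-Pass: secret' \
    http://server1:8080/auth/v1.0
Create a container and upload a file as an object (volume = Swift account):
# curl -i -X PUT -H 'X-Auth-Token: AUTH_tk...' http://server1:8080/v1/AUTH_distrepvol/docs
# curl -i -X PUT -H 'X-Auth-Token: AUTH_tk...' -T report.pdf \
    http://server1:8080/v1/AUTH_distrepvol/docs/report.pdf
The object then appears on the volume as the file docs/report.pdf.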

OBJECT STORE ARCHITECTURE
(Diagram: end users reach a load balancer in front of the Swift proxy, account, container, and object servers, backed by memcached; the object server stores data through glusterfsd in a Red Hat Storage volume, which acts as the Swift account, across nodes 1..N.)

DEPLOYMENT

DEPLOYMENT IN PUBLIC CLOUDS
Geo-replication runs between a distributed-replicated volume in the primary datacenter and one in the secondary datacenter, each built from replicated bricks on EBS volumes.
Build Gluster volumes across AWS Availability Zones using the Red Hat Gluster Storage Amazon Machine Images (AMIs).
High availability, with multiple EBS devices pooled.
No application rewrites required.
Scale-out performance and capacity as needed.
Azure and GCP are also supported along with AWS.

CHOICE OF DEPLOYMENT: Red Hat Gluster Storage for on-premise
Deploys on Red Hat-supported servers and underlying storage: DAS, JBOD.
Scales out linearly for performance, capacity, and availability under a single global namespace; scales up by adding capacity to each server.
Replicates synchronously and asynchronously.

HOW IS GLUSTER DEPLOYED?
Red Hat Gluster Storage can be deployed on physical servers, virtual machines, containers, or in the cloud.

DATA PROTECTION

AVOIDING SPLIT-BRAIN: Server-Side Quorum
Server-side quorum is based on the liveliness of the glusterd daemon.
Quorum is enforced at the volume level.
Acts as a circuit breaker for network outages, based on a percentage ratio.
Quorum is met as long as the number of active nodes is more than 50% of the total storage nodes (the ratio is configurable).
Quorum enforcement may require an arbitrator node in the trusted storage pool.
# gluster volume set <volname> cluster.server-quorum-type none/server
# gluster volume set all cluster.server-quorum-ratio <percentage%>

SERVER-SIDE QUORUM: Scenarios
In a storage pool with four nodes (A, B, C, and D) in a 2x2 distributed-replicated configuration, A and B are replicated, and C and D are replicated. The quorum ratio is set to the default value of >50%.
Node A dies, and a write destined for the A-B pair arrives: the write happens on B; when A comes back online, self-heal kicks in to fix the discrepancy. No change in this behavior with or without quorum enabled.
Node A dies, and a write destined for the C-D pair arrives: the write happens on C and D. No change in this behavior with or without quorum enabled.
Both A and B die, and a write destined for the A-B pair arrives: with quorum enabled, the quorum ratio is not met and all the bricks on A, B, C, and D go down. With quorum disabled, the write fails, but the bricks on C and D continue to be alive.
Both A and B die, and a write destined for the C-D pair arrives: with quorum enabled, the quorum ratio is not met and all the bricks on A, B, C, and D go down. With quorum disabled, the write to C and D succeeds.

CLIENT-SIDE QUORUM: Avoiding Split-Brain
cluster.quorum-type = none (cluster.quorum-count not applicable): quorum is not in effect.
cluster.quorum-type = auto (cluster.quorum-count not applicable): allow writes to a file only if more than 50% of the total number of bricks are up. Exception: for replica count = 2, the first brick in the pair must be online to allow writes.
cluster.quorum-type = fixed (cluster.quorum-count = 1 through replica-count): the minimum number of bricks that must be active in a replica set to allow writes.
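
A minimal sketch of applying these settings to a replicated volume; the volume name is hypothetical:
# gluster volume set distrepvol cluster.quorum-type auto
or, to require a fixed number of live bricks per replica set:
# gluster volume set distrepvol cluster.quorum-type fixed
# gluster volume set distrepvol cluster.quorum-count 2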

GEO-REPLICATION: Multi-site content distribution
Asynchronous replication across LAN, WAN, or the Internet.
Master-slave model; cascading is possible.
Continuous and incremental.
Multiple configurations: one-to-one, one-to-many, cascading.

GEO-REPLICATION: Features
Performance: parallel transfers, efficient source scanning, pipelined and batched transfers, file type/layout agnostic.
Checkpoints, failover, and failback.

GEO-REPLICATION VS. REPLICATED VOLUMES
Geo-replication: mirrors data across geographically distributed trusted storage pools; provides backups of data for disaster recovery; asynchronous replication that checks for changes in files and syncs them on detecting differences; potential data loss window of minutes to hours.
Replicated volumes: mirror data across bricks within one trusted storage pool; provide high availability; synchronous replication in which each and every file operation is applied to all the bricks; no potential for data loss.
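
A minimal sketch of setting up a geo-replication session; the master volume, slave host, and slave volume names are hypothetical, and the slave volume must already exist on the remote pool:
# gluster system:: execute gsec_create                              (create the pem keys used by push-pem)
# gluster volume geo-replication mastervol slavehost::slavevol create push-pem
# gluster volume geo-replication mastervol slavehost::slavevol start
# gluster volume geo-replication mastervol slavehost::slavevol status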

GLUSTER VOLUME SNAPSHOTS
Point-in-time state of the storage system/data.
Volume level: ability to create, list, restore, and delete.
LVM2 based; operates only on thin-provisioned volumes.
Produces a crash-consistent image.
Supports a maximum of 256 snapshots per volume.
A snapshot can be taken on one volume at a time.
Snapshot names need to be cluster-wide unique.
Managed via CLI; user-serviceable snapshots.

RED HAT GLUSTER STORAGE
(Diagram: before the snapshot, the current file system holds data A B C D; after the snapshot, the snapshot preserves A B C D alongside the current file system; after modifications, the current file system reflects the deleted, modified, and new data (B, D+, E1, E2) while the snapshot still holds the original A B C D.)

USING SNAPSHOTS
# gluster snapshot create <snap-name> <volname> [description <description>] [force]
# gluster snapshot list [volname]
# gluster snapshot restore <snapname>
# gluster snapshot delete <snapname>
# mount -t glusterfs <hostname>:/snaps/<snapname>/<volname> <mountdir>

BIT ROT DETECTION: Detecting silent data corruption
Red Hat Gluster Storage 3.1 provides a mechanism to scan data periodically and detect bit rot.
Checksums are computed when files are accessed and compared against previously stored values.
If they do not match, an error is logged for the storage admin.
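
A minimal sketch of enabling bit-rot detection on a volume; the volume name is hypothetical:
# gluster volume bitrot distrepvol enable
# gluster volume bitrot distrepvol scrub-frequency weekly    (how often the scrubber re-checks stored checksums)
# gluster volume bitrot distrepvol scrub status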

MANAGEMENT AND MONITORING (Erasure Coding, Tiering, Bit-rot Detection and NFS-Ganesha)

DISK UTILIZATION AND CAPACITY MANAGEMENT: Quota
Controls disk utilization at both the directory and volume level.
Two levels of quota limits: soft and hard.
Warning messages are issued on reaching the soft quota limit; write failures with an EDQUOT message occur after the hard limit is reached.
Hard and soft quota timeouts.
The default soft limit is an attribute of the volume, expressed as a percentage.
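
A minimal sketch of the quota workflow described above; the volume name, directory, and limits are hypothetical:
# gluster volume quota homeshare enable
# gluster volume quota homeshare limit-usage /projects 100GB    (hard limit on a directory)
# gluster volume quota homeshare default-soft-limit 80%
# gluster volume quota homeshare list                           (shows usage against the configured limits)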

JOBS IN THE QUOTA SYSTEM: Accounting
The marker translator is loaded on each brick of the volume.
Accounting happens in the background; updates are propagated upwards to the root of the volume.

JOBS IN THE QUOTA SYSTEM: Enforcement
The enforcer updates its view of a directory's disk usage whenever a file operation occurs.
The enforcer uses quotad to get the aggregated disk usage of a directory.

JOBS IN THE QUOTA SYSTEM: Aggregator (quotad)
quotad is a daemon that serves the volume-wide disk usage of a directory.
quotad is present on all nodes in the cluster, one per node, and manages all the volumes on which quota is enabled.

MONITORING STORAGE USING NAGIOS
Based on the Nagios open IT infrastructure monitoring framework.
Monitors logical entities: cluster, volume, brick, node.
Monitors physical entities: CPU, disk, network.
Alerting via SNMP when critical components fail.
Reporting: historical record of outages, events, notifications.
Interface via the Nagios web console and/or dashboard view.

MONITORING SCENARIOS
Scenario 1: the user has no existing monitoring infrastructure in place or does not use Nagios.
Scenario 2: the user already has a Nagios infrastructure in place and uses the plugins only.
Scenario 3: usage in conjunction with the Red Hat Gluster Storage console.
SNMP traps are supported in all scenarios.

FUNCTIONAL ARCHITECTURE of Red Hat Gluster Storage monitoring
(Diagram: storage administrators and third-party tools send queries and commands as HTTP requests to the Red Hat Gluster Storage console REST API, which shows system trends; the Nagios server for RHGS runs active checks against the storage nodes, returns results, and forwards SNMP traps to third-party tools.)

NAGIOS DEPLOYED ON A RED HAT GLUSTER NODE
(Diagram: Nagios runs on one node of the Red Hat Gluster Storage pool, monitors the other nodes, and sends SNMP traps to a third-party system.)

SIMPLIFIED AND UNIFIED STORAGE MANAGEMENT
Single view for converged storage and compute.
Storage operations: intuitive user interface, volume management, on-premise and public cloud.
Virtualization and storage: shared management with Red Hat Enterprise Virtualization Manager.
Provisioning: installation and configuration, update management, lifecycle management, familiar Red Hat Enterprise Linux tools.

SECURITY: Network encryption at rest and in transit
Supports network encryption using TLS/SSL for authentication and authorization, in place of the home-grown authentication framework used for normal connections.
Supports encryption in transit and transparent encryption at rest.
Two types of encryption:
I/O encryption: encryption of the I/O connections between the Red Hat Gluster Storage clients and servers.
Management encryption: encryption of the management (glusterd) connections within a trusted storage pool.
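
A minimal sketch of turning on both types of encryption, assuming a key, certificate, and CA bundle have already been placed at the standard paths (/etc/ssl/glusterfs.key, /etc/ssl/glusterfs.pem, /etc/ssl/glusterfs.ca) on every server and client; the volume and client names are hypothetical:
Management encryption (on every node in the pool, then restart glusterd):
# touch /var/lib/glusterd/secure-access
I/O encryption for one volume:
# gluster volume set distrepvol client.ssl on
# gluster volume set distrepvol server.ssl on
# gluster volume set distrepvol auth.ssl-allow 'client1.example.com,client2.example.com'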

THANK YOU