LizardFS White Paper, Version 3.10
Table of Contents

- About LizardFS
- Architecture
- Use Cases of LizardFS
- Scalability
- Hardware recommendation
- Features
  - Snapshots
  - QoS
  - Data replication
    - Replication
    - Standard replica
    - XOR replica
    - Erasure Coding
  - Georeplication
  - Metadata replication
  - LTO library support
  - Quotas
  - POSIX compliance
  - Automated Trash bin
  - Native Windows client
  - Active Monitoring
  - Portability
- Use cases
- Contact
About LizardFS

LizardFS Software Defined Storage is a distributed, scalable, fault-tolerant and highly available file system. It allows users to combine disk space located on several servers into a single name space, which is visible on Unix-like and Windows systems in the same way as other file systems.

LizardFS keeps files secure by maintaining multiple replicas of all data spread over the available servers. It can also be used to build space-efficient storage, because it is designed to run on commodity hardware.

Disk and server failures are handled transparently, without any downtime or loss of data. If storage requirements grow, it is straightforward to scale an existing LizardFS installation by simply adding new servers, at any time and without any downtime. The system automatically moves data to the newly added servers, as it continuously balances disk usage across all connected nodes. Removing a server is just as easy as adding one.

LizardFS offers unique features such as support for many data centers and media types, fast snapshots, a transparent trash bin, QoS mechanisms, quotas, and a set of monitoring tools. Its key qualities are compatibility, reliability, scalability, enterprise-class features, constant improvements, and the ability to run on commodity hardware.

We have set ourselves a clear goal: to deliver the best Software Defined Storage solution. However, we know that even when we reach our current objective, we will not stop innovating.
Architecture

LizardFS keeps metadata (e.g. file names, modification timestamps, directory trees) and data separately. Metadata is kept on metadata servers, while data is kept on chunkservers. A typical installation consists of:

- At least two metadata servers, which work in master-slave mode for failure recovery. Their role is to manage the whole installation, so the active metadata server is often called the master server. The role of the other metadata servers is to stay in sync with the active master, so they are often called shadow master servers. Any shadow master server is ready to take over the role of the master server at any time. A suggested configuration for a metadata server is a machine with a fast CPU, at least 32 GB of RAM, and at least one drive (preferably SSD) to store several GB of metadata.
- A set of chunkservers which store the data. Each file is divided into blocks called chunks (each up to 64 MB) which are stored on the chunkservers. A suggested configuration for a chunkserver is a machine with large disk space available in either a JBOD or RAID configuration. CPU and RAM are less important. You can have as few as 2 chunkservers or as many as hundreds of them.
- Clients which use the data stored on LizardFS. These machines use a LizardFS mount to access files in the installation and process them just as if they were on their local hard drives. Files stored on LizardFS can be seen and accessed by as many clients as needed.

Figure 1: Architecture of LizardFS
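The chunk arithmetic described above can be sketched as follows (the 64 MB chunk size is taken from the text; the function name is ours):

```python
import math

CHUNK_MB = 64  # maximum chunk size used by LizardFS

def chunks_for(file_mb):
    """Number of chunks needed to store a file of the given size in MB."""
    return math.ceil(file_mb / CHUNK_MB)

print(chunks_for(200))  # 4 chunks: three full 64 MB chunks plus one 8 MB tail
```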
Use Cases

Our customers use LizardFS successfully in the following scenarios:

- archives, with LTO library/drive support
- storage for virtual machines (e.g. as an OpenNebula backend)
- storage for media, CCTV footage, etc.
- storage for backups
- network share drives for Windows servers
- DRC (Disaster Recovery Center)
- HPC (High Performance Computing)

Scalability

If storage requirements grow, an existing LizardFS installation can be scaled by adding new chunkservers. Adding a new server is possible without any downtime and can be performed at any time; it is transparent to clients. There are no restrictions on the disk space of newly added chunkservers, e.g. adding a 24 TB chunkserver to an installation consisting of 8 TB chunkservers will work without any problems.

Performance of the system scales linearly with the number of disks, so adding a new chunkserver increases not only the available storage capacity but also the overall performance of the storage. LizardFS automatically rearranges data across all chunkservers, including the newly added one, as it balances disk usage across all connected nodes. Removing servers (e.g. for longer maintenance) is as easy as adding one.

Example

Consider the following installation:

server  total space  used space
1       20 TB        15 TB (75%)
2       16 TB        12 TB (75%)
3       16 TB        12 TB (75%)
4       16 TB        12 TB (75%)
5       8 TB         6 TB (75%)
6       8 TB         6 TB (75%)
TOTAL   84 TB        63 TB (75%)
After adding a new server, the disk usage statistics will look as follows:

server  total space  used space
1       20 TB        15 TB (75%)
2       16 TB        12 TB (75%)
3       16 TB        12 TB (75%)
4       16 TB        12 TB (75%)
5       8 TB         6 TB (75%)
6       8 TB         6 TB (75%)
7       21 TB        0 TB (0%)
TOTAL   105 TB       63 TB (60%)

The data will be rebalanced so that disk usage is almost even, e.g.:

server  total space  used space
1       20 TB        12 TB (60%)
2       16 TB        9.6 TB (60%)
3       16 TB        9.6 TB (60%)
4       16 TB        9.6 TB (60%)
5       8 TB         4.8 TB (60%)
6       8 TB         4.8 TB (60%)
7       21 TB        12.6 TB (60%)
TOTAL   105 TB       63 TB (60%)

Administrators stay in full control of this process, as they can decide how long it should take.
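The rebalancing arithmetic in the tables above can be sketched in a few lines (a simplification: real rebalancing moves whole chunks, so per-server usage is only approximately even):

```python
def rebalance(total_tb, used_tb):
    """Even out disk usage: every server ends at the cluster-wide usage ratio."""
    ratio = sum(used_tb) / sum(total_tb)
    return [round(t * ratio, 1) for t in total_tb]

totals = [20, 16, 16, 16, 8, 8, 21]  # capacities after adding the 21 TB server
used   = [15, 12, 12, 12, 6, 6, 0]   # 63 TB of data, new server still empty
print(rebalance(totals, used))  # [12.0, 9.6, 9.6, 9.6, 4.8, 4.8, 12.6] -- all at 60%
```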
Should you decide to take the 7th server out, you can mark it for removal. LizardFS then automatically copies all the data from the 7th server to the other servers:

server  total space  used space
1       20 TB        15 TB (75%)
2       16 TB        12 TB (75%)
3       16 TB        12 TB (75%)
4       16 TB        12 TB (75%)
5       8 TB         6 TB (75%)
6       8 TB         6 TB (75%)
7       21 TB        12.6 TB (60%)
TOTAL   105 TB       75.6 TB (72%)

Once all of its data has been replicated to the remaining servers, server 7 can be safely disconnected.

Hardware recommendation

LizardFS is fully hardware agnostic: commodity hardware can be used for cost efficiency. The minimum requirement is two dedicated nodes with a number of disks, but to obtain a proper HA installation you should provision at least 3 nodes. This also enables you to use erasure coding.

We recommend that each node has at least two 1 Gbps network interface controllers (NICs). Since most commodity hard disk drives have a throughput of about 100 MB/s, your NICs should be able to handle the traffic generated by the chunkservers on your host.

Master / shadow master:
- CPU: minimum 2 GHz, 64-bit
- RAM: depends on the number of files (e.g. 4 GB is often sufficient for a small installation)
- Disk: 128 GB; an HDD is sufficient, an SSD improves performance

Chunkserver: recommended minimum 2 GB RAM
Metalogger: recommended minimum 2 GB RAM
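The NIC recommendation above can be checked with a quick back-of-envelope calculation (the 100 MB/s per-disk figure is from the text; 125 MB/s per 1 Gbps link is the theoretical maximum, so real numbers will be somewhat lower):

```python
import math

def nics_needed(disks_per_node, disk_mb_s=100, nic_gbps=1):
    """How many NICs a chunkserver needs so the network can keep its disks busy."""
    node_mb_s = disks_per_node * disk_mb_s  # aggregate disk throughput
    nic_mb_s = nic_gbps * 125               # 1 Gbps is ~125 MB/s at best
    return math.ceil(node_mb_s / nic_mb_s)

print(nics_needed(2))   # 2 NICs for a small 2-disk node
print(nics_needed(12))  # 10 x 1 Gbps -- a 12-disk node really wants 10 GbE
```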
Features

It is the variety of practical and stable features offered by LizardFS that makes the system a mature enterprise solution. You can use LizardFS for Hierarchical Storage Management (HSM), in Disaster Recovery Centers with asynchronous replication, to reduce the disk space required for replication, to effectively manage storage pools (QoS, quotas) and much more.

LizardFS is extremely flexible. If you have a use case that requires additional functionality, it can be developed to cater to your particular needs.

Snapshots

Copying large files and directories (e.g. virtual machines) can be done extremely efficiently by using the snapshot feature. When creating a snapshot, only the metadata of the target file is copied, which speeds up the operation. Chunks of the original and the duplicated file are shared until one of them is modified.

Example

Consider a directory /mnt/lizardfs/huge_vm which contains several large files totalling 512 GB. Making a backup copy using the `cp -r huge_vm huge_vm_backup` command would take a long time and consume an additional 512 GB of disk space.

We recommend instead creating a copy with the `mfsmakesnapshot huge_vm huge_vm_backup` command. In contrast, this command executes almost instantaneously and consumes a minimal amount of disk space.

Now consider modifying one of the files in the /mnt/lizardfs/huge_vm directory in a way that affects only 2 of its chunks. With `mfsmakesnapshot`, only the modified chunks are copied, an operation that uses only 128 MB, which is negligible compared to the 512 GB consumed by the `cp -r` alternative.
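The space accounting from the snapshot example can be sketched numerically (the chunk size is from the text; the function names are ours):

```python
CHUNK_MB = 64

def full_copy_mb(file_gb):
    """Space consumed by `cp -r`: a complete second copy of the data."""
    return file_gb * 1024

def snapshot_cost_mb(modified_chunks):
    """Space consumed after a snapshot: only chunks modified later get duplicated."""
    return modified_chunks * CHUNK_MB

print(full_copy_mb(512))    # 524288 MB -- the full 512 GB again
print(snapshot_cost_mb(2))  # 128 MB -- only the two diverged chunks
```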
QoS

LizardFS offers mechanisms that allow administrators to set read/write bandwidth limits for all the traffic generated by a given mount point, as well as for a specific group of processes spread over multiple client machines and mount points. This makes LizardFS suitable for critical applications where proper quality-of-service guarantees are required.

Example

Consider an installation with 300 hard drives, provisioned according to our recommendations. One can expect it to serve about 20 GB/s to all clients. Imagine a production system consisting of many virtual machines using LizardFS as their storage. If a group of administrators has access to the host machines and uses them for maintenance work, bandwidth limits can ensure the production system runs smoothly. After configuring the host machines to place all ssh processes (and all processes spawned by them) in a cgroup named, for example, "interactive", and defining a bandwidth limit of 100 MB/s for the "interactive" group, administrators will not be able to accidentally disrupt the production system. All processes started manually from an ssh session will be limited to 100 MB/s in total, even if they are spread over tens of machines.

In the same installation it is also possible to add a bandwidth limit for a set of virtual machines used by a subsystem (or a customer) with limited performance requirements, to make sure that these machines do not disrupt the operation of other virtual machines. After setting the bandwidth limit to 1 GB/s, administrators can be sure that all the traffic generated by these machines (both reads and writes) will total 1 GB/s or less.

Data replication

Files stored in LizardFS are divided into blocks called chunks, each up to 64 MB in size. Each chunk is kept on chunkservers, and administrators can choose how many copies of each file are maintained.
For example, when keeping 3 copies (configuration goal=3), all of the data will survive the failure of any two disks or chunkservers, because LizardFS never keeps 2 copies of the same chunk on the same node. Security levels (the number of copies) can be set independently for each file and directory tree. Whenever a chunk is lost, e.g. due to a disk failure, the system replicates it using the remaining copies of the chunk to maintain the requested number of copies.

Data is spread evenly across all the nodes. There are no dedicated mirror servers, which means that with goal=3 each chunk has its three copies on three randomly chosen servers. This design makes failure recovery very fast: if one disk is lost, backup copies of its data are evenly spread over all other disks, so all disks in the installation can take part in the process of replication. Consequently, the speed of recovering lost data increases with the number of disks in the system.
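The claim that recovery speed grows with the number of disks can be illustrated with a rough estimate (this assumes re-replication parallelizes perfectly across disks at 100 MB/s each, which is optimistic; the function is our sketch, not LizardFS code):

```python
def recovery_hours(lost_tb, disks, disk_mb_s=100):
    """Optimistic time to re-replicate a failed disk's data across the cluster."""
    aggregate_mb_s = disks * disk_mb_s
    return lost_tb * 1_000_000 / aggregate_mb_s / 3600

print(round(recovery_hours(4, 10), 2))   # 1.11 h on a small 10-disk cluster
print(round(recovery_hours(4, 300), 2))  # 0.04 h when 300 disks share the work
```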
Replication mechanism

LizardFS implements two basic replication mechanisms: standard copies and XOR copies.

Standard replica

This kind of replication sets the number of copies every chunk should have. Note that it is not possible to store 2 copies of the same chunk on 1 chunkserver, so standard goals must be consistent with the number of chunkservers and their labels.

Example

Standard goal 3 indicates that each chunk should be placed on 3 different chunkservers. The specific location of the chunks is determined by the master server, which balances space usage evenly.

XOR replica

This kind of replication divides chunks into smaller parts according to the XOR level parameter. A file with XOR level N is kept as N+1 parts per chunk (N for data, 1 for parity). As a result, the disk space required to store the file is (N+1)/N of its size, while data integrity is retained even if one of the chunkservers fails. The XOR level can range from 2 to 10.
Example

Goal xor3 means that each chunk of a file will be divided into 4 parts and distributed among the chunkservers. If any single chunkserver fails, it is still possible to reconstruct the file from the remaining 3 XOR parts.

EC replica - Erasure Coding replica (since version )

For each file using an ec(K+M) goal, the system splits the file into K data parts and generates M parity parts; any K of the parts are sufficient to reproduce the file content. If labels are specified, the parts are kept on matching chunkservers; otherwise, default wildcard labels are used. This kind of goal allows any M of the K+M parts to be lost while the file remains accessible. An erasure-code goal occupies M/K additional space, e.g. goal 5+1 requires 20% additional space, while 3+2 requires 66.6% additional space. Note that the minimal K value is 2 and the minimal M value is 1. Erasure-code goals with M equal to 1 are equivalent to XOR replication of level K.

Example: EC(3,2) splits each chunk into 3 data parts and 2 parity parts, spread over 5 chunkservers.
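The overhead figures quoted above follow directly from the ec(K+M) definition; a quick check (the function name is ours):

```python
def ec_overhead_pct(k, m):
    """Extra raw space an ec(K+M) goal needs beyond the logical file size."""
    return round(100 * m / k, 1)

print(ec_overhead_pct(5, 1))  # 20.0 -- as stated for goal 5+1
print(ec_overhead_pct(3, 2))  # 66.7 -- as stated for goal 3+2
print(ec_overhead_pct(3, 1))  # 33.3 -- equivalent to xor3, which stores 4/3 of the data
```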
Georeplication (a.k.a. custom goals)

With georeplication you can decide where chunks are stored. The topology feature allows suggesting which copy should be read by a client when more than one copy is available. The custom goals feature makes LizardFS more useful when working with many locations and media types (e.g., SSD and HDD drives). Administrators use this feature by labelling chunkservers and deciding how the system spreads chunks across the available servers. This enables LizardFS to withstand the failure of a whole data center (Disaster Recovery), as shown in the example below.

Example

When LizardFS is deployed across two data centers, e.g. one located in London and one in Paris, it is possible to assign the label london to each server in the London location and paris to each server in the Paris location. Additionally, one can create a custom goal named 2_locations, defined as 2_locations: paris london. Each file created in a directory with the 2_locations goal will have its chunks stored in two copies, one on a london-labelled server and one on a paris-labelled server. This ensures that even when an entire location is unavailable, the files at the other location all remain available for reading and writing.

One can use the topology mechanism to make mounts prefer reading from a nearby copy. Custom goals (as described in this example) can be used rather than simple goals, which specify only a number of copies (e.g., goal=3). It is possible to set and change the goals at any time for any file or directory tree, and the system will move data accordingly when required.

There are more possibilities, e.g.:

- Define a goal save_2_locations: paris london _, interpreted as: 3 copies, at least one in London and one in Paris. Now we can lose any two servers or a whole data center without any downtime. The system places the third copy so as to balance disk usage across all servers, as defined by the administrator.
- Define a goal fast: ssd ssd. If some servers are equipped only with SSD drives and others with HDD drives (and are properly labelled), this ensures that files with this goal are stored in two copies, only on chunkservers labelled ssd, which in most cases will be much faster than the non-SSD ones.
- If there is only one location with a configuration as in the previous example (SSD and HDD servers), one can use a custom goal such as one_fast_copy: ssd hdd for some files to force access from the specified medium.
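The custom goals from this section would typically be declared in the master server's goal configuration file. The sketch below is illustrative only: the file path, the numeric goal ids, and the exact syntax are assumptions that may differ between LizardFS versions.

```
# /etc/mfs/mfsgoals.cfg (illustrative sketch)
# <id> <name> : <label> <label> ...   ("_" is a wildcard label)
11 2_locations      : paris london
12 save_2_locations : paris london _
13 fast             : ssd ssd
14 one_fast_copy    : ssd hdd
```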
Metadata replication

Metadata in LizardFS consists of:

- names of files and directories
- POSIX attributes of files and directories (e.g. owner, owning group, access mask, last modification and access timestamps, size of files)
- extended attributes
- access control lists
- information about each file's goal and trash time
- information about which chunks form each file

Metadata is stored on metadata servers. At any time, one of the metadata servers manages the whole installation and is called the master server. The other metadata servers remain in sync with it and are called shadow master servers. Each metadata server stores all of the metadata in its RAM as well as on a local hard drive. All changes in the metadata made by the master server are instantly applied by all connected shadow master servers and saved to their hard drives. At any time it is possible to add a new shadow master server or to remove an existing one; these operations are completely transparent to the clients.

Shadow master servers provide LizardFS with high availability. If at least one shadow master server is running and the active master server is lost, one of the shadow master servers takes over. All other shadow master servers, chunkservers, clients and metaloggers will continue to work with the new master server. This process can also be triggered manually when maintenance work on the master server is needed.

Moreover, you can increase the security of your metadata by adding metaloggers. Each metalogger is a lightweight daemon (with low RAM/CPU requirements) which connects to the active metadata server, continuously receives the same information that all shadow masters receive, and continuously saves it to its local hard drive. In this way an additional copy of the metadata is created, and if all the metadata servers are lost, the whole system can easily be recovered from such a cold copy. As metaloggers use almost no CPU or RAM, the cost of adding many of them is very low.
In this way one can have any number of independent and constantly updated copies of all the metadata.

Example

For a file system with 50 million files summing up to 1 PB of data, the metadata will occupy about 13 GB of hard drive space. A single metalogger which keeps a couple of copies of the metadata (the most recent version as well as a couple of older backups) will require less than 100 GB of hard drive space.
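The example figures imply roughly 260 bytes of metadata per file. A quick check under that assumption (the per-file constant is derived from the text's own numbers, not an official figure):

```python
def metadata_gb(n_files, bytes_per_file=260):
    """Approximate on-disk metadata size; ~260 B/file matches the example above."""
    return n_files * bytes_per_file / 1e9

print(metadata_gb(50_000_000))  # 13.0 GB for 50 million files
```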
After setting up metaloggers on 10 machines which also work as chunkservers, there will be 12 constantly updated copies of the system's metadata (including one copy kept on the active master server and one on a shadow master server), making it virtually indestructible.

LTO library support

LizardFS offers native support for LTO tape libraries. Archival backups may consume a lot of storage space even though such files are almost never read, which means that this kind of data can be efficiently stored on tape.

LizardFS offers a convenient way of working with backend LTO storage. Files can be given a backup copy on tape by setting a tape goal. Setting a tape goal on a file makes it read-only for an obvious reason: tape storage does not support random writes. Reading from tape storage is a lengthy process, so data stored there should be read very rarely. If a regular copy of a file is still available, it will be used for reading. If a file exists only on tape, it has to be restored to LizardFS first. To achieve that, use the lizardfs-restore-tape-copy utility: `lizardfs-restore-tape-copy file_path`. After running this command, all the needed data will be read from tape storage and loaded into the file system, making the file accessible to clients.

Quotas

LizardFS supports the disk quota mechanism known from other POSIX file systems. It offers an option to set soft and hard limits on the number of files and their total size for a specific user or group of users. A user whose hard limit is exceeded cannot write new data to LizardFS. LizardFS also supports per-directory quotas (since version ).
POSIX compliance

LizardFS offers all the features of POSIX file systems, for example:

- a hierarchical structure of files and directories
- file attributes such as uid, gid, access mask, access time, modification time and change time
- support for symlinks, hard links, and other special files (unix sockets, devices)
- access control lists
- extended attributes

The semantics of access rights in LizardFS are precisely the same as in other POSIX file systems (such as ext or xfs). This makes it compatible with all applications that can use a local hard drive as their storage.
Trash

Another feature of LizardFS is a transparent and fully automatic trash bin. After any file is removed, it is moved to the trash bin, which is visible only to the administrator. Any file in the trash bin can be restored or deleted permanently. Thus, data stored on LizardFS is more secure than data stored on a hardware RAID.

The trash is cleaned up automatically: each file is removed permanently from the trash by the system after some period of time (the trash time). The trash time can be adjusted at any time for any file or directory tree.

Example

If the trash time for the whole installation is set to two hours, every file remains available for two hours after deletion. This should be sufficient to recover from accidents caused by users. Additionally, the trash time can be adjusted for any file or directory tree, e.g.:

- set to 0 for a directory with temporary files, so they are removed immediately and free up space
- set to 0 for specific files before removing them, when we need to free space and know that the files are no longer needed
- set to 1 month for very important data

Native Windows client

LizardFS offers a native Windows client which allows users to build a software solution free from single points of failure, such as the SMB servers that are typically used to export Unix file systems to Windows applications.

The LizardFS Windows client can be installed on both workstations and servers. It provides access to files stored on LizardFS via a virtual drive, for example L:, which makes it compatible with almost all existing Windows applications.
Monitoring

LizardFS offers two monitoring interfaces. First, there is a command-line tool useful with systems like Nagios, Zabbix or Icinga, which are typically used for proactive monitoring. It offers a comprehensive set of information about connected servers and disks, client machines, storage space, errors, chunk copies lost due to disk failures, and much more.

In addition, a graphical web-based monitoring interface is available for administrators, which allows them to track almost all aspects of the system.
Portability

LizardFS packages are available for most operating systems, including:

- Ubuntu LTS
- Debian
- Red Hat Enterprise Linux
- CentOS

For these systems, installation is performed by the package manager which is part of the Linux distribution. The software is highly portable, and a tarball can be used to build LizardFS for almost any operating system from the Unix family. Additionally, a client driver is available for Windows machines and can be installed using an installation wizard.

Examples of configuration

Example 1

- 2 metadata servers, each equipped with 64 GB of RAM and RAID-1 built on top of two 128 GB SSD drives
- 20 chunkservers, each equipped with a RAID-6 controller with 8 SAS drives, 3 TB each
- 10 Gbit network
- replication level 2 (goal=2)
- a metalogger running on 3 out of the 20 chunkservers

Such a configuration offers the possibility to store 180 TB of data with goal=2. It offers high performance as well as high durability.

No downtime in case of:
- losing any 5 disks
- losing any single server (including the master server)

No loss of data in case of:
- losing any 5 disks
- losing any single chunkserver
- losing both metadata servers
Example 2

- 2 metadata servers, each equipped with 64 GB of RAM and RAID-1 storage built on top of two 128 GB SSD drives
- 30 chunkservers, each equipped with 12 hard drives (in a JBOD configuration), 3 TB each
- 1 Gbit network
- replication level 3 (goal=3)
- a metalogger running on 3 out of the 30 chunkservers

This configuration offers the possibility to store 360 TB of data with goal=3.

No downtime in case of:
- failure of any 2 chunkservers or any 2 disks
- failure of any single metadata server

No loss of data in case of:
- failure of any 2 chunkservers or any 2 disks
- failure of both metadata servers

Example 3

- 2 data centers, primary and secondary
- 3 metadata servers: two in the primary data center, one in the backup data center, each equipped with 64 GB of RAM and RAID-1 storage built on top of two 128 GB SSD drives
- 15 chunkservers in each of the 2 data centers, each equipped with a RAID-6 controller with 16 SAS drives, 3 TB each
- a 10 Gbit link between the data centers
- a replication level which enforces one copy to be kept in each data center
- 2 metaloggers (on selected chunkserver machines) in each data center

This configuration offers the possibility to store 630 TB of data.

No downtime and no loss of data in case of:
- failure of any single server (including the master server)
- failure of any 5 disks
- failure of the connection between the data centers
- failure of all servers in the secondary data center (e.g. power failure, fire or flooding)
- failure of all chunkservers in the primary data center

No data will be lost in case of a failure of all servers in the primary location, but the master server would have to be started manually (the lack of quorum in the secondary data center prevents this from happening automatically), so some downtime may occur.
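The capacity figures in the three examples can be reproduced with one formula (RAID-6 reserves two drives per group for parity; the function name and parameters are ours):

```python
def usable_tb(chunkservers, drives_each, drive_tb, goal, raid6=False):
    """Usable capacity: raw space minus RAID-6 parity, divided by the goal."""
    data_drives = drives_each - 2 if raid6 else drives_each
    return chunkservers * data_drives * drive_tb / goal

print(usable_tb(20, 8, 3, goal=2, raid6=True))   # 180.0 TB (Example 1)
print(usable_tb(30, 12, 3, goal=3))              # 360.0 TB (Example 2)
print(usable_tb(30, 16, 3, goal=2, raid6=True))  # 630.0 TB (Example 3, one copy per DC)
```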
Contact

LizardFS contact:
- Poland: tel.
- USA: tel. +1 (646)
- INFO@LIZARDFS.COM

Company contact:
Skytechnology sp. z o.o.
ul. Głogowa, Warsaw, Poland
NIP:  KRS:
info@skytechnology.pl
Reference Architecture DATA PROTECTION IN A ROBO ENVIRONMENT EMC VNX Series EMC VNXe Series EMC Solutions Group April 2012 Copyright 2012 EMC Corporation. All Rights Reserved. EMC believes the information
More informationFibre Channel Target in Open-E JovianDSS
A DCS A 2018 WWARDS INNER DATA CE NTR PRODUCT E ICT ST ORAGE OF THE YEAR Target in Open-E () uses a -speed network technology for data storage that has become a common connection type for SAN enterprise
More informationThe Fastest And Most Efficient Block Storage Software (SDS)
The Fastest And Most Efficient Block Storage Software (SDS) StorPool: Product Summary 1. Advanced Block-level Software Defined Storage, SDS (SDS 2.0) Fully distributed, scale-out, online changes of everything,
More informationZadara Enterprise Storage in
Zadara Enterprise Storage in Google Cloud Platform (GCP) Deployment Guide March 2017 Revision A 2011 2017 ZADARA Storage, Inc. All rights reserved. Zadara Storage / GCP - Deployment Guide Page 1 Contents
More informationGFS: The Google File System
GFS: The Google File System Brad Karp UCL Computer Science CS GZ03 / M030 24 th October 2014 Motivating Application: Google Crawl the whole web Store it all on one big disk Process users searches on one
More informationNext Generation Storage for The Software-Defned World
` Next Generation Storage for The Software-Defned World John Hofer Solution Architect Red Hat, Inc. BUSINESS PAINS DEMAND NEW MODELS CLOUD ARCHITECTURES PROPRIETARY/TRADITIONAL ARCHITECTURES High up-front
More informationVSTOR Vault Mass Storage at its Best Reliable Mass Storage Solutions Easy to Use, Modular, Scalable, and Affordable
VSTOR Vault Mass Storage at its Best Reliable Mass Storage Solutions Easy to Use, Modular, Scalable, and Affordable MASS STORAGE AT ITS BEST The ARTEC VSTOR Vault VS Mass Storage Product Family ARTEC s
More informationThe Google File System
October 13, 2010 Based on: S. Ghemawat, H. Gobioff, and S.-T. Leung: The Google file system, in Proceedings ACM SOSP 2003, Lake George, NY, USA, October 2003. 1 Assumptions Interface Architecture Single
More informationExam : S Title : Snia Storage Network Management/Administration. Version : Demo
Exam : S10-200 Title : Snia Storage Network Management/Administration Version : Demo 1. A SAN architect is asked to implement an infrastructure for a production and a test environment using Fibre Channel
More informationSvSAN Data Sheet - StorMagic
SvSAN Data Sheet - StorMagic A Virtual SAN for distributed multi-site environments StorMagic SvSAN is a software storage solution that enables enterprises to eliminate downtime of business critical applications
More informationThe Google File System (GFS)
1 The Google File System (GFS) CS60002: Distributed Systems Antonio Bruto da Costa Ph.D. Student, Formal Methods Lab, Dept. of Computer Sc. & Engg., Indian Institute of Technology Kharagpur 2 Design constraints
More informationBreaking Barriers Exploding with Possibilities
Breaking Barriers Exploding with Possibilities QTS 4.2 for Business Your Challenges, Our Solutions Remote Q center to monitor all NAS Local File Station Remote Connection Volume/LUN Snapshot Backup Versioning,
More informationHP Supporting the HP ProLiant Storage Server Product Family.
HP HP0-698 Supporting the HP ProLiant Storage Server Product Family https://killexams.com/pass4sure/exam-detail/hp0-698 QUESTION: 1 What does Volume Shadow Copy provide?. A. backup to disks B. LUN duplication
More informationIsilon Scale Out NAS. Morten Petersen, Senior Systems Engineer, Isilon Division
Isilon Scale Out NAS Morten Petersen, Senior Systems Engineer, Isilon Division 1 Agenda Architecture Overview Next Generation Hardware Performance Caching Performance SMB 3 - MultiChannel 2 OneFS Architecture
More informationTECHNICAL OVERVIEW OF NEW AND IMPROVED FEATURES OF EMC ISILON ONEFS 7.1.1
TECHNICAL OVERVIEW OF NEW AND IMPROVED FEATURES OF EMC ISILON ONEFS 7.1.1 ABSTRACT This introductory white paper provides a technical overview of the new and improved enterprise grade features introduced
More informationRAID SEMINAR REPORT /09/2004 Asha.P.M NO: 612 S7 ECE
RAID SEMINAR REPORT 2004 Submitted on: Submitted by: 24/09/2004 Asha.P.M NO: 612 S7 ECE CONTENTS 1. Introduction 1 2. The array and RAID controller concept 2 2.1. Mirroring 3 2.2. Parity 5 2.3. Error correcting
More informationStorage Optimization with Oracle Database 11g
Storage Optimization with Oracle Database 11g Terabytes of Data Reduce Storage Costs by Factor of 10x Data Growth Continues to Outpace Budget Growth Rate of Database Growth 1000 800 600 400 200 1998 2000
More informationFeedback on BeeGFS. A Parallel File System for High Performance Computing
Feedback on BeeGFS A Parallel File System for High Performance Computing Philippe Dos Santos et Georges Raseev FR 2764 Fédération de Recherche LUmière MATière December 13 2016 LOGO CNRS LOGO IO December
More informationEMC Celerra CNS with CLARiiON Storage
DATA SHEET EMC Celerra CNS with CLARiiON Storage Reach new heights of availability and scalability with EMC Celerra Clustered Network Server (CNS) and CLARiiON storage Consolidating and sharing information
More informationSurveillance Dell EMC Storage with IndigoVision Control Center
Surveillance Dell EMC Storage with IndigoVision Control Center Sizing Guide H14832 REV 1.1 Copyright 2016-2017 Dell Inc. or its subsidiaries. All rights reserved. Published May 2016 Dell believes the information
More informationCHAPTER 11: IMPLEMENTING FILE SYSTEMS (COMPACT) By I-Chen Lin Textbook: Operating System Concepts 9th Ed.
CHAPTER 11: IMPLEMENTING FILE SYSTEMS (COMPACT) By I-Chen Lin Textbook: Operating System Concepts 9th Ed. File-System Structure File structure Logical storage unit Collection of related information File
More informationNutanix Tech Note. Virtualizing Microsoft Applications on Web-Scale Infrastructure
Nutanix Tech Note Virtualizing Microsoft Applications on Web-Scale Infrastructure The increase in virtualization of critical applications has brought significant attention to compute and storage infrastructure.
More informationParagon Protect & Restore
Paragon Protect & Restore ver. 3 Centralized Backup and Disaster Recovery for virtual and physical environments Tight Integration with hypervisors for agentless backups, VM replication and seamless restores
More informationVeritas NetBackup on Cisco UCS S3260 Storage Server
Veritas NetBackup on Cisco UCS S3260 Storage Server This document provides an introduction to the process for deploying the Veritas NetBackup master server and media server on the Cisco UCS S3260 Storage
More informationNPTEL Course Jan K. Gopinath Indian Institute of Science
Storage Systems NPTEL Course Jan 2012 (Lecture 39) K. Gopinath Indian Institute of Science Google File System Non-Posix scalable distr file system for large distr dataintensive applications performance,
More informationWhite Paper. A System for Archiving, Recovery, and Storage Optimization. Mimosa NearPoint for Microsoft
White Paper Mimosa Systems, Inc. November 2007 A System for Email Archiving, Recovery, and Storage Optimization Mimosa NearPoint for Microsoft Exchange Server and EqualLogic PS Series Storage Arrays CONTENTS
More informationIBM Storage Software Strategy
IBM Storage Software Strategy Hakan Turgut 1 Just how fast is the data growing? 128 GB/ person The World s total data per person No Problem I I think I can do this We have a problem 24 GB/ person 0.8 GB/
More informationUsing Cloud Services behind SGI DMF
Using Cloud Services behind SGI DMF Greg Banks Principal Engineer, Storage SW 2013 SGI Overview Cloud Storage SGI Objectstore Design Features & Non-Features Future Directions Cloud Storage
More informationPhysical Storage Media
Physical Storage Media These slides are a modified version of the slides of the book Database System Concepts, 5th Ed., McGraw-Hill, by Silberschatz, Korth and Sudarshan. Original slides are available
More informationGeorgia Institute of Technology ECE6102 4/20/2009 David Colvin, Jimmy Vuong
Georgia Institute of Technology ECE6102 4/20/2009 David Colvin, Jimmy Vuong Relatively recent; still applicable today GFS: Google s storage platform for the generation and processing of data used by services
More informationVendor must indicate at what level its proposed solution will meet the College s requirements as delineated in the referenced sections of the RFP:
Vendor must indicate at what level its proposed solution will the College s requirements as delineated in the referenced sections of the RFP: 2.3 Solution Vision Requirement 2.3 Solution Vision CCAC will
More informationAdvanced Architectures for Oracle Database on Amazon EC2
Advanced Architectures for Oracle Database on Amazon EC2 Abdul Sathar Sait Jinyoung Jung Amazon Web Services November 2014 Last update: April 2016 Contents Abstract 2 Introduction 3 Oracle Database Editions
More informationChapter 11: File System Implementation. Objectives
Chapter 11: File System Implementation Objectives To describe the details of implementing local file systems and directory structures To describe the implementation of remote file systems To discuss block
More informationCold Storage: The Road to Enterprise Ilya Kuznetsov YADRO
Cold Storage: The Road to Enterprise Ilya Kuznetsov YADRO Agenda Technical challenge Custom product Growth of aspirations Enterprise requirements Making an enterprise cold storage product 2 Technical Challenge
More informationDELL POWERVAULT NX3500. A Dell Technical Guide Version 1.0
DELL POWERVAULT NX3500 A Dell Technical Guide Version 1.0 DELL PowerVault NX3500, A Dell Technical Guide THIS TRANSITION GUIDE IS FOR INFORMATIONAL PURPOSES ONLY, AND MAY CONTAIN TYPOGRAPHICAL ERRORS AND
More informationModule 4 STORAGE NETWORK BACKUP & RECOVERY
Module 4 STORAGE NETWORK BACKUP & RECOVERY BC Terminology, BC Planning Lifecycle General Conditions for Backup, Recovery Considerations Network Backup, Services Performance Bottlenecks of Network Backup,
More informationOffloaded Data Transfers (ODX) Virtual Fibre Channel for Hyper-V. Application storage support through SMB 3.0. Storage Spaces
2 ALWAYS ON, ENTERPRISE-CLASS FEATURES ON LESS EXPENSIVE HARDWARE ALWAYS UP SERVICES IMPROVED PERFORMANCE AND MORE CHOICE THROUGH INDUSTRY INNOVATION Storage Spaces Application storage support through
More informationiscsi Technology Brief Storage Area Network using Gbit Ethernet The iscsi Standard
iscsi Technology Brief Storage Area Network using Gbit Ethernet The iscsi Standard On February 11 th 2003, the Internet Engineering Task Force (IETF) ratified the iscsi standard. The IETF was made up of
More informationInstallation and Cluster Deployment Guide for VMware
ONTAP Select 9 Installation and Cluster Deployment Guide for VMware Using ONTAP Select Deploy 2.8 June 2018 215-13347_B0 doccomments@netapp.com Updated for ONTAP Select 9.4 Table of Contents 3 Contents
More informationTake Back Lost Revenue by Activating Virtuozzo Storage Today
Take Back Lost Revenue by Activating Virtuozzo Storage Today JUNE, 2017 2017 Virtuozzo. All rights reserved. 1 Introduction New software-defined storage (SDS) solutions are enabling hosting companies to
More informationXenData Product Brief: SX-550 Series Servers for LTO Archives
XenData Product Brief: SX-550 Series Servers for LTO Archives The SX-550 Series of Archive Servers creates highly scalable LTO Digital Video Archives that are optimized for broadcasters, video production
More informationSPLUNK ENTERPRISE AND ECS TECHNICAL SOLUTION GUIDE
SPLUNK ENTERPRISE AND ECS TECHNICAL SOLUTION GUIDE Splunk Frozen and Archive Buckets on ECS ABSTRACT This technical solution guide describes a solution for archiving Splunk frozen buckets to ECS. It also
More informationHedvig as backup target for Veeam
Hedvig as backup target for Veeam Solution Whitepaper Version 1.0 April 2018 Table of contents Executive overview... 3 Introduction... 3 Solution components... 4 Hedvig... 4 Hedvig Virtual Disk (vdisk)...
More informationHigh Availability and Disaster Recovery Solutions for Perforce
High Availability and Disaster Recovery Solutions for Perforce This paper provides strategies for achieving high Perforce server availability and minimizing data loss in the event of a disaster. Perforce
More informationNetVault Backup Client and Server Sizing Guide 3.0
NetVault Backup Client and Server Sizing Guide 3.0 Recommended hardware and storage configurations for NetVault Backup 12.x September 2018 Page 1 Table of Contents 1. Abstract... 3 2. Introduction... 3
More informationThe SHARED hosting plan is designed to meet the advanced hosting needs of businesses who are not yet ready to move on to a server solution.
SHARED HOSTING @ RS.2000/- PER YEAR ( SSH ACCESS, MODSECURITY FIREWALL, DAILY BACKUPS, MEMCHACACHED, REDIS, VARNISH, NODE.JS, REMOTE MYSQL ACCESS, GEO IP LOCATION TOOL 5GB FREE VPN TRAFFIC,, 24/7/365 SUPPORT
More informationA GPFS Primer October 2005
A Primer October 2005 Overview This paper describes (General Parallel File System) Version 2, Release 3 for AIX 5L and Linux. It provides an overview of key concepts which should be understood by those
More informationFuxiSort. Jiamang Wang, Yongjun Wu, Hua Cai, Zhipeng Tang, Zhiqiang Lv, Bin Lu, Yangyu Tao, Chao Li, Jingren Zhou, Hong Tang Alibaba Group Inc
Fuxi Jiamang Wang, Yongjun Wu, Hua Cai, Zhipeng Tang, Zhiqiang Lv, Bin Lu, Yangyu Tao, Chao Li, Jingren Zhou, Hong Tang Alibaba Group Inc {jiamang.wang, yongjun.wyj, hua.caihua, zhipeng.tzp, zhiqiang.lv,
More informationThe Leading Parallel Cluster File System
The Leading Parallel Cluster File System www.thinkparq.com www.beegfs.io ABOUT BEEGFS What is BeeGFS BeeGFS (formerly FhGFS) is the leading parallel cluster file system, developed with a strong focus on
More informationThe Microsoft Large Mailbox Vision
WHITE PAPER The Microsoft Large Mailbox Vision Giving users large mailboxes without breaking your budget Introduction Giving your users the ability to store more email has many advantages. Large mailboxes
More informationAutomated Storage Tiering on Infortrend s ESVA Storage Systems
Automated Storage Tiering on Infortrend s ESVA Storage Systems White paper Abstract This white paper introduces automated storage tiering on Infortrend s ESVA storage arrays. Storage tiering can generate
More informationGoogle File System. By Dinesh Amatya
Google File System By Dinesh Amatya Google File System (GFS) Sanjay Ghemawat, Howard Gobioff, Shun-Tak Leung designed and implemented to meet rapidly growing demand of Google's data processing need a scalable
More informationData Movement & Tiering with DMF 7
Data Movement & Tiering with DMF 7 Kirill Malkin Director of Engineering April 2019 Why Move or Tier Data? We wish we could keep everything in DRAM, but It s volatile It s expensive Data in Memory 2 Why
More informationHP Designing and Implementing HP Enterprise Backup Solutions. Download Full Version :
HP HP0-771 Designing and Implementing HP Enterprise Backup Solutions Download Full Version : http://killexams.com/pass4sure/exam-detail/hp0-771 A. copy backup B. normal backup C. differential backup D.
More informationAcronis Storage 2.4. Installation Guide
Acronis Storage 2.4 Installation Guide June 06, 2018 Copyright Statement Acronis International GmbH, 2002-2016. All rights reserved. Acronis and Acronis Secure Zone are registered trademarks of Acronis
More informationCatalogic DPX TM 4.3. ECX 2.0 Best Practices for Deployment and Cataloging
Catalogic DPX TM 4.3 ECX 2.0 Best Practices for Deployment and Cataloging 1 Catalogic Software, Inc TM, 2015. All rights reserved. This publication contains proprietary and confidential material, and is
More informationHUAWEI OceanStor Enterprise Unified Storage System. HyperReplication Technical White Paper. Issue 01. Date HUAWEI TECHNOLOGIES CO., LTD.
HUAWEI OceanStor Enterprise Unified Storage System HyperReplication Technical White Paper Issue 01 Date 2014-03-20 HUAWEI TECHNOLOGIES CO., LTD. 2014. All rights reserved. No part of this document may
More informationFLAT DATACENTER STORAGE CHANDNI MODI (FN8692)
FLAT DATACENTER STORAGE CHANDNI MODI (FN8692) OUTLINE Flat datacenter storage Deterministic data placement in fds Metadata properties of fds Per-blob metadata in fds Dynamic Work Allocation in fds Replication
More informationSATA RAID For The Enterprise? Presented at the THIC Meeting at the Sony Auditorium, 3300 Zanker Rd, San Jose CA April 19-20,2005
Logo of Your organization SATA RAID For The Enterprise? Scott K. Cleland, Director of Marketing AMCC 455 West Maude Ave., Sunnyvale, CA 94085-3517 Phone:+1-408-523-1079 FAX: +1-408-523-1001 E-mail: scleland@amcc.com
More informationXenData Product Brief: XenData6 Server Software
XenData Product Brief: XenData6 Server Software XenData6 Server is the software that runs the XenData SX-10 Archive Appliance and the range of SX-520 Archive Servers, creating powerful solutions for archiving
More informationVirtuozzo 7. Installation Guide
Virtuozzo 7 Installation Guide August 07, 2018 Virtuozzo International GmbH Vordergasse 59 8200 Schaffhausen Switzerland Tel: + 41 52 632 0411 Fax: + 41 52 672 2010 https://virtuozzo.com Copyright 2001-2018
More informationVeeam Cloud Connect. Version 8.0. Administrator Guide
Veeam Cloud Connect Version 8.0 Administrator Guide June, 2015 2015 Veeam Software. All rights reserved. All trademarks are the property of their respective owners. No part of this publication may be reproduced,
More informationBackup and Recovery. Benefits. Introduction. Best-in-class offering. Easy-to-use Backup and Recovery solution
DeltaV Distributed Control System Product Data Sheet Backup and Recovery Best-in-class offering Easy-to-use Backup and Recovery solution Data protection and disaster recovery in a single solution Scalable
More informationThe Google File System
The Google File System Sanjay Ghemawat, Howard Gobioff, and Shun-Tak Leung SOSP 2003 presented by Kun Suo Outline GFS Background, Concepts and Key words Example of GFS Operations Some optimizations in
More informationVideo Surveillance EMC Storage with LENSEC Perspective VMS
Video Surveillance EMC Storage with LENSEC Perspective VMS Sizing Guide H14768 01 Copyright 2016 EMC Corporation. All rights reserved. Published in the USA. Published March, 2016 EMC believes the information
More informationINTRODUCTION TO XTREMIO METADATA-AWARE REPLICATION
Installing and Configuring the DM-MPIO WHITE PAPER INTRODUCTION TO XTREMIO METADATA-AWARE REPLICATION Abstract This white paper introduces XtremIO replication on X2 platforms. XtremIO replication leverages
More informationOvercoming Obstacles to Petabyte Archives
Overcoming Obstacles to Petabyte Archives Mike Holland Grau Data Storage, Inc. 609 S. Taylor Ave., Unit E, Louisville CO 80027-3091 Phone: +1-303-664-0060 FAX: +1-303-664-1680 E-mail: Mike@GrauData.com
More informationKubernetes Integration with Virtuozzo Storage
Kubernetes Integration with Virtuozzo Storage A Technical OCTOBER, 2017 2017 Virtuozzo. All rights reserved. 1 Application Container Storage Application containers appear to be the perfect tool for supporting
More informationCONFIGURATION GUIDE WHITE PAPER JULY ActiveScale. Family Configuration Guide
WHITE PAPER JULY 2018 ActiveScale Family Configuration Guide Introduction The world is awash in a sea of data. Unstructured data from our mobile devices, emails, social media, clickstreams, log files,
More informationSurveillance Dell EMC Storage with Verint Nextiva
Surveillance Dell EMC Storage with Verint Nextiva Sizing Guide H14897 REV 1.3 Copyright 2016-2017 Dell Inc. or its subsidiaries. All rights reserved. Published September 2017 Dell believes the information
More information