Tegile Best Practices for Oracle Databases
- Annabel Parrish
Contents

- Executive Summary
- Disclaimer
- About This Document
- Quick Start Guide
- High Level Recommendations
- LUN Sizing Recommendations
- Tegile IntelliFlash Storage Array Setup
  - Pools
  - Projects
  - LUNs
  - LUN block size for REDO Logs
  - Compression and Deduplication
- Linux OS Setup
  - Linux Multipathing
  - Adding Multipath Aliases
  - LUN permissions and UDEV rules
- Oracle Grid Install (ASM)
  - Using Multiple Arrays with Oracle ASM
  - ASM Redundancy Options
  - Best Practice for Multi-Array Configurations
- Oracle DB Install
- Deploying Oracle Databases in VMware vSphere virtualization
  - LUN Creation Guidelines
  - Virtual Machine Creation Guidelines
  - Hypervisor tuning
  - Linux Guest Configuration
- Tegile LUNs setup for Oracle Database in a VMware vSphere environment
  - Project and LUN Parameters
- How to create Tegile Snapshots for Oracle Database
- How to create clones of Oracle Database for test-dev from Tegile Snapshots
- Additional References
- Appendix
  - multipath.conf for Tegile arrays with 2.x firmware or older
  - multipath.conf for Tegile arrays with 3.x firmware or newer
Executive Summary

This document describes the process for installing an Oracle 12cR1 single-instance database on a Red Hat or OEL 6- or 7-compatible operating system using Tegile flash storage. Oracle Database 12cR1 and Oracle Linux 6.7 were used for the purposes of this document; however, Oracle 11gR2 and earlier versions of Linux have very similar, if not identical, setup methods. Any version-specific alterations in procedure are called out in the document. The physical test system was a 2-socket, 12-core (6 cores per socket) server with 48GB of memory, connected via 8Gb Fibre Channel to a Tegile T3700 all-flash array running IntelliFlash firmware in an active/active controller configuration. To take full advantage of the performance characteristics of Tegile flash storage, the Oracle Automatic Storage Management (ASM) volume manager is used to achieve raw-device performance (as opposed to using a file system).

Disclaimer

This document describes the process for building a generic system and does not take into account an individual customer's requirements for security, performance, resilience, and other operational aspects that may be relevant. Customers with existing operational guidelines should treat those guidelines with higher priority; where any advice in this document conflicts with existing policies, those policies should be adhered to. Tegile does not accept liability for any issues experienced as a result of following this document.

About This Document

This document details each step necessary to complete the installation, along with examples and expected outputs. Experienced users may find this level of detail unnecessary, so a Quick Start section showing only the high-level steps is also included.
Quick Start Guide

This section shows a high-level summary of the steps required to complete the installation:

1. Create LUNs from Tegile's GUI (see section Tegile IntelliFlash Storage Array Setup).
2. Install the oracle-rdbms-server-12cr1-preinstall package using yum (for 11gR2, use the oracle-rdbms-server-11gr2-preinstall package).
3. Install and configure the device-mapper multipathing software. Note that specific device details are required when adding entries to the multipath.conf file for Tegile arrays (see section Linux Multipathing).
4. Add aliases in the multipath.conf file for each LUN presented from Tegile arrays (see section Adding Multipath Aliases).
5. Create UDEV rules to handle LUNs presented from Tegile arrays. Again, there are specific configuration settings which must be set by these UDEV rules (see section LUN permissions and UDEV rules).
6. Create the Oracle Grid Infrastructure (see section Oracle Grid Install (ASM)).
7. Create the Oracle Database (see section Oracle DB Install).
8. Create a separate ASM disk group for REDO logs, following the REDO log guidelines in this document.
High Level Recommendations

Tegile makes the following recommendations for the use of Oracle software with Tegile arrays:

- Oracle Database and Grid Infrastructure (ASM) software version 11g Release 2 or later is recommended.
- Databases placed on Tegile all-flash arrays should have a database block size of 4K or greater (e.g. the default value of 8K is acceptable).

LUN Sizing Recommendations

The design of Tegile arrays allows a single LUN to deliver the full performance capability of each active controller. However, because this capability is so high, many operating systems exhibit bottlenecks at the OS queue level if a single LUN is used. For this reason, Tegile recommends using multiple LUNs in groups of eight per array (four per active controller when active/active is configured) for each data storage point (e.g. ASM disk group or file system):

- ASM disk groups containing database DATA or fast recovery areas should comprise 8 LUNs spread equally across active controllers.
- If multiple arrays are used, the above recommendation should be adapted to allow a minimum of 8 LUNs per disk group spread over all arrays. For example, a +DATA disk group spread over four arrays would have a minimum of 2 LUNs per array (1 LUN per controller), making 8 LUNs in total.
- For locations containing files which are infrequently accessed (e.g. database parameter files, +GRID disk groups, etc.), it is recommended to place them on a mirrored disk group for redundancy.
- To avoid the unnecessary overhead of ASM rebalances, Tegile recommends that customers do not add LUNs to a striped disk group to increase capacity, but instead increase the size of existing LUNs.
- Unless there are specific use cases driving a smaller block size on the Tegile LUNs, an 8K database block size should be used to ensure maximum performance from the array as well as maximized compression results when compression is enabled.

These recommendations apply to bare-metal environments (i.e. a Linux OS running without a hypervisor).
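The distribution rule above (a minimum of eight LUNs per disk group, spread evenly over arrays and active controllers) can be sketched as a small helper script; the function name and the array-count input are illustrative, not part of the Tegile tooling:

```shell
#!/bin/sh
# Sketch: distribute a minimum of 8 LUNs per ASM disk group across arrays,
# keeping an even split per array (and per active controller).
lun_layout() {
    arrays=$1
    # At least 8 LUNs total; round up so every array gets the same count.
    per_array=$(( (8 + arrays - 1) / arrays ))
    total=$(( per_array * arrays ))
    per_controller=$(( per_array / 2 ))   # active/active: two controllers
    echo "arrays=$arrays luns_per_array=$per_array total_luns=$total luns_per_controller=$per_controller"
}

lun_layout 1   # arrays=1 luns_per_array=8 total_luns=8 luns_per_controller=4
lun_layout 4   # arrays=4 luns_per_array=2 total_luns=8 luns_per_controller=1
```

This reproduces the four-array example in the text: 2 LUNs per array (1 per controller), 8 LUNs in total.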
Tegile IntelliFlash Storage Array Setup

Tegile is pioneering a new generation of affordable, feature-rich storage arrays that are dramatically faster and can store more effective data than traditional arrays. The Tegile all-flash array uses an active/active controller architecture to give an Oracle environment the highest level of array performance while maintaining a fully redundant, highly available system. The following array setup takes this architecture into account when creating LUNs to be presented to ASM for the highest-performing design.

In the Tegile T3700 array GUI, five IP addresses are assigned for managing the array: two IPMI addresses (one per controller), two management addresses for managing the controllers individually, and one HA address for managing the entire array. In an active/active configuration the HA address can be used to provision storage to both controllers. If the array were configured active/passive, each controller would need to be managed through its individual MGMT address.

Pools

Clicking the Data menu item at the top of the GUI shows the pool-a and pool-b pools. A pool can be understood as the storage associated with each controller. By selecting a pool, the storage available to that particular controller can be provisioned in terms of projects, LUNs, and file systems.

Projects

Projects are an elegant way to encapsulate a group of like LUNs with their base characteristics. By creating a LUN or group of LUNs within a project, activities such as snapshot scheduling and clone creation can be managed from a single place for the entire group of LUNs. Furthermore, a default set of parameters such as networking settings, block sizes, and compression algorithms can be defined so that LUNs created under the project inherit the same settings.
For Oracle database best practices, follow these steps when creating a project:

1) Provide a project name and select Generic as the purpose. (Future versions of the GUI will have these Oracle best-practice settings incorporated into a template.) Select a networking protocol.
2) Based on your specific requirements for the LUNs to be created, complete the FC Target Group information accordingly.
3) Complete the Initiator Group settings.
4) The next screen provides the data-reduction options, Deduplication and Compression. Oracle databases are generally not good candidates for deduplication because each database block is unique, with its own header and storage metadata. Compression, however, is a very valid selection with negligible performance impact. For top performance with adequate levels of compression, keep lz4 selected as the compression algorithm.
5) Complete the Snapshot Policy for your environment.
6) Review and finish.
LUNs

Per the High Level Recommendations section earlier in this document, a total of 9 LUNs are created in this best-practice exercise. If an FRA (Fast Recovery Area) were also configured, this number would increase to 17 LUNs: 1 for the Grid Infrastructure files (ASM), 8 for the +DATA disk group, and 8 for the +FRA disk group.

Adopt a meaningful LUN naming methodology to easily identify devices on the Oracle host. The naming methodology demonstrated here is in the format pool-letter_usage_blocksize_lunsize, e.g. a_grid01_8k_5gb.

Unless there are specific use cases driving a smaller block size on the Tegile LUNs, an 8K database block size should be used to ensure maximum performance from the array as well as maximized compression results when compression is enabled.

1) Create the single small LUN for the Oracle Grid Infrastructure files on one of the controllers. This example shows this occurring on the a controller (pool) in the orclmicro1 project.
2) Create the remainder of the database LUNs following a similar naming convention for the +DATA and +FRA (if necessary) disk groups. The final configuration appears as below.

Pool-a
Pool-b

LUN block size for REDO Logs

Redo logs are transactional journals; every transaction is recorded in the redo logs, which are flushed to disk at regular intervals determined by multiple factors beyond the scope of this document. It is recommended to create a separate ASM disk group with redundancy for redo logs and assign multiple LUNs to it, up to 8. When creating LUNs for redo logs, use a larger LUN block size (between 64K and 128K), disable deduplication on these LUNs, and set LOGBIAS=Latency.

To determine the ideal LUN block size for redo logs, an AWR report snapshot can show the highest block-size count for your database and the redo wastage. If AWR analysis shows that the LUN block size for your redo logs is not ideal and there is too much redo wastage, you can create new LUNs with a different block size, add them to a new ASM disk group, create new redo logs on the new disk group, and then drop the old redo logs.

Compression and Deduplication

Inline deduplication and compression extend usable capacity well beyond raw capacity, reducing overall storage capacity requirements by as much as 50 percent according to Tegile customer deployments in the field. However, due to the nature of Oracle database blocks and the underlying data, Oracle deployments are NOT well suited to deduplication, so this option should be avoided for standard Oracle installations. Compression, on the other hand, is a valid and powerful way of reducing the overall capacity requirements of the database. Different compression algorithms have different characteristics; for best-practice purposes, the lz4 algorithm should be used for database workloads. Refer to the chart below for other algorithms and their characteristics.
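As a rough worked example of the AWR analysis above, redo wastage can be expressed as a percentage of the total redo written; the two input values below are made-up sample numbers, not from a real AWR report:

```shell
#!/bin/sh
# Sketch: compute redo wastage as a percentage of redo size.
# Both inputs (bytes) would come from the AWR "redo wastage" and
# "redo size" statistics; the values used here are hypothetical.
redo_wastage_pct() {
    wastage=$1; size=$2
    # An integer percentage is enough for a go/no-go judgement.
    echo $(( wastage * 100 / size ))
}

redo_wastage_pct 104857600 2147483648   # ~100MB wasted of 2GB written -> 4
```

A persistently high percentage would be one signal to re-evaluate the redo LUN block size as described above.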
Linux OS Setup

Follow the Oracle installation guide for setting up the Oracle server environment. On Oracle Enterprise Linux and RHEL, the prerequisite package should be installed for the appropriate Oracle version: oracle-rdbms-server-12cr1-preinstall or oracle-rdbms-server-11gr2-preinstall.

Linux Multipathing

Multipathing software provides resilience and performance benefits when multiple paths exist between storage devices and servers. With Fibre Channel storage there are usually multiple paths through the fabric over which LUNs can be presented. The multipathing software detects which duplicate paths correspond to each underlying physical device and combines them into a single virtual device. The primary benefit of this virtual device is that any underlying path failure can be tolerated provided at least one path remains available: the multipathing software detects failed paths and re-issues any failed I/O requests on a remaining active path, transparently to the caller. This transparency is essential for Oracle software such as ASM and the database, which are unaware of the path topology and have no built-in functionality to perform the same task. An additional benefit of multipathing is lower latency, gained by spreading I/O requests over the underlying paths. This is of particular importance when using high-performance storage such as Tegile flash arrays.

Adding Multipath Aliases

By default, the multipath virtual devices corresponding to LUNs presented from Tegile have generated names, which may not be useful to administrators. For manageability, Tegile recommends renaming these devices to names that are more obviously associated with their corresponding target. Possible naming conventions include the use of the array name or the intended ASM disk (e.g. DATA1).

Each LUN presented from Tegile has a unique identifier (WWID). These identifiers are used to create the user-friendly aliases in the multipath configuration file, so a list of the existing LUNs is needed; the command multipath -ll shows all devices known to the multipathing software:

[root ~]# multipath -ll | grep TEGILE | sort   (partial listing)
f0d16d a0010 dm-3 TEGILE,ZEBI-FC
f0d16d dm-5 TEGILE,ZEBI-FC
f0d16d dm-6 TEGILE,ZEBI-FC
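Producing one alias stanza per WWID by hand is error-prone. A sketch of automating it from the multipath -ll output follows; the WWIDs in the captured sample and the a_dataNN naming scheme are hypothetical placeholders:

```shell
#!/bin/sh
# Sketch: turn "multipath -ll" WWID lines into multipath.conf alias stanzas.
# Assumed input format: "<wwid> <dm-name> TEGILE,<product>" per line.
# A captured sample stands in for the live command output.
sample_output() {
    cat <<'EOF'
36d8f0d16d0000000000000000000000a dm-3 TEGILE,ZEBI-FC
36d8f0d16d0000000000000000000000b dm-5 TEGILE,ZEBI-FC
EOF
}

# Emit one multipath { wwid ... alias ... } block per device, numbered in order.
sample_output | awk '{ printf "    multipath {\n        wwid %s\n        alias a_data%02d_8k_125gb\n    }\n", $1, NR }'
```

On a live host, `multipath -ll | grep TEGILE | sort` would replace the sample function, and the generated stanzas would be pasted into the multipaths section of /etc/multipath.conf after review.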
Based on these values, entries should be added to the /etc/multipath.conf file:

defaults {
    polling_interval 5
    path_grouping_policy multibus
    failback immediate
    user_friendly_names yes
    max_fds 8192
}
devices {
    device {
        vendor "TEGILE"
        product "ZEBI-FC"          # 2.x Fibre Channel
        #product "ZEBI-ISCSI"      # 2.x iSCSI
        #product "INTELLIFLASH"    # 3.x Fibre Channel and iSCSI
        hardware_handler "1 alua"
        path_selector "round-robin 0"
        path_grouping_policy "group_by_prio"
        no_path_retry 10
        dev_loss_tmo 50
        path_checker tur
        prio alua
        failback 30
        rr_min_io 128
    }
}
# Example of setting user-defined names for multipath devices:
multipaths {
    multipath {
        wwid f0d16d d
        alias a_data01_8k_125gb
    }
    multipath {
        wwid f0d16d d
        alias a_data02_8k_125gb
    }
}

NOTE: The above listing is for Tegile arrays running code versions 2.x. If the Tegile array is running 3.x or newer code, the only difference is that the product line changes from product "ZEBI-FC" to product "INTELLIFLASH".

The final step in the process is to flush the device mapper and order multipath to pick up the new user-defined configuration:

[root ~]# multipath -F
[root ~]# multipath -v2
The devices now exist in the /dev/mapper directory as expected:

[root ~]# ls -l /dev/mapper/   (partial listing)
lrwxrwxrwx 1 root root 7 Nov 23 09:49 a_data01_8k_125gb -> ../dm-3
lrwxrwxrwx 1 root root 7 Nov 23 09:49 a_data02_8k_125gb -> ../dm-5
lrwxrwxrwx 1 root root 7 Nov 23 09:49 a_data03_8k_125gb -> ../dm-6
lrwxrwxrwx 1 root root 7 Nov 23 09:49 a_data04_8k_125gb -> ../dm-7
lrwxrwxrwx 1 root root 7 Nov 23 06:04 a_grid_8k_5gb -> ../dm-2
lrwxrwxrwx 1 root root 7 Nov 23 09:49 b_data01_8k_125gb -> ../dm-8
lrwxrwxrwx 1 root root 7 Nov 23 09:49 b_data02_8k_125gb -> ../dm-9
lrwxrwxrwx 1 root root 8 Nov 23 09:49 b_data03_8k_125gb -> ../dm-10
lrwxrwxrwx 1 root root 8 Nov 23 09:49 b_data04_8k_125gb -> ../dm-11

LUN permissions and UDEV rules

The I/O scheduler determines the way in which block I/O operations are submitted to storage. There are a number of different I/O schedulers available in the Linux kernel by default, but a common theme in their behavior is the aim of reducing the impact of hard-drive seek time: most work by assigning I/O operations to queues and reordering them to reduce the time disk heads spend moving between locations. On enterprise Linux kernels such as RHEL and SLES, the cfq scheduler is enabled by default. Flash memory has no seek time and exhibits latencies that are frequently less than a millisecond, so there is minimal gain from this scheduler; tests have consistently shown a significant increase in performance when switching to the simplest scheduler, noop.

To set all Tegile devices to use these values, a new UDEV rule must be created. UDEV is the Linux device manager, which dynamically creates and maintains the device files found in the /dev directory. UDEV uses rules files located in the /etc/udev/rules.d directory, so a new file should be created there. The name of the file and its contents depend on the version of Linux in use.
Red Hat Enterprise Linux 6 / Oracle Linux 6

Create a file with the name 50-tegile.rules:

[root ~]# vi /etc/udev/rules.d/50-tegile.rules

This file will contain the following UDEV rules; each rule must remain on a single line (take care not to introduce any additional line breaks, as the syntax is very sensitive):

### /etc/udev/rules.d/50-tegile.rules (code levels 2.x and 3.x)
### This example is for 2.x FC. For 2.x iSCSI, replace the model match with SYSFS{model}=="ZEBI-ISCSI".
### For 3.x FC and iSCSI, replace the model match with SYSFS{model}=="INTELLIFLASH*".

# RHEL 6: set scheduler and queue depth for Tegile SCSI devices
KERNEL=="sd*[!0-9]|sg*", BUS=="scsi", SYSFS{vendor}=="TEGILE", SYSFS{model}=="ZEBI-FC", RUN+="/bin/sh -c 'echo noop > /sys$devpath/queue/scheduler && echo 128 > /sys$devpath/queue/nr_requests'"

# Set owner to oracle/dba for Tegile multipath devices
KERNEL=="dm-[0-9]*", ENV{DM_UUID}=="mpath*f0*", OWNER:="oracle", GROUP:="dba", MODE:="660", RUN+="/bin/sh -c 'echo noop > /sys$devpath/queue/scheduler && echo 128 > /sys$devpath/queue/nr_requests'"

Finally, the UDEV subsystem must be told to re-read and apply the new rules:

[root ~]# udevadm control --reload-rules
[root ~]# udevadm trigger

Check that the new rules have taken effect: the owner of the /dev/dm-* Tegile devices should now be oracle:dba:

[root ~]# ls -l /dev/dm-*   (partial listing)
brw-rw---- 1 oracle dba 252, 10 Nov 23 09:49 /dev/dm-10
brw-rw---- 1 oracle dba 252, 11 Nov 23 09:49 /dev/dm-11
brw-rw---- 1 oracle dba 252, 2 Nov 23 10:18 /dev/dm-2
brw-rw---- 1 oracle dba 252, 3 Nov 23 09:49 /dev/dm-3
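Because a udev rule stops working if it gets wrapped onto a second line, a quick sanity check can catch accidental wrapping before the rules are reloaded. The heuristic below (unbalanced double quotes on a non-comment line) and the sample file are illustrative, not a full udev parser:

```shell
#!/bin/sh
# Sketch: flag lines in a udev rules file that look like fragments of a
# wrapped rule, using unbalanced double quotes as the tell-tale.
check_rules() {
    awk '
        /^[[:space:]]*(#|$)/ { next }     # comments and blank lines are fine
        { n = gsub(/"/, "&") }            # count double quotes on the line
        n % 2 { print FILENAME ":" FNR ": unbalanced quotes (wrapped rule?)"; bad = 1 }
        END { exit bad }
    ' "$1"
}

# Hypothetical sample with one deliberately wrapped (broken) rule.
cat > /tmp/sample.rules <<'EOF'
# good rule
KERNEL=="dm-[0-9]*", OWNER:="oracle", GROUP:="dba", MODE:="660"
KERNEL=="sd*[!0-9]", RUN+="/bin/sh -c 'echo noop
EOF
check_rules /tmp/sample.rules || echo "fix the rules file before running udevadm trigger"
```

Running the check against the real /etc/udev/rules.d/50-tegile.rules before `udevadm trigger` avoids silently inert rules.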
Oracle Grid Install (ASM)

The procedure for installing Oracle Automatic Storage Management and creating disk groups follows the standard process described in the Oracle documentation.

Using Multiple Arrays with Oracle ASM

For configurations using multiple Tegile arrays with Oracle Automatic Storage Management, additional design considerations exist. Further assistance with this design process can be sought from the Tegile technical organization.

ASM Redundancy Options

The standard recommendation for ASM redundancy is to use the EXTERNAL option, i.e. data is not mirrored by ASM. However, for multi-array configurations there are situations where NORMAL or HIGH redundancy may be preferable:

- Oracle Real Application Clusters extended cluster configurations, where data must be mirrored across sites
- Configurations requiring the highest level of availability, where data is mirrored across multiple arrays (adding ASM redundancy on top of Tegile's RAID feature)
- Disk groups containing the critical Oracle files listed in the following section

Best Practice for Multi-Array Configurations

For any multi-array configuration, Tegile recommends that the following critical Oracle files be mirrored across more than one array:

- Oracle Database control files
- Oracle Database online redo logs
- Oracle Clusterware voting disks
- Oracle Clusterware cluster registry (OCR)

This can be achieved either through the creation of multiplexed files (for example, by placing online redo log members on multiple arrays) or through ASM mirroring (creating a single ASM disk group across multiple arrays using NORMAL or HIGH redundancy).
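As a sketch of the disk group layout these recommendations imply, using the multipath aliases created earlier: the disk group names, the ASM_DISKSTRING pattern, and the second grid LUN (b_grid_8k_5gb) are illustrative assumptions, not values mandated by this document.

```sql
-- Sketch only: run as SYSASM on the Grid Infrastructure instance.
-- Point ASM at the aliased multipath devices (pattern is an assumption).
ALTER SYSTEM SET asm_diskstring = '/dev/mapper/*_*k_*' SCOPE=BOTH;

-- EXTERNAL redundancy for +DATA: the array provides RAID protection, and
-- the eight LUNs are split across the a_ and b_ controller pools.
CREATE DISKGROUP DATA EXTERNAL REDUNDANCY
  DISK '/dev/mapper/a_data01_8k_125gb',
       '/dev/mapper/a_data02_8k_125gb',
       '/dev/mapper/a_data03_8k_125gb',
       '/dev/mapper/a_data04_8k_125gb',
       '/dev/mapper/b_data01_8k_125gb',
       '/dev/mapper/b_data02_8k_125gb',
       '/dev/mapper/b_data03_8k_125gb',
       '/dev/mapper/b_data04_8k_125gb';

-- NORMAL redundancy for the small group holding infrequently accessed
-- grid files, per the mirrored-disk-group recommendation above.
CREATE DISKGROUP GRID NORMAL REDUNDANCY
  DISK '/dev/mapper/a_grid_8k_5gb',
       '/dev/mapper/b_grid_8k_5gb';
```

In a multi-array configuration, NORMAL or HIGH redundancy groups would draw their disks from different arrays so the mirroring actually spans arrays.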
Oracle DB Install

To achieve optimum performance, consider the following elements when configuring the Oracle database to run on NAND flash storage:

- Database block size (set by parameter db_block_size): allowable values in Oracle are 2K, 4K, 8K (the default), 16K, and 32K. To ensure optimal performance, values of 8K or greater should always be used with Tegile arrays.
- Online redo log block size: by default, this is 512 bytes. Note that this value is what Oracle discovers when querying the geometry of the LUN, not the LUN block size. LUN block size for REDO logs was discussed in a previous section.

Database Creation

There are no special procedures required during the creation of databases on Tegile arrays.
Deploying Oracle Databases in VMware vSphere virtualization

LUN Creation Guidelines

When creating LUNs on Tegile arrays to be used for ASM by virtual machines running Linux and Oracle RDBMS, the golden rules are:

- Create multiple LUNs (at least 4 for ASM data).
- Create 2 LUNs for GRID (ASM).
- Create the ASM LUNs within a project for snapshot, cloning, and backup purposes.
- Always select thin-provisioned LUNs.
- Select the purpose Virtual Server for a VMDK/VMFS datastore (do not select Database with a lower block size).
- Select the intended protocol: FC or iSCSI.
- Deduplication is disabled; do not enable it.
- LZ4 compression is always enabled; leave it enabled.
- After bringing the LUN under ESX control as a VMFS datastore, create one thin VDISK per LUN. For performance reasons, do not span multiple VDISKs over one VMFS datastore created on one Tegile LUN.
There are situations where a customer wants to use Raw Device Mappings (RDM) instead of VMDKs; the advantages and disadvantages of each are listed below.

Raw Device Mapping (RDM):
- Advantages: legacy option with easy P-to-V migration; array snapshots can be used; the hypervisor is completely bypassed.
- Disadvantages: a VM using RDMs cannot be live-migrated; storage cannot be migrated using Storage vMotion; SIOC (Storage IO Control) cannot be used.

VMFS datastore (VMDK/VDISK):
- Advantages: array snapshots can be used; hypervisor latency is minimal with proper tuning; vMotion, Storage vMotion, and SIOC can be used; vSphere Replication using the Tegile SRA.
- Disadvantages: no known disadvantages.

Virtual Machine Creation Guidelines

The following guidelines are highly recommended when deploying Oracle RDBMS under Linux in a virtual machine running on the VMware hypervisor:

- Virtual machine version: always use the latest; version 8 or 9 is recommended depending on the version of ESXi.
- PVSCSI (VMware Paravirtual) vs. LSI Logic controller: PVSCSI is recommended, as it uses less vCPU and is more efficient. PVSCSI queue depths are configurable up to 256 per device and 1024 per adapter; refer to the VMware Knowledgebase for best write performance.
- Add each VDISK used for ASM to a different SCSI ID and controller so that I/O is spread over multiple virtual SCSI controllers, for better performance and storage efficiency:
  - Add ASM-DATA1 to scsi(1,0)
  - Add ASM-DATA2 to scsi(2,0)
  - Add ASM-DATA3 to scsi(3,0)
  - Add ASM-DATA4 to scsi(0,1)
- Configure the VM to produce true UUIDs for LUNs as seen by Linux. This is only required if RDM LUNs are exposed to a virtual machine running Linux and Oracle RDBMS.
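The controller spread above corresponds to .vmx entries along these lines. This is a sketch: the VMDK file names are hypothetical, and only two of the four controllers are shown; `disk.EnableUUID` is the setting that exposes true UUIDs to the guest.

```
scsi1.present = "TRUE"
scsi1.virtualDev = "pvscsi"
scsi1:0.present = "TRUE"
scsi1:0.fileName = "ASM-DATA1.vmdk"
scsi2.present = "TRUE"
scsi2.virtualDev = "pvscsi"
scsi2:0.present = "TRUE"
scsi2:0.fileName = "ASM-DATA2.vmdk"
# Expose true UUIDs for LUNs to the Linux guest:
disk.EnableUUID = "TRUE"
```

In practice these settings are usually made through the vSphere client (add a new SCSI controller of type VMware Paravirtual, then attach each VDISK to a distinct controller) rather than by editing the .vmx file directly.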
Hypervisor tuning

When using VMDKs for ASM, ensure that the hypervisor tunables for FC and iSCSI are set for optimal performance. It is highly advisable to use the Tegile vSphere Plugin to set the tunables. The list below gives the parameters and commands that can be used in lieu of the vSphere Plugin; these commands vary slightly between vSphere releases, and the syntax provided is for vSphere 5.5.

- HBA queue depth for QLogic, Emulex, and Brocade: 256 (reboot required). Refer to VMware KB article 1267.
- Maximum outstanding disk requests for virtual machines: 64 (no reboot). Refer to VMware KB article 1268.
- Multipath policy: Round Robin (no reboot). Refer to the VMware Knowledgebase.
- IOPS tunable (no reboot): tune this value to see what yields the best performance for a given workload.
- Maximum queue depth for software iSCSI: 8192 (reboot required):
  esxcli system module parameters set -p "iscsivmk_HostQDepth=8192 iscsivmk_LunQDepth=1024" -m iscsi_vmk
- iSCSI jumbo frames: MTU 9000 (reboot required). Refer to the VMware Knowledgebase.

Knowledgebase articles can be retrieved from the VMware KB site by article number.
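The multipath-policy and IOPS rows above can be applied per device with esxcli. The sketch below only builds the command strings so they can be reviewed before running them on the ESXi host; the naa device IDs are placeholders, and the IOPS value of 1 is a common starting point for tuning, not a Tegile-stated value:

```shell
#!/bin/sh
# Sketch: emit the esxcli commands that set round-robin pathing and an
# IOPS-based path-switch threshold for each device. Run the emitted
# commands on the ESXi host after review.
emit_rr_tuning() {
    iops=$1; shift
    for dev in "$@"; do
        echo "esxcli storage nmp device set --device=$dev --psp=VMW_PSP_RR"
        echo "esxcli storage nmp psp roundrobin deviceconfig set --device=$dev --type=iops --iops=$iops"
    done
}

emit_rr_tuning 1 naa.600000000000000000000001 naa.600000000000000000000002
```

Piping the output into an SSH session to the host (or running it in the ESXi shell) applies the settings; re-running with a different IOPS value supports the workload-by-workload tuning the text recommends.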
Linux Guest Configuration

Always install VMware Tools in the Linux guest operating system.

Guest operating system disk timeout for RDM and VMware virtual disks: on a Linux VM, add a UDEV rule with the following entry:

DRIVERS=="sd", SYSFS{TYPE}=="0|7|14", RUN+="/bin/sh -c 'echo 180 > /sys$$DEVPATH/timeout'"

Change the PVSCSI queue depth using the parameters vmw_pvscsi.cmd_per_lun=254 and vmw_pvscsi.ring_pages=32 in the GRUB configuration, or by creating a new boot image after putting these options in a new /etc/modprobe.d/pvscsi.conf file. This change requires a reboot; verify it using the commands below:

$ cat /sys/module/vmw_pvscsi/parameters/cmd_per_lun
$ cat /sys/module/vmw_pvscsi/parameters/ring_pages

UDEV parameters inside the guest: set the I/O scheduler to noop and nr_requests to 128:

### /etc/udev/rules.d/51-tegile.rules (this example is for VMware VDisks only)
# RHEL 6: set scheduler and queue depth for VMware virtual disks
KERNEL=="sd*[!0-9]|sg*", BUS=="scsi", SYSFS{vendor}=="VMware", SYSFS{model}=="Virtual Disk", RUN+="/bin/sh -c 'echo noop > /sys$devpath/queue/scheduler && echo 128 > /sys$devpath/queue/nr_requests'"
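The modprobe.d route above can be sketched as follows. The target directory is parameterized here so the generated file can be inspected before touching the real system (point TARGET at /etc/modprobe.d on the actual guest); rebuilding the initramfs afterwards, e.g. with dracut, is assumed but distribution-dependent:

```shell
#!/bin/sh
# Sketch: persist the PVSCSI queue-depth options via modprobe.d.
# TARGET defaults to a scratch directory; set TARGET=/etc/modprobe.d
# on the real guest (root required), then rebuild the boot image.
TARGET=${TARGET:-$(mktemp -d)}
cat > "$TARGET/pvscsi.conf" <<'EOF'
# PVSCSI queue depth tuning (requires reboot / initramfs rebuild)
options vmw_pvscsi cmd_per_lun=254 ring_pages=32
EOF
echo "wrote $TARGET/pvscsi.conf"
```

After the reboot, the two /sys/module/vmw_pvscsi verification commands shown above should report 254 and 32.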
Tegile LUNs setup for Oracle Database in a VMware vSphere environment

Project and LUN Parameters

This section provides an overview of how to set up Tegile LUNs for vSphere running Oracle RDBMS. Tegile storage configuration is unique in its use of projects, which allow a user to group LUNs into buckets that offer parameter inheritance, target and initiator grouping, and the ability to snapshot and clone the group of LUNs in a project as a unit. A typical project configuration for vSphere:

Project vsphere-server-boot-LUNS (purpose: Virtual Server; LUNs for booting the ESXi server):
- Compression LZ4; Deduplication ON; DRAM cache meta; SSD cache meta; Logbias throughput; Block size 32K

Project vsphere-virtual-MC-OS-Datastores (purpose: Virtual Server; datastore hosting the VMs running Oracle on Linux; this datastore will have multiple VMs):
- Compression LZ4; Deduplication ON; DRAM cache all; SSD cache all; Logbias latency; Block size 32K

Project Oracle-ASM-Datastores (purpose: Virtual Server; LUNs used only for ASM data/logs; each datastore maps to only one VDISK):
- Compression LZ4; Deduplication ON; DRAM cache all; SSD cache all; Logbias latency; Block size 32K

The figure below shows a typical project schema and how snapshots and clones work:

1. The project for boot VMs is on CTLR-B; it could be on a hybrid pool.
2. The project for OS VMs is on CTLR-A; an all-flash pool is recommended.
3. The projects for Oracle LUNs are on CTLR-A; an all-flash pool is recommended.
4. VM1 boots from the OS-VMS datastore.
5. The project Oracle-LUNS-VM1 has 7 LUNs: 4 DATA, 2 GRID, and 1 REDO.
6. These LUNs are brought under ESXi control as VMFS datastores.
7. They are then exported to VM1 as virtual disks.
How to create Tegile Snapshots for Oracle Database

Tegile Snapshot creation

Referencing the diagram above, a Tegile snapshot can be taken of the project Oracle-ASM-VM1. This takes an instant snapshot of all the LUNs in that project. A space-optimized snapshot can be triggered from the project properties in the GUI or by a REST API call to the array. If quiesce is turned on, the snapshot will be synchronously crash-consistent across all LUNs.
How to create clones of Oracle Database for test-dev from Tegile Snapshots

Tegile Clone Creation

A clone of all the LUNs can be created for test-dev use via the GUI or a REST API call:

1. Select the snapshot, click Clone, and click YES.
2. Provide a clone name and click Inherit settings. This makes the clone LUNs available to the same ESXi server.
3. The clone LUNs can be brought into VM3 as a test-dev environment.

These clones are space-optimized, and multiple such test-dev copies can be created. The process can also be automated using the REST API.
Additional References

Tegile Best Practices and Reference Architectures for vSphere
- Tegile Best Practices for VMware vSphere
- Tegile and Oracle Reference Architecture with Cisco UCS

Additional References for Oracle in VMware Environments
- Oracle Databases on VMware Best Practices Guide
- Oracle Databases on VMware High Availability Guidelines
- Oracle Databases High Availability on VMware vSphere
- Oracle Databases on VMware Workload Characterization
- Oracle Databases on VMware RAC Deployment Guide
Appendix

multipath.conf for Tegile arrays with 2.x firmware or older

defaults {
    polling_interval 5
    path_grouping_policy multibus
    failback immediate
    user_friendly_names yes
    max_fds 8192
}
devices {
    device {
        vendor "TEGILE"
        product "ZEBI-FC"
        hardware_handler "1 alua"
        path_selector "round-robin 0"
        path_grouping_policy "group_by_prio"
        no_path_retry 10
        dev_loss_tmo 50
        path_checker tur
        prio alua
        failback 30
        rr_min_io 128
    }
}
multipaths {
    multipath {
        wwid f0d16d d35ad000a
        alias b_data01_8k_125gb
    }
    multipath {
        wwid f0d16d d
        alias b_data02_8k_125gb
    }
}

multipath.conf for Tegile arrays with 3.x firmware or newer

defaults {
    polling_interval 5
    path_grouping_policy multibus
    failback immediate
    user_friendly_names yes
    max_fds 8192
}
devices {
    device {
        vendor "TEGILE"
        product "INTELLIFLASH"
        hardware_handler "1 alua"
        path_selector "round-robin 0"
        path_grouping_policy "group_by_prio"
        no_path_retry 10
        dev_loss_tmo 50
        path_checker tur
        prio alua
        failback 30
        rr_min_io 128
    }
}
multipaths {
    multipath {
        wwid f0d16d d35ad000a
        alias b_data01_8k_125gb
    }
    multipath {
        wwid f0d16d d
        alias b_data02_8k_125gb
    }
}
NAS for Server Virtualization Dennis Chapman Senior Technical Director NetApp Agenda The Landscape has Changed New Customer Requirements The Market has Begun to Move Comparing Performance Results Storage
More informationLINUX IO performance tuning for IBM System Storage
LINUX IO performance tuning for IBM System Storage Location of this document: http://www.ibm.com/support/techdocs/atsmastr.nsf/webindex/wp102584 Markus Fehling Certified IT specialist cross systems isicc@de.ibm.com
More informationVirtual Server Agent for VMware VMware VADP Virtualization Architecture
Virtual Server Agent for VMware VMware VADP Virtualization Architecture Published On: 11/19/2013 V10 Service Pack 4A Page 1 of 18 VMware VADP Virtualization Architecture - Virtual Server Agent for VMware
More informationMission-Critical Databases in the Cloud. Oracle RAC in Microsoft Azure Enabled by FlashGrid Software.
Mission-Critical Databases in the Cloud. Oracle RAC in Microsoft Azure Enabled by FlashGrid Software. White Paper rev. 2017-10-16 2017 FlashGrid Inc. 1 www.flashgrid.io Abstract Ensuring high availability
More informationCisco HyperFlex Hyperconverged Infrastructure Solution for SAP HANA
Cisco HyperFlex Hyperconverged Infrastructure Solution for SAP HANA Learn best practices for running SAP HANA on the Cisco HyperFlex hyperconverged infrastructure (HCI) solution. 2018 Cisco and/or its
More informationIntroducing Tegile. Company Overview. Product Overview. Solutions & Use Cases. Partnering with Tegile
Tegile Systems 1 Introducing Tegile Company Overview Product Overview Solutions & Use Cases Partnering with Tegile 2 Company Overview Company Overview Te gile - [tey-jile] Tegile = technology + agile Founded
More informationBackup and Recovery Best Practices With Tintri VMstore
Backup and Recovery Best Practices With Tintri VMstore Backup and Recovery Best Practices with Tintri VMstore TECHNICAL BEST PRACTICES PAPER, Revision 1.0, April 10, 2014 Contents Contents Introduction
More informationThe Oracle Database Appliance I/O and Performance Architecture
Simple Reliable Affordable The Oracle Database Appliance I/O and Performance Architecture Tammy Bednar, Sr. Principal Product Manager, ODA 1 Copyright 2012, Oracle and/or its affiliates. All rights reserved.
More informationBest Practices for Implementing VMware vsphere in a Dell PS Series Storage Environment
Best Practices for Implementing VMware vsphere in a Dell PS Series Storage Environment Abstract Dell EMC recommended best practices for configuring VMware vsphere hosts connecting to Dell PS Series storage
More informationA Kaminario Reference Architecture: Reference Architecture for Running SQL Server on ESXi
A Kaminario Reference Architecture: Reference Architecture for Running SQL Server on ESXi December 2017 TABLE OF CONTENTS 2 2 3 3 10 11 Executive Summary Introduction to Kaminario K2 Microsoft SQL Server
More informationConfiguring and Managing Virtual Storage
Configuring and Managing Virtual Storage Module 6 You Are Here Course Introduction Introduction to Virtualization Creating Virtual Machines VMware vcenter Server Configuring and Managing Virtual Networks
More informationConfiguration Guide -Server Connection-
FUJITSU Storage ETERNUS DX, ETERNUS AF Configuration Guide -Server Connection- (Fibre Channel) for Citrix XenServer This page is intentionally left blank. Preface This manual briefly explains the operations
More informationIOmark- VM. IBM IBM FlashSystem V9000 Test Report: VM a Test Report Date: 5, December
IOmark- VM IBM IBM FlashSystem V9000 Test Report: VM- 151205- a Test Report Date: 5, December 2015 Copyright 2010-2015 Evaluator Group, Inc. All rights reserved. IOmark- VM, IOmark- VDI, VDI- IOmark, and
More informationHedvig as backup target for Veeam
Hedvig as backup target for Veeam Solution Whitepaper Version 1.0 April 2018 Table of contents Executive overview... 3 Introduction... 3 Solution components... 4 Hedvig... 4 Hedvig Virtual Disk (vdisk)...
More informationiscsi Target Usage Guide December 15, 2017
December 15, 2017 1 Table of Contents 1. Native VMware Availability Options for vsan 1.1.Native VMware Availability Options for vsan 1.2.Application Clustering Solutions 1.3.Third party solutions 2. Security
More informationVMware vsphere 5.0 STORAGE-CENTRIC FEATURES AND INTEGRATION WITH EMC VNX PLATFORMS
VMware vsphere 5.0 STORAGE-CENTRIC FEATURES AND INTEGRATION WITH EMC VNX PLATFORMS A detailed overview of integration points and new storage features of vsphere 5.0 with EMC VNX platforms EMC Solutions
More informationCisco HyperFlex All-Flash Systems for Oracle Real Application Clusters Reference Architecture
Cisco HyperFlex All-Flash Systems for Oracle Real Application Clusters Reference Architecture 2017 Cisco and/or its affiliates. All rights reserved. This document is Cisco Public Information. Page 1 of
More informationFUJITSU Storage ETERNUS DX Configuration Guide -Server Connection-
FUJITSU Storage ETERNUS DX Configuration Guide -Server Connection- (SAS) for Citrix XenServer This page is intentionally left blank. Preface This manual briefly explains the operations that need to be
More informationVMware Virtual SAN. Technical Walkthrough. Massimiliano Moschini Brand Specialist VCI - vexpert VMware Inc. All rights reserved.
VMware Virtual SAN Technical Walkthrough Massimiliano Moschini Brand Specialist VCI - vexpert 2014 VMware Inc. All rights reserved. VMware Storage Innovations VI 3.x VMFS Snapshots Storage vmotion NAS
More informationIntroduction to Virtualization. From NDG In partnership with VMware IT Academy
Introduction to Virtualization From NDG In partnership with VMware IT Academy www.vmware.com/go/academy Why learn virtualization? Modern computing is more efficient due to virtualization Virtualization
More informationExam4Tests. Latest exam questions & answers help you to pass IT exam test easily
Exam4Tests http://www.exam4tests.com Latest exam questions & answers help you to pass IT exam test easily Exam : VCP510PSE Title : VMware Certified Professional 5 - Data Center Virtualization PSE Vendor
More informationHYPER-UNIFIED STORAGE. Nexsan Unity
HYPER-UNIFIED STORAGE Nexsan Unity Multipathing Best Practices Guide NEXSAN 25 E. Hillcrest Drive, Suite #150 Thousand Oaks, CA 9160 USA Printed Wednesday, January 02, 2019 www.nexsan.com Copyright 2010
More informationVMware vsphere 5.5 Professional Bootcamp
VMware vsphere 5.5 Professional Bootcamp Course Overview Course Objectives Cont. VMware vsphere 5.5 Professional Bootcamp is our most popular proprietary 5 Day course with more hands-on labs (100+) and
More informationSurveillance Dell EMC Storage with Digifort Enterprise
Surveillance Dell EMC Storage with Digifort Enterprise Configuration Guide H15230 REV 1.1 Copyright 2016-2017 Dell Inc. or its subsidiaries. All rights reserved. Published August 2016 Dell believes the
More informationSetup for Failover Clustering and Microsoft Cluster Service
Setup for Failover Clustering and Microsoft Cluster Service Update 1 ESXi 5.0 vcenter Server 5.0 This document supports the version of each product listed and supports all subsequent versions until the
More informationThe Host Server. Citrix XenServer Configuration Guide. May The authority on real-time data
The Host Server Citrix XenServer Configuration Guide May 2018 This guide provides configuration settings and considerations for Hosts running XenServer with SANsymphony. Basic XenServer storage administration
More informationVMware VMFS Volume Management VMware Infrastructure 3
Information Guide VMware VMFS Volume Management VMware Infrastructure 3 The VMware Virtual Machine File System (VMFS) is a powerful automated file system that simplifies storage management for virtual
More informationData center requirements
Prerequisites, page 1 Data center workflow, page 2 Determine data center requirements, page 2 Gather data for initial data center planning, page 2 Determine the data center deployment model, page 3 Determine
More informationVirtual Storage Console, VASA Provider, and Storage Replication Adapter for VMware vsphere
Virtual Storage Console, VASA Provider, and Storage Replication Adapter for VMware vsphere Deployment and Setup Guide for 7.1 release February 2018 215-12647_B0 doccomments@netapp.com Table of Contents
More informationDM-Multipath Guide. Version 8.2
DM-Multipath Guide Version 8.2 SBAdmin and DM-Multipath Guide The purpose of this guide is to provide the steps necessary to use SBAdmin in an environment where SAN storage is used in conjunction with
More informationAdministering VMware Virtual SAN. Modified on October 4, 2017 VMware vsphere 6.0 VMware vsan 6.2
Administering VMware Virtual SAN Modified on October 4, 2017 VMware vsphere 6.0 VMware vsan 6.2 You can find the most up-to-date technical documentation on the VMware website at: https://docs.vmware.com/
More informationEMC Integrated Infrastructure for VMware. Business Continuity
EMC Integrated Infrastructure for VMware Business Continuity Enabled by EMC Celerra and VMware vcenter Site Recovery Manager Reference Architecture Copyright 2009 EMC Corporation. All rights reserved.
More informationSurveillance Dell EMC Storage with FLIR Latitude
Surveillance Dell EMC Storage with FLIR Latitude Configuration Guide H15106 REV 1.1 Copyright 2016-2017 Dell Inc. or its subsidiaries. All rights reserved. Published June 2016 Dell believes the information
More informationIOmark-VM. VMware VSAN Intel Servers + VMware VSAN Storage SW Test Report: VM-HC a Test Report Date: 16, August
IOmark-VM VMware VSAN Intel Servers + VMware VSAN Storage SW Test Report: VM-HC-160816-a Test Report Date: 16, August 2016 Copyright 2010-2016 Evaluator Group, Inc. All rights reserved. IOmark-VM, IOmark-VDI,
More informationSetup for Failover Clustering and Microsoft Cluster Service. 17 APR 2018 VMware vsphere 6.7 VMware ESXi 6.7 vcenter Server 6.7
Setup for Failover Clustering and Microsoft Cluster Service 17 APR 2018 VMware vsphere 6.7 VMware ESXi 6.7 vcenter Server 6.7 You can find the most up-to-date technical documentation on the VMware website
More informationVirtual Storage Console, VASA Provider, and Storage Replication Adapter for VMware vsphere
Virtual Storage Console, VASA Provider, and Storage Replication Adapter for VMware vsphere Deployment and Setup Guide for 7.2.1 release January 2019 215-13884_A0 doccomments@netapp.com Table of Contents
More informationDell EMC Ready Solutions for Oracle: Design for Dell EMC Unity All Flash Unified Storage
Dell EMC Ready Solutions for Oracle: Design for Dell EMC Unity All Flash Unified Storage With Dell EMC PowerEdge R840 and R640, RHEL 7.4, ESXi 6.5, and Oracle Database 12cR2 and 18cR1 February 2019 H17577
More informationSetup for Failover Clustering and Microsoft Cluster Service. Update 1 16 OCT 2018 VMware vsphere 6.7 VMware ESXi 6.7 vcenter Server 6.
Setup for Failover Clustering and Microsoft Cluster Service Update 1 16 OCT 2018 VMware vsphere 6.7 VMware ESXi 6.7 vcenter Server 6.7 You can find the most up-to-date technical documentation on the VMware
More informationMongoDB on Kaminario K2
MongoDB on Kaminario K2 June 2016 Table of Contents 2 3 3 4 7 10 12 13 13 14 14 Executive Summary Test Overview MongoPerf Test Scenarios Test 1: Write-Simulation of MongoDB Write Operations Test 2: Write-Simulation
More informationThe Contents and Structure of this Manual. This document is composed of the following four chapters.
Preface This document briefly explains the operations that need to be performed by the user in order to connect an ETERNUS2000 model 100 or 200, ETERNUS4000 model 300, 400, 500, or 600, or ETERNUS8000
More informationVeritas Storage Foundation in a VMware ESX Environment
Veritas Storage Foundation in a VMware ESX Environment Linux and Solaris x64 platforms January 2011 TABLE OF CONTENTS Introduction... 3 Executive Summary... 4 Overview... 5 Virtual Machine File System...
More informationBacula Systems Virtual Machine Performance Backup Suite
Bacula Systems Virtual Machine Performance Backup Suite Bacula Systems VM Performance Backup Suite is part of Bacula Enterprise Edition. It comprises of modules that can be utilized to perfectly fit any
More informationDell EMC SAN Storage with Video Management Systems
Dell EMC SAN Storage with Video Management Systems Surveillance October 2018 H14824.3 Configuration Best Practices Guide Abstract The purpose of this guide is to provide configuration instructions for
More informationOverview. Prerequisites. VMware vsphere 6.5 Optimize, Upgrade, Troubleshoot
VMware vsphere 6.5 Optimize, Upgrade, Troubleshoot Course Name Format Course Books vsphere Version Delivery Options Remote Labs Max Attendees Requirements Lab Time Availability May, 2017 Suggested Price
More informationDisclaimer This presentation may contain product features that are currently under development. This overview of new technology represents no commitme
STO1926BU A Day in the Life of a VSAN I/O Diving in to the I/O Flow of vsan John Nicholson (@lost_signal) Pete Koehler (@vmpete) VMworld 2017 Content: Not for publication #VMworld #STO1926BU Disclaimer
More informationSurveillance Dell EMC Storage with Cisco Video Surveillance Manager
Surveillance Dell EMC Storage with Cisco Video Surveillance Manager Configuration Guide H14001 REV 1.1 Copyright 2015-2017 Dell Inc. or its subsidiaries. All rights reserved. Published May 2015 Dell believes
More informationSetup for Failover Clustering and Microsoft Cluster Service
Setup for Failover Clustering and Microsoft Cluster Service Update 1 VMware vsphere 6.5 VMware ESXi 6.5 vcenter Server 6.5 This document supports the version of each product listed and supports all subsequent
More informationIOmark- VDI. IBM IBM FlashSystem V9000 Test Report: VDI a Test Report Date: 5, December
IOmark- VDI IBM IBM FlashSystem V9000 Test Report: VDI- 151205- a Test Report Date: 5, December 2015 Copyright 2010-2015 Evaluator Group, Inc. All rights reserved. IOmark- VDI, IOmark- VM, VDI- IOmark,
More informationvsphere Storage Update 1 Modified 16 JAN 2018 VMware vsphere 6.5 VMware ESXi 6.5 vcenter Server 6.5
Update 1 Modified 16 JAN 2018 VMware vsphere 6.5 VMware ESXi 6.5 vcenter Server 6.5 You can find the most up-to-date technical documentation on the VMware website at: https://docs.vmware.com/ If you have
More information"Charting the Course... VMware vsphere 6.5 Optimize, Upgrade, Troubleshoot. Course Summary
Course Summary Description This powerful 5-day class provides an in-depth look at vsphere 6.5. In this course, cover how to deploy vsphere 6.5, how to optimize it including VMs, ESXi hosts, vcenter Server
More informationVeritas Dynamic Multi-Pathing for VMware 6.0 Chad Bersche, Principal Technical Product Manager Storage and Availability Management Group
Veritas Dynamic Multi-Pathing for VMware 6.0 Chad Bersche, Principal Technical Product Manager Storage and Availability Management Group Dynamic Multi-Pathing for VMware 1 Agenda 1 Heterogenous multi-pathing
More informationDell EMC SC Series Arrays and Oracle
Dell EMC SC Series Arrays and Oracle Abstract Best practices, configuration options, and sizing guidelines for Dell EMC SC Series storage in Fibre Channel environments when deploying Oracle. July 2017
More informationPerformance Report: Multiprotocol Performance Test of VMware ESX 3.5 on NetApp Storage Systems
NETAPP TECHNICAL REPORT Performance Report: Multiprotocol Performance Test of VMware ESX 3.5 on NetApp Storage Systems A Performance Comparison Study of FC, iscsi, and NFS Protocols Jack McLeod, NetApp
More informationIOmark- VM. HP HP ConvergedSystem 242- HC StoreVirtual Test Report: VM- HC b Test Report Date: 27, April
IOmark- VM HP HP ConvergedSystem 242- HC StoreVirtual Test Report: VM- HC- 150427- b Test Report Date: 27, April 2015 Copyright 2010-2015 Evaluator Group, Inc. All rights reserved. IOmark- VM, IOmark-
More informationIOmark-VM. Datrium DVX Test Report: VM-HC b Test Report Date: 27, October
IOmark-VM Datrium DVX Test Report: VM-HC-171024-b Test Report Date: 27, October 2017 Copyright 2010-2017 Evaluator Group, Inc. All rights reserved. IOmark-VM, IOmark-VDI, VDI-IOmark, and IOmark are trademarks
More informationNovell Cluster Services Implementation Guide for VMware
www.novell.com/documentation Novell Cluster Services Implementation Guide for VMware Open Enterprise Server 2015 SP1 May 2016 Legal Notices For information about legal notices, trademarks, disclaimers,
More informationEMC Performance Optimization for VMware Enabled by EMC PowerPath/VE
EMC Performance Optimization for VMware Enabled by EMC PowerPath/VE Applied Technology Abstract This white paper is an overview of the tested features and performance enhancing technologies of EMC PowerPath
More informationAGENDA Settings, configuration options, etc designed for every allflash array regardless of vendor Our philosophy What you need to consider What you d
SER2355BE Best Practices for All- Flash Arrays with VMware vsphere Vaughn Stewart, VP, Enterprise Architect, Pure Storage Cody Hosterman, Technical Director, Pure Storage #VMworld #SER2355BE AGENDA Settings,
More informationThe Host Server. Citrix XenServer Configuration Guide. November The Data Infrastructure Software Company
The Host Server Citrix XenServer Configuration Guide November 2017 This guide provides configuration settings and considerations for SANsymphony Hosts running Citrix XenServer. Basic Citrix XenServer administration
More informationvsan All Flash Features First Published On: Last Updated On:
First Published On: 11-07-2016 Last Updated On: 11-07-2016 1 1. vsan All Flash Features 1.1.Deduplication and Compression 1.2.RAID-5/RAID-6 Erasure Coding Table of Contents 2 1. vsan All Flash Features
More informationHyperFlex. Simplifying your Data Center. Steffen Hellwig Data Center Systems Engineer June 2016
HyperFlex Simplifying your Data Center Steffen Hellwig Data Center Systems Engineer June 2016 Key Challenges You Face Business Speed Operational Simplicity Cloud Expectations APPS Hyperconvergence First
More informationSizing and Best Practices for Deploying Oracle 11g Transaction Processing Databases on Dell EqualLogic Storage A Dell Technical Whitepaper
Dell EqualLogic Best Practices Series Sizing and Best Practices for Deploying Oracle 11g Transaction Processing Databases on Dell EqualLogic Storage A Dell Technical Whitepaper Chidambara Shashikiran Storage
More informationLIFECYCLE MANAGEMENT FOR ORACLE RAC 12c WITH EMC RECOVERPOINT
WHITE PAPER LIFECYCLE MANAGEMENT FOR ORACLE RAC 12c WITH EMC RECOVERPOINT Continuous protection for Oracle environments Simple, efficient patch management and failure recovery Minimal downtime for Oracle
More informationWhite Paper Effects of the Deduplication/Compression Function in Virtual Platforms ETERNUS AF series and ETERNUS DX S4/S3 series
White Paper Effects of the Deduplication/Compression Function in Virtual Platforms ETERNUS AF series and ETERNUS DX S4/S3 series Copyright 2017 FUJITSU LIMITED Page 1 of 17 http://www.fujitsu.com/eternus/
More informationDevice Mapper Multipath Enablement Kit for HP StorageWorks Disk Arrays v4.4.1 release notes
Device Mapper Multipath Enablement Kit for HP StorageWorks Disk Arrays v4.4.1 release notes April 2010 H Legal and notice information Copyright 2009-2010 Hewlett-Packard Development Company, L.P. Overview
More informationFUJITSU Storage ETERNUS AF series and ETERNUS DX S4/S3 series Non-Stop Storage Reference Architecture Configuration Guide
FUJITSU Storage ETERNUS AF series and ETERNUS DX S4/S3 series Non-Stop Storage Reference Architecture Configuration Guide Non-stop storage is a high-availability solution that combines ETERNUS SF products
More informationPerformance Testing December 16, 2017
December 16, 2017 1 1. vsan Performance Testing 1.1.Performance Testing Overview Table of Contents 2 1. vsan Performance Testing Performance Testing 3 1.1 Performance Testing Overview Performance Testing
More informationData Protection Guide
SnapCenter Software 4.0 Data Protection Guide For VMs and Datastores using the SnapCenter Plug-in for VMware vsphere March 2018 215-12931_C0 doccomments@netapp.com Table of Contents 3 Contents Deciding
More informationVirtual Storage Console, VASA Provider, and Storage Replication Adapter for VMware vsphere
Virtual Storage Console, VASA Provider, and Storage Replication Adapter for VMware vsphere Administration Guide for 7.2 release June 2018 215-13169_A0 doccomments@netapp.com Table of Contents 3 Contents
More informationHP 3PAR StoreServ Storage and VMware vsphere 5 best practices
Technical white paper HP 3PAR StoreServ Storage and VMware vsphere 5 best practices Table of contents Executive summary... 3 Configuration... 4 Fibre Channel... 4 Multi-pathing considerations... 5 HP 3PAR
More informationTechnical White Paper: IntelliFlash Architecture
Executive Summary... 2 IntelliFlash OS... 3 Achieving High Performance & High Capacity... 3 Write Cache... 4 Read Cache... 5 Metadata Acceleration... 5 Data Reduction... 6 Enterprise Resiliency & Capabilities...
More informationExam Name: VMware Certified Professional on vsphere 5 (Private Beta)
Vendor: VMware Exam Code: VCP-511 Exam Name: VMware Certified Professional on vsphere 5 (Private Beta) Version: DEMO QUESTION 1 The VMware vcenter Server Appliance has been deployed using default settings.
More informationConfiguration Guide -Server Connection-
FUJITSU Storage ETERNUS DX, ETERNUS AF Configuration Guide -Server Connection- (Fibre Channel) for VMware ESX This page is intentionally left blank. Preface This manual briefly explains the operations
More informationIOmark- VM. HP MSA P2000 Test Report: VM a Test Report Date: 4, March
IOmark- VM HP MSA P2000 Test Report: VM- 140304-2a Test Report Date: 4, March 2014 Copyright 2010-2014 Evaluator Group, Inc. All rights reserved. IOmark- VM, IOmark- VDI, VDI- IOmark, and IOmark are trademarks
More informationvsan Planning and Deployment Update 1 16 OCT 2018 VMware vsphere 6.7 VMware vsan 6.7
vsan Planning and Deployment Update 1 16 OCT 2018 VMware vsphere 6.7 VMware vsan 6.7 You can find the most up-to-date technical documentation on the VMware website at: https://docs.vmware.com/ If you have
More information"Charting the Course... VMware vsphere 6.7 Boot Camp. Course Summary
Description Course Summary This powerful 5-day, 10 hour per day extended hours class is an intensive introduction to VMware vsphere including VMware ESXi 6.7 and vcenter 6.7. This course has been completely
More informationW H I T E P A P E R. Comparison of Storage Protocol Performance in VMware vsphere 4
W H I T E P A P E R Comparison of Storage Protocol Performance in VMware vsphere 4 Table of Contents Introduction................................................................... 3 Executive Summary............................................................
More informationW H I T E P A P E R. What s New in VMware vsphere 4: Performance Enhancements
W H I T E P A P E R What s New in VMware vsphere 4: Performance Enhancements Scalability Enhancements...................................................... 3 CPU Enhancements............................................................
More informationOracle Real Application Clusters on VMware vsan January 08, 2018
Oracle Real Application Clusters on VMware vsan January 08, 2018 1 Table of Contents 1. Executive Summary 1.1.Business Case 1.2.Solution Overview 1.3.Key Results 2. vsan Oracle RAC Reference Architecture
More informationRed Hat Enterprise Linux 7 DM Multipath
Red Hat Enterprise Linux 7 DM Multipath DM Multipath Configuration and Administration Steven Levine Red Hat Enterprise Linux 7 DM Multipath DM Multipath Configuration and Administration Steven Levine
More informationAdministering VMware vsan. Modified on October 4, 2017 VMware vsphere 6.5 VMware vsan 6.6.1
Administering VMware vsan Modified on October 4, 2017 VMware vsphere 6.5 VMware vsan 6.6.1 You can find the most up-to-date technical documentation on the VMware website at: https://docs.vmware.com/ If
More informationVMware vsphere 6.5 Boot Camp
Course Name Format Course Books 5-day, 10 hour/day instructor led training 724 pg Study Guide fully annotated with slide notes 243 pg Lab Guide with detailed steps for completing all labs 145 pg Boot Camp
More informationFunctional Testing of SQL Server on Kaminario K2 Storage
Functional Testing of SQL Server on Kaminario K2 Storage September 2016 TABLE OF CONTENTS 2 3 4 11 12 14 Executive Summary Introduction to Kaminario K2 Functionality Tests for SQL Server Summary Appendix:
More informationUsing Dell EqualLogic and Multipath I/O with Citrix XenServer 6.2
Using Dell EqualLogic and Multipath I/O with Citrix XenServer 6.2 Dell Engineering Donald Williams November 2013 A Dell Deployment and Configuration Guide Revisions Date November 2013 Description Initial
More informationMicrosoft Applications on Nutanix
Microsoft Applications on Nutanix Lukas Lundell Sachin Chheda Chris Brown #nextconf #AW105 Agenda Why MS Exchange or any vbca on Nutanix Exchange Solution Design Methodology SharePoint and Unified Communications
More informationDeep Dive on SimpliVity s OmniStack A Technical Whitepaper
Deep Dive on SimpliVity s OmniStack A Technical Whitepaper By Hans De Leenheer and Stephen Foskett August 2013 1 Introduction This paper is an in-depth look at OmniStack, the technology that powers SimpliVity
More informationEMC VSPEX FOR VIRTUALIZED ORACLE DATABASE 12c OLTP
IMPLEMENTATION GUIDE EMC VSPEX FOR VIRTUALIZED ORACLE DATABASE 12c OLTP VMware vsphere 5.5 Red Hat Enterprise Linux 6.4 EMC VSPEX Abstract This describes the high-level steps and best practices required
More informationFUJITSU Storage ETERNUS AF series and ETERNUS DX S4/S3 series
Utilizing VMware vsphere Virtual Volumes (VVOL) with the FUJITSU Storage ETERNUS AF series and ETERNUS DX S4/S3 series Reference Architecture for Virtual Platforms (15VM/iSCSI) The ETERNUS AF series and
More informationNutanix White Paper. Hyper-Converged Infrastructure for Enterprise Applications. Version 1.0 March Enterprise Applications on Nutanix
Nutanix White Paper Hyper-Converged Infrastructure for Enterprise Applications Version 1.0 March 2015 1 The Journey to Hyper-Converged Infrastructure The combination of hyper-convergence and web-scale
More informationEXAM - VCP5-DCV. VMware Certified Professional 5 Data Center Virtualization (VCP5-DCV) Exam. Buy Full Product.
VMware EXAM - VCP5-DCV VMware Certified Professional 5 Data Center Virtualization (VCP5-DCV) Exam Buy Full Product http://www.examskey.com/vcp5-dcv.html Examskey VMware VCP5-DCV exam demo product is here
More information