A Technical Overview of New Features for Automatic Storage Management in Oracle Database 18c ORACLE WHITE PAPER FEBRUARY 2018


Disclaimer

The following is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, and timing of any features or functionality described for Oracle's products remains at the sole discretion of Oracle.

Table of Contents

Disclaimer
Introduction
Automatic Storage Management Overview and Background
Total Storage Management in Oracle 11g Database Release 2
New Features Beginning in Oracle 12c Database Release 1
  Oracle Flex ASM
New Features in Oracle 12c Database Release 2
  ASM IO Services
  Database-Oriented Storage Management
  File Groups
  Quota Groups
  Extended Disk Groups
New Features in Oracle Database 18c
  ASM Database Clones
  Example of Making an ASM Clone of a PDB
  Changing an ASM Clone's Quota Group
Conclusion

Introduction

Automatic Storage Management (ASM) has been one of the most successful and widely adopted Oracle Database features. ASM was introduced in Oracle Database 10g and, through its many releases, evolved to meet the changing needs of both the Oracle Database and the hardware environments in which the Oracle Database operates. This paper presents the new ASM features in Oracle Database 18c through a review of the evolution of ASM from Oracle Database 10g to Oracle Database 18c. If the reader's interest is primarily in the latest enhancements, they can skip to the section titled New Features in Oracle Database 18c.

Automatic Storage Management Overview and Background

Automatic Storage Management (ASM) is a purpose-built file system and volume manager for the Oracle Database. ASM was first released with Oracle Database 10g. For Oracle databases, ASM simplifies both file system and volume management. In addition to simplifying storage management, ASM improves file system scalability, performance, and database availability. These benefits hold for single-instance databases as well as for Oracle Real Application Clusters (Oracle RAC) databases. With ASM, the customer does not need a third-party file system or volume manager. Storage is provided to ASM for management, and ASM effectively organizes the data in Disk Groups.

Figure 1. ASM replacing O/S Logical Volume Manager and File System

Because of its innovative rebalancing of data in Disk Groups, ASM maintains best-in-class performance by distributing data evenly across all storage resources whenever the physical storage configuration changes. This rebalancing feature provides an even distribution of IO and ensures optimal performance.
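The rebalance that follows a storage change can also be driven and observed directly. A minimal sketch from the ASM instance follows; the disk path and power level are illustrative assumptions, not part of the original example:

```sql
-- Adding a disk to a Disk Group triggers an automatic rebalance.
-- The POWER clause (illustrative value) controls how aggressively
-- ASM redistributes extents across all disks in the Disk Group.
ALTER DISKGROUP data ADD DISK '/dev/oracleasm/disks/DISK5' REBALANCE POWER 8;

-- Monitor rebalance progress from the ASM instance.
SELECT operation, state, power, est_minutes
FROM   v$asm_operation;
```

The same rebalance behavior applies when disks are dropped or resized, which is what makes online storage reconfiguration possible.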
Furthermore, ASM scales to very large configurations of both databases and storage without compromising functionality or performance. ASM is built to maximize database availability. For example, ASM provides self-healing automatic mirror reconstruction and resynchronization, and rolling upgrades. ASM also supports dynamic and online storage reconfigurations. Customers realize significant cost savings and achieve lower total cost of ownership because of

features such as just-in-time provisioning and a clustered pool of storage, making ASM ideal for database consolidation. ASM provides all of this without additional licensing fees. In summary, ASM is a file system and volume manager optimized for Oracle database files, providing:
» Simplified and automated storage management
» Increased storage utilization, uptime, and agility
» Predictable performance and availability service levels

Total Storage Management in Oracle 11g Database Release 2

With the release of Oracle Database 11g Release 2, Oracle added the ASM Cluster File System (ACFS) to complement ASM's file management. ACFS provides the same level of storage management for general-purpose files that ASM provides for database files. Specifically, ACFS simplifies and automates storage management functions and increases storage utilization, uptime, and agility to deliver predictable performance and availability for conventional file data stored outside an Oracle Database.
» ACFS includes the ASM Dynamic Volume Manager (ADVM) as a volume manager for the ASM Cluster File System.
» The ASM Cluster File System provides advanced data services and security features for managing general-purpose files.
» ASM, ACFS, and Oracle Clusterware are bundled as a complete package called Oracle Grid Infrastructure.

Figure 2. ASM System Layers

Oracle Grid Infrastructure provides an integrated foundation for database and general-purpose files as well as an infrastructure for clustered environments. Oracle Grid Infrastructure streamlines management of volumes, file systems, and cluster configurations, eliminating the need for multiple third-party software layers that add complexity and cost.

New Features Beginning in Oracle 12c Database Release 1

ASM in Oracle Database 12c Release 1 addressed storage management needs for environments with larger cluster configurations common to what has become called private clouds. ASM features introduced in Oracle Database 12c Release 1 enhanced storage management by seamlessly adapting to the frequently changing cluster configurations expected in cloud-like environments. Additionally, other features introduced enhancements for Oracle's engineered systems, such as Exadata and the Oracle Database Appliance.

Oracle Flex ASM

The most significant enhancement for ASM in Oracle Database 12c Release 1 is a set of features collectively called Oracle Flex ASM. Oracle Flex ASM provides critical capabilities required for clustering in large enterprise environments. These environments typically deploy database clusters of varying sizes that not only have stringent performance and reliability requirements, but must also rapidly adapt to changing workloads with minimal management. Oracle Flex ASM fundamentally changes the ASM cluster architecture. Before the introduction of Oracle Flex ASM, an ASM instance ran on every server in a cluster. Each ASM instance communicated with the other ASM instances in the cluster, and collectively they presented shared Disk Groups to the database clients running in the cluster. The collection of ASM instances formed an ASM cluster. If an ASM instance failed in this configuration, then all the database instances running on the same server as the failing ASM instance also failed. The gray boxes in figure 3 represent ASM instances in a pre-12c environment.

Figure 3. ASM Cluster

Oracle Flex ASM changed the architecture regarding the ASM cluster organization. In Oracle Database 12c Release 1, a smaller number of ASM instances run on a subset of servers in a cluster. The number of ASM instances running is called the ASM cardinality.
If a server running an ASM instance fails, Oracle Clusterware starts a replacement ASM instance on a different server to maintain the cardinality. If an ASM instance fails for whatever reason, the active Oracle Database 12c instances that were relying on that ASM instance reconnect to a surviving ASM instance on a different server. Furthermore, database instances are connection load balanced across the set of available ASM instances. The default ASM cardinality is 3, but that can be changed with a Clusterware command.
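The paper does not show the command itself; a sketch of how the cardinality is typically inspected and changed with the srvctl utility follows (requires an Oracle Grid Infrastructure installation; the count value is illustrative):

```shell
# Show the current Flex ASM configuration, including the instance count
srvctl config asm

# Raise the number of ASM instances Clusterware maintains to 4
srvctl modify asm -count 4

# Or run an ASM instance on every node, as in the pre-12c architecture
srvctl modify asm -count ALL
```

Clusterware then starts or stops ASM instances as needed to honor the configured cardinality.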

Figure 4. Flex ASM
Figure 4a. Failing ASM Instance

Figure 4 illustrates the architecture with Oracle Flex ASM. There are a reduced number of ASM instances on selected servers in the cluster, and Oracle Database 12c clients can connect across the network to ASM instances on different servers. Furthermore, Oracle Database 12c clients can fail over to a surviving server with an ASM instance if a server with an ASM instance fails, all without disruption to the database client.

New Features in Oracle 12c Database Release 2

The goal of the ASM features introduced in Oracle Database 12c Release 1 was to improve scaling and redundancy for large cluster environments. Scaling for clusters is further enhanced in Oracle Database 12c Release 2 with the introduction of Cluster Domains. A Cluster Domain provides a new architecture for clustering that centralizes a set of common core services utilized by multiple independent clusters, reducing overhead and streamlining manageability (see figure 5). It simplifies management of a large number of clusters and provides tools for enabling standardization. For example, provisioning, patching, diagnosability, and storage management tasks are centralized and standardized. The key element in a Cluster Domain is the Domain Services Cluster (DSC). The DSC is a central point of management for a multi-clustered environment. The DSC provides a Management Repository that collects key statistics and diagnostic information useful to the individual Member Clusters and is the basis for the new Autonomous Health Framework.

Figure 5. Cluster Domain

Simply stated, a Cluster Domain is a collection of independent clusters, called Member Clusters, and a centralized Domain Services Cluster (DSC) providing centralized ASM and other management services. When a Cluster Domain is deployed, the DSC can eliminate the requirement for ASM instances to run on the Member Clusters by utilizing shared ASM storage services in the DSC. However, while Member Clusters can use the ASM services in the DSC, Member Clusters are also able, for storage isolation reasons, to host their own ASM environment. This mode is shown in figure 5 with the left-most Member Cluster.

ASM IO Services

ASM Services available in Oracle Database 12c Release 2 enable Member Clusters to access data in a centralized ASM Disk Group environment of the Domain Services Cluster (DSC). When ASM Services in a DSC are used, there are two connectivity options by which Member Clusters can access the shared Disk Groups residing in the DSC. The first is for the Member Cluster to have shared SAN attachment to the DSC storage, as shown in figure 6a. The database instances in the Member Cluster rely on ASM instance(s) in the DSC for coordinating access through the SAN.

Figure 6a. SAN Attached Storage
Figure 6b. Network Attached Storage

The second way in which a Member Cluster can access a DSC's Disk Group is through ASM IO Services using the ASM Storage Network. This is shown in figure 6b. In this mode, there is no physical SAN connection between the Member Clusters and the DSC storage. Member Clusters access data through a private network, which can be the same private network used by Grid Infrastructure or a separate dedicated private network. This model of data access is useful for test and development configurations operating in Member Clusters that access production or test data shared through the DSC. The ASM Storage Network reduces the cost of storage through network attachment rather than more expensive storage attachment, such as Fibre Channel switched networks. Using a centralized ASM environment operating in a DSC allows Member Clusters to share data and access shared Disk Groups. Customers can have a production database in the DSC, clone that database using the new ASM database-cloning feature, and then use the Rapid Home Provisioning feature to quickly create a new test and development Member Cluster environment. Customers can quickly and easily create many test clusters without specialized or dedicated infrastructure. The key to these capabilities is the new ASM Services available in Oracle Database 12c Release 2.

Database-Oriented Storage Management

When ASM was introduced, its challenge was to address the management complexity associated with storage used for large database environments. Before ASM, to achieve optimal performance, database administrators had to determine the best storage configurations for various database objects. For example, a tablespace belonging to a particular table or database might best be placed on a particular file system providing the needed performance for the application. As databases grew in size, administrators had an ongoing challenge of making changes in the physical storage configuration to keep pace with changing database workloads.
ASM simplified this effort by allowing Disk Groups to contain all the database objects without regard to the underlying storage configuration. This meant that a particular Disk Group contained a wide range of database files, and often many different databases. This organization can be described as Disk Group-oriented storage management. It is represented in figure 7, with many different databases and their files sharing a common Disk Group. In this model there is no discrimination between different databases or between the needs of different databases.

Figure 7: Disk Group-oriented Storage

Managing storage by allocating resources to a few Disk Groups greatly improved manageability for large databases and for organizations deploying a large number of databases. It eliminated the piecemeal mode of the past

where administrators had to manage the relationship between database objects and storage at fine granularity. However, this coarse-grained management style limited the flexibility for effectively consolidating a large number of databases with different requirements into just a few simple Disk Groups with a single set of characteristics, such as redundancy. It was sometimes observed that customers deployed many Disk Groups to contain databases with different requirements. For example, production databases might be separated from test and development databases because the latter did not have the same performance or reliability requirements as the production databases. While separating databases into different Disk Groups provides finer granularity of control, it works against the objective of reducing management overhead through consolidation.

File Groups

For these reasons, Oracle introduced in Oracle Database 12c Release 2 the concept of database-oriented storage management with a new type of Disk Group called a Flex Disk Group. With Flex Disk Groups, all files belonging to an individual database (or, with multitenant, a PDB) are collectively identified with a new object called a File Group. From here forward, for simplicity, we refer to non-multitenant databases, and multitenant CDBs and PDBs, as simply databases. A File Group logically contains the files associated with a single database. A File Group provides the means for referring to all the files of a database that reside within a single Disk Group. A single database may have multiple File Groups residing in different Disk Groups. Command syntax referring to a File Group refers to all the files belonging to that File Group. Figure 8 below shows this logical grouping.

When a database is initially created, if an existing File Group already has a name matching the name of the new database, then that existing File Group is used for recording the new file names. However, if there is no existing File Group for the database when it is created, a new File Group is created for it.

Figure 8: Database-oriented Storage Management

Different File Groups within a Flex Disk Group can have different redundancies, and the redundancies can be changed as a situation requires. For example, a production database can use High Redundancy while a test database, in the same Disk Group, has Normal Redundancy. If desired, the redundancy of a database, i.e. the

File Group, can be changed online. When a File Group's redundancy is changed, ASM invokes an operation similar to a rebalance that effects the redundancy change in storage. File Group names are unique within a Disk Group. However, different Disk Groups can have File Groups with the same name. This allows a database to span Disk Groups without creating a naming conflict.

Quota Groups

An important feature required for consolidating databases, from a storage management perspective, is storage quota management. Without quota management, a single database can consume all the space in a Disk Group. Flex Disk Groups offer a new feature called Quota Groups. A Quota Group is a logical container specifying the amount of Disk Group space that one or more File Groups are permitted to consume. As an example, in figure 9 below, Quota Group A contains File Groups DB1 and DB2, whereas Quota Group B contains File Group DB3. The databases in Quota Group A are then limited by the amount of space specified for that Quota Group. Every Flex Disk Group has a default Quota Group. If a File Group (i.e. a database) is not assigned a Quota Group, it is assigned to the default Quota Group. Furthermore, the sum of space represented by all the Quota Groups may exceed the total physical space available. Consequently, Quota Groups represent a logical capacity limit on available space. Changing Quota Groups requires administrative privileges. An administrator can create a set of Quota Groups into which subsequent databases are allocated. Quota Groups facilitate consolidating many databases into a single Flex Disk Group by preventing any single database from consuming more than its fair share of storage and inhibiting the operation of the other databases.

Figure 9: Quota Space Management
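Quota Groups and File Group properties are managed with ALTER DISKGROUP. A hedged sketch of the relevant statements follows; the Disk Group, File Group, and Quota Group names, and the quota size, are illustrative assumptions:

```sql
-- Create a Quota Group with a 100 GB logical limit in a Flex Disk Group
ALTER DISKGROUP data ADD QUOTAGROUP qg_test SET 'quota' = '100G';

-- Move a database's File Group into that Quota Group
ALTER DISKGROUP data MODIFY FILEGROUP testdb SET 'quota_group' = 'qg_test';

-- Change the File Group's redundancy online; ASM performs a
-- rebalance-like operation to apply the change in storage
ALTER DISKGROUP data MODIFY FILEGROUP testdb SET 'redundancy' = 'normal';
```

Because quotas are logical limits, statements like these succeed even when the sum of all quotas exceeds the physical capacity of the Disk Group.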

Extended Disk Groups

With an enterprise's core operations at stake, it is vital that the underlying storage for critical databases have the highest reliability and robustness. The design of ASM, from the beginning, has been to eliminate data loss after a hardware failure and ensure ongoing operation. For example, Normal and High Redundancy Disk Groups provide data access in the face of storage failures. Early implementers of ASM saw an opportunity to extend an Oracle RAC cluster's availability by deploying RAC clusters across two closely located datacenters. This design used ASM mirroring across the datacenters so that data availability is ensured in spite of a complete failure of a datacenter. Figure 11a represents the architecture of what became known as Extended RAC using ASM mirroring. The benefit of the Extended RAC architecture is that ASM mirrors file extents across two different Failure Groups, with each Failure Group located in a different datacenter. If one datacenter fails, then all the file extents required by the RAC instances in the surviving datacenter remain available in that datacenter.

Figure 11a: Extended RAC

ASM in Oracle Database 12c Release 2 extends the utility of Extended RAC with a new feature called Extended Disk Groups. Extended Disk Groups eliminate the one limitation of Extended RAC, which was that a Disk Group in an Extended RAC implementation could have at most two Failure Groups, each in a different datacenter. Extended Disk Groups, which are an extension of Flex Disk Groups, now enable multiple Failure Groups within a datacenter or site. This means that more than one copy of a file's extents can exist within a datacenter, enabling mirroring within a datacenter as well as across datacenters. In figure 11b, the highlighted file extent is replicated across Failure Groups within a datacenter as well as across the two datacenters. Additionally, Extended Disk Groups now support three datacenters and allow the use of High Redundancy. Finally, with Extended Disk Groups, an Extended RAC configuration can be deployed on Exadata within the limitations of InfiniBand.

Figure 11b: Extended Disk Groups

New Features in Oracle Database 18c

ASM Database Clones

A storage system's ability to rapidly replicate data for deploying test and development systems is well understood and appreciated. Most storage-based replication technologies utilize either mirror splitting within storage arrays or snapshot replication in file systems. ASM, with Flex Disk Groups, now provides the ability to create near-instantaneous copies of complete databases. These database copies can be used for test and development or, when used with an Exadata system, can provide a read-only master for Exadata snapshot copies. The advantage of ASM database clones, compared with storage array-based replication, is that ASM database clones replicate databases rather than generic files or blocks of physical storage. Storage array or file system-based replication, in a database environment, requires coordinating the database objects being replicated with the underlying technology doing the replication. With ASM Database Clones, the administrator does not need to worry about the physical storage layout. This mode of replication is another aspect of the database-oriented storage management provided with Flex Disk Groups.

Note: Clone functionality has two types: cloning for multitenant databases and cloning for non-multitenant databases. Oracle Database 18c Release 1 provides clones for multitenant databases, i.e. PDBs. A future update will include functionality for non-multitenant databases.

ASM database cloning works by leveraging ASM redundancy. Previously, as a protection against data loss during hardware failure, ASM provided up to two additional redundant copies of a file's extents. Flex Disk Groups can now provide up to five redundant copies, in which one or more of the copies can be split off to provide a near-instantaneous replica. The process involves two phases: the first is the replication phase, and the second is the near-instantaneous splitting of the file extent copies, providing a distinct replica that is independent of the original. When an ASM database clone is made, all the files associated with the database are split together and provide an

independent database. Figure 10 represents the splitting of the files for DB3 providing a separate and independent database DB3a. Flex Diskgroup Database Clone Flex Diskgroup DB2 File 4 Quota Group A DB3 Quota Group B DB3a Figure 10: Database Clones Example of Making an Clone of a PDB The following procedures are for illustration purposes and highlight using cloning for a PDB database. The steps were made with a default database created using DBCA. A simple PDB is created called PDB. From SQLPLUS connected to instance as sysasm (Step 1) SQL> alter diskgroup data set attribute 'compatible.asm' = '18.0'; Diskgroup altered. SQL> alter diskgroup data set attribute 'compatible.rdbms' = '18.0'; Diskgroup altered. In this step, the and RDBMS Disk Group compatibility attributes are advanced to 18.0. This is minimum level required for creating database clones. Using CMD to display File Groups (Step 2) asmcmd lsfg File Group Disk Group Quota Group Used Quota MB Client Name Client Type DEFAULT_FILEGROUP DATA GENERIC 0 ORCL_PDB$SEED DATA GENERIC 1584 ORCL_PDB$SEED DATABASE ORCL_PDB DATA GENERIC 1600 ORCL_PDB DATABASE ORCL_CDB$ROOT DATA GENERIC 9504 ORCL_CDB$ROOT DATABASE CMD from the instance displays the current File Groups. ORCL_PDB is the File Group created when the PDB database was initially created. ROOT and SEED File Groups are also shown. 11 A TECHNICAL OVERVIEW OF NEW FEATURES FOR AUTOMATIC STORAGE Management IN ORACLE DATABASE 18C
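For context on the prerequisite in Step 1: advancing compatibility attributes is only needed for an existing disk group. The following is a hedged sketch, not taken from the paper, of creating a FLEX redundancy disk group from scratch; the disk paths and attribute values are assumptions for illustration.

```sql
-- Hypothetical example: creating a FLEX redundancy disk group, which
-- manages redundancy per File Group and is the basis for Flex Disk
-- Group features such as database clones. Disk paths are assumed.
CREATE DISKGROUP data FLEX REDUNDANCY
  DISK '/dev/oracleasm/disk1',
       '/dev/oracleasm/disk2',
       '/dev/oracleasm/disk3'
  ATTRIBUTE 'compatible.asm'   = '18.0',
            'compatible.rdbms' = '18.0';
```

A disk group created this way already satisfies the 18.0 compatibility requirement, so the ALTER DISKGROUP statements in Step 1 would not be necessary.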

From SQLPLUS connected to the database instance as SYSDBA

SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 PDB                            READ WRITE NO

The PDBs that exist before a database clone is created are shown from the database instance.

From SQLPLUS connected to the database instance as SYSDBA (Step 3)

SQL> alter session set container = pdb;
Session altered.

SQL> alter pluggable database pdb prepare mirror copy pdbcopy;
Pluggable database altered.

The first phase of creating a clone is a prepare statement. This step takes the most time; however, the prepare phase can be executed well in advance of splitting a clone. After the prepare statement, the clone is attached to the source PDB copy and is kept current with the source.

SQL> conn / as sysdba
Connected.

SQL> create pluggable database pdb1 from pdb using mirror copy pdbcopy;
Pluggable database created.

The second phase of creating a clone is a create pluggable database statement. This phase is relatively quick, and after it completes, a new independent File Group exists and a new PDB is created. During this step you might get an ORA-59003 error indicating the copy is not ready to be split. If so, wait a few minutes and try again.

From SQLPLUS connected to the database instance (Step 4)

SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 PDB                            READ WRITE NO
         5 PDB1                           MOUNTED

Using ASMCMD to display File Groups

asmcmd lsfg
File Group         Disk Group  Quota Group  Used Quota MB  Client Name    Client Type
DEFAULT_FILEGROUP  DATA        GENERIC      0
ORCL_PDB$SEED      DATA        GENERIC      1584           ORCL_PDB$SEED  DATABASE
ORCL_PDB           DATA        GENERIC      1600           ORCL_PDB       DATABASE
ORCL_CDB$ROOT      DATA        GENERIC      9504           ORCL_CDB$ROOT  DATABASE
PDBCOPY            DATA        GENERIC      1600           ORCL_PDB1      DATABASE

The last step displays the new PDB and the corresponding File Group. PDB1 resides in File Group PDBCOPY.
Notice that the Quota Group of the clone is inherited from the source. However, a DBA can change the Quota Group for either the source or the clone as required. At this point you can connect to PDB1 as a new database that was previously a copy of PDB.
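Since the cloned PDB is reported in MOUNTED state, a reasonable follow-up, not shown in the paper, is to open it before connecting. A short sketch assuming the clone name PDB1 from the walkthrough:

```sql
-- Hedged follow-up: a newly created PDB starts MOUNTED and must be
-- opened before applications can connect to it.
ALTER PLUGGABLE DATABASE pdb1 OPEN;

-- Switch into the new PDB and confirm the container name.
ALTER SESSION SET CONTAINER = pdb1;
SHOW CON_NAME
```

From here PDB1 behaves as an independent pluggable database; changes made in it no longer affect the source PDB.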

Changing a Clone's Quota Group

Flex Disk Groups enable database-oriented storage management. One significant benefit is that greater consolidation is possible by separating database clients into separate Quota Groups. This means that one database, or in the case below a PDB, can reside in its own Quota Group, and the amount of space it consumes can be controlled. The following steps illustrate how the newly created PDB clone is moved to a new Quota Group. By default, File Groups reside in the default Quota Group, and cloned File Groups inherit the source's Quota Group.

Using ASMCMD to create a new Quota Group

asmcmd mkqg -G data testqg quota 200g
Diskgroup altered.

The first step creates a new Quota Group and sets the maximum space that can be consumed to 200 gigabytes.

Using ASMCMD to move a File Group to the newly created Quota Group

asmcmd mvfg -G data --filegroup pdbcopy testqg
Diskgroup altered.

The second step moves the File Group representing the PDB clone to the newly created Quota Group.

Using ASMCMD to verify Quota Groups

asmcmd lsfg
File Group         Disk Group  Quota Group  Used Quota MB  Client Name    Client Type
DEFAULT_FILEGROUP  DATA        GENERIC      0
ORCL_PDB$SEED      DATA        GENERIC      1584           ORCL_PDB$SEED  DATABASE
ORCL_PDB           DATA        GENERIC      1600           ORCL_PDB       DATABASE
ORCL_CDB$ROOT      DATA        GENERIC      9504           ORCL_CDB$ROOT  DATABASE
PDBCOPY            DATA        TESTQG       1600           ORCL_PDB1      DATABASE

The last step displays the File Groups and shows that PDB1 now resides in the previously created Quota Group TESTQG.
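The quota itself can also be inspected directly rather than inferred from the File Group listing. A hedged sketch; the exact output columns vary by version:

```shell
# List the quota groups defined in the DATA disk group to confirm that
# TESTQG exists with its 200 GB limit. Output layout is illustrative.
asmcmd lsqg -G data
```

This provides a quick check that the quota was applied before handing the cloned PDB over to its consumers.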

Conclusion

ASM in Oracle Database 18c continues an evolution of superior storage management for the Oracle database that began back in Oracle Database 10g. In Oracle 10g, ASM addressed one goal really well: improving the automation and manageability of storage used by Oracle databases. The next phase of ASM's evolution came in Oracle Database 11g Release 2 with the introduction of ACFS, which extended storage management to all customer data. ASM in Oracle Database 12c Release 1 continued this tradition of storage management evolution by efficiently supporting large cluster configurations, including on-premises cloud deployments. In the next step, ASM in Oracle Database 12c Release 2 provided storage management for Domain Clusters. Domain Clusters, and the Domain Services Cluster more broadly, enable loosely joined and managed configurations that reduce management overhead yet provide for improved consolidation. The key to improving storage consolidation is Flex Disk Groups, which enable database-oriented storage management. Database-oriented storage management empowers administrators by allowing them to classify database data and implement capabilities such as quota management and quality-of-service controls. Finally, the key advancement in Oracle Database 18c is database cloning built on ASM replication. This replication, referred to as Database Cloning, provides point-in-time database replication built on the database-oriented storage management delivered in Oracle Database 12c Release 2.

Oracle Corporation, World Headquarters Worldwide Inquiries 500 Oracle Parkway Phone: +1.650.506.7000 Redwood Shores, CA 94065, USA Fax: +1.650.506.7200 CONNECT WITH US blogs.oracle.com/oracle facebook.com/oracle twitter.com/oracle oracle.com Copyright 2018, Oracle and/or its affiliates. All rights reserved. This document is provided for information purposes only, and the contents hereof are subject to change without notice. This document is not warranted to be error-free, nor subject to any other warranties or conditions, whether expressed orally or implied in law, including implied warranties and conditions of merchantability or fitness for a particular purpose. We specifically disclaim any liability with respect to this document, and no contractual obligations are formed either directly or indirectly by this document. This document may not be reproduced or transmitted in any form or by any means, electronic or mechanical, for any purpose, without our prior written permission. Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners. Intel and Intel Xeon are trademarks or registered trademarks of Intel Corporation. All SPARC trademarks are used under license and are trademarks or registered trademarks of SPARC International, Inc. AMD, Opteron, the AMD logo, and the AMD Opteron logo are trademarks or registered trademarks of Advanced Micro Devices. UNIX is a registered trademark of The Open Group. 0118 White Paper Title February 2018 Author: Jim Williams