Management on Dell/EMC Storage Arrays

By Zafar Mahmood, Uday Datta Shet, and Bharat Sajnani

ASM migration process

The process for migrating an Oracle Real Application Clusters (RAC) database from Oracle Cluster File System (OCFS) to Oracle Automatic Storage Management (ASM) involves the following major steps:

1. Add and prepare the additional shared storage to store the database files using ASM on all nodes.
2. Create ASM cluster instances on all RAC nodes with the required ASM disk groups.
3. Start the ASM cluster instances on all RAC nodes and mount the newly created ASM disk groups.
4. Register the newly created ASM instances with Oracle Cluster Ready Services (CRS).
5. Prepare the existing OCFS-based RAC database for migration.
6. Migrate the OCFS-based RAC database files to ASM-managed disk groups using the Oracle Recovery Manager (RMAN) utility.
7. Perform the following post-migration steps:
   a. Create a new temporary tablespace, which resides in an ASM disk group.
   b. Re-create the online redo log files on an ASM disk group.
   c. Migrate the server parameter file (spfile) from OCFS to an ASM disk group. (This step is optional.)
8. Perform verification checks on the migrated database.
9. Remove the database files residing on OCFS volumes to reclaim the disk space for further expansion of ASM disk groups.

Step 1: Preparing additional storage for ASM

Administrators can add storage to the shared storage system and make it available for ASM by following these steps:

1. Issue the following commands as user root to change the names of the raw devices to easily recognizable ASM device names:

   mv /dev/raw/raw4 /dev/raw/asm1
   mv /dev/raw/raw5 /dev/raw/asm2

2. Make the user named oracle the owner of the newly added devices:

   chown oracle.dba /dev/raw/asm*

3. Edit the /etc/sysconfig/rawdevices file and add an entry for each storage device to be used with ASM:

   /dev/raw/asm1 /dev/emcpowerb1
   /dev/raw/asm2 /dev/emcpowerc1

4. Restart the raw device binding service on all nodes in the cluster:

   service rawdevices restart

Note: In this example, the raw devices interface is used to prepare the ASM disk groups. Administrators also have the option of using the ASM library driver interface, which is available from the Oracle Technology Network, to prepare the additional storage. The raw devices interface was used in this migration example because, at the time of publication, EMC PowerPath software did not support the ASM library interface.

Step 2: Creating ASM cluster instances on RAC nodes

An Oracle parameter file, such as the one shown in Figure A, can be used to create the ASM instance on all nodes in the cluster. In environments in which a large number of ASM disk groups need to be created, administrators may need to increase the LARGE_POOL_SIZE parameter from its default value of 12 MB. This file should be created in $ORACLE_HOME/dbs as init+ASMn.ora, where n is the node number, on all nodes in the cluster. For example, the file shown in Figure A should be created on node 1.

$ORACLE_HOME/dbs/init+ASM1.ora
*.asm_diskgroups='data','recovery'
*.asm_diskstring='/dev/raw/asm*'
*.background_dump_dest='/opt/oracle/admin/+asm/bdump'
*.cluster_database=true
*.core_dump_dest='/opt/oracle/admin/+asm/cdump'
+ASM1.instance_number=1
+ASM2.instance_number=2
*.instance_type='asm'
*.large_pool_size=20m
*.remote_login_passwordfile='exclusive'
*.user_dump_dest='/opt/oracle/admin/+asm/udump'

Figure A. Creating an ASM instance on node 1

Next, administrators should create the ASM instance dump file and alert log destination folders on all nodes:

mkdir -p /opt/oracle/admin/+asm/udump
mkdir -p /opt/oracle/admin/+asm/cdump
mkdir -p /opt/oracle/admin/+asm/bdump

Reprinted from Dell Power Solutions, August 2005. Copyright 2005 Dell Inc. All rights reserved.
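The raw-device preparation in Step 1 can be sketched as a small idempotent script. This is an illustrative helper, not part of the original procedure: the add_binding function name is hypothetical, and the script defaults to a local example file so it can be dry-run before being pointed at /etc/sysconfig/rawdevices.

```shell
#!/bin/sh
# Hypothetical helper for Step 1: append raw-device bindings for ASM to a
# rawdevices file, skipping entries that are already present. Defaults to a
# local example file; set RAWDEV_FILE=/etc/sysconfig/rawdevices for real use.
RAWDEV_FILE="${RAWDEV_FILE:-rawdevices.example}"

add_binding() {
    raw="$1"; dev="$2"
    # Only append if this raw device is not already bound in the file
    if ! grep -q "^$raw " "$RAWDEV_FILE" 2>/dev/null; then
        echo "$raw $dev" >> "$RAWDEV_FILE"
    fi
}

add_binding /dev/raw/asm1 /dev/emcpowerb1
add_binding /dev/raw/asm2 /dev/emcpowerc1
```

Running the script twice leaves the file unchanged, so it is safe to rerun on each node before restarting the rawdevices service.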
Then, administrators should start the ASM instance named +ASM1 on the first node and create the disk groups, as shown in Figure B.

Step 3: Starting ASM cluster instances and mounting ASM disk groups

The ASM instances should be started on all nodes in the cluster, and the ASM disk groups should be made available for database file storage. Optionally, administrators can create the Oracle server parameter file (spfile) from the parameter file (pfile) to help ease ASM instance management:

SQL> create spfile='/dev/raw/spfile+asm.ora' from pfile;

Next, administrators should edit $ORACLE_HOME/dbs/init+ASM1.ora and point it to the spfile:

spfile='/dev/raw/spfile+asm.ora'

Similarly, administrators should edit the init+ASM2.ora file on the second node and point it to the shared spfile.

Step 4: Registering ASM instances with Oracle Cluster Ready Services

In the fourth step, administrators should register the ASM instances with Oracle CRS so that ASM can start automatically at each reboot, as shown in Figure C. Next, administrators should restart ASM using the server control utility to enable the ASM CRS configuration:

srvctl start asm -n zaf1850-pub
srvctl start asm -n zaf2850-pub

Step 5: Preparing the OCFS-based RAC database for migration

Administrators must ensure that the OCFS database is running by connecting as a user with sysdba privileges. Next, they should determine whether block change tracking is disabled, which is the default setting:

SQL> select status from v$block_change_tracking;

STATUS
----------
DISABLED

Administrators should then shut down the database on all nodes:

srvctl stop database -d zafdb

Next, they should start one of the database instances in nomount state, and then modify the database server parameters to point to the ASM disk groups and enable the use of Oracle Managed Files (OMF), as shown in Figure D.
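As a quick sanity check for the Step 3 spfile edit, a sketch like the following can confirm that a node's init+ASMn.ora contains the pointer to the shared spfile. The check_spfile_pointer function name is an assumption for illustration.

```shell
#!/bin/sh
# Sketch: verify that a given ASM init.ora contains the shared-spfile
# pointer described in Step 3. The function name is hypothetical.
check_spfile_pointer() {
    pfile=$1
    grep -q "^spfile='/dev/raw/spfile+asm.ora'\$" "$pfile"
}
```

For example, check_spfile_pointer $ORACLE_HOME/dbs/init+ASM1.ora succeeds only when the file points at the shared spfile.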
[oracle@zaf1850-pub dbs]$ export ORACLE_SID=+ASM1
[oracle@zaf1850-pub dbs]$ sqlplus "/as sysdba"
SQL> startup nomount;
ASM instance started
SQL> create diskgroup DATA external redundancy disk '/dev/raw/asm1';
SQL> create diskgroup RECOVERY external redundancy disk '/dev/raw/asm2';

Figure B. Starting the ASM instance and creating disk groups

[oracle@zaf1850-pub dbs]$ srvctl add asm -n zaf1850-pub -i +ASM1 -o $ORACLE_HOME
[oracle@zaf1850-pub dbs]$ srvctl add asm -n zaf2850-pub -i +ASM2 -o $ORACLE_HOME
[oracle@zaf1850-pub oracle]$ srvctl enable asm -n zaf1850-pub -i +ASM1
[oracle@zaf1850-pub oracle]$ srvctl enable asm -n zaf2850-pub -i +ASM2

Figure C. Registering ASM instances with Oracle CRS
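For clusters with more than two nodes, the registration commands in Figure C can be generated from a node list rather than typed per node. The following dry-run sketch only prints the commands; gen_srvctl_cmds and the NODES pair format are assumptions, and the output should be reviewed before it is executed against a real cluster.

```shell
#!/bin/sh
# Dry-run sketch generalizing Figure C: print the srvctl registration
# commands for each node:instance pair instead of executing them.
# NODES and gen_srvctl_cmds are illustrative names, not from the article.
NODES="zaf1850-pub:+ASM1 zaf2850-pub:+ASM2"

gen_srvctl_cmds() {
    for pair in $NODES; do
        node=${pair%%:*}   # text before the colon
        inst=${pair#*:}    # text after the colon
        echo "srvctl add asm -n $node -i $inst -o \$ORACLE_HOME"
        echo "srvctl enable asm -n $node -i $inst"
    done
}

gen_srvctl_cmds
```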
[oracle@zaf1850-pub oracle]$ sqlplus "/as sysdba"
SQL> startup nomount
SQL> alter system set control_files='+data/control.ctl' scope=spfile sid='*';
SQL> alter system set db_create_file_dest='+data' scope=spfile sid='*';
SQL> alter system set db_create_online_log_dest_1='+data' scope=spfile sid='*';
SQL> alter system set log_archive_dest_1='location=+recovery' scope=spfile sid='*';
SQL> alter system set db_recovery_file_dest='+recovery' scope=spfile sid='*';

Figure D. Preparing the OCFS-based RAC database for migration

Finally, administrators should shut down the instance:

SQL> shutdown immediate;

Step 6: Migrating OCFS-based RAC database files to ASM

In the sixth step, administrators invoke the RMAN utility and start up the database in nomount state:

[oracle@zaf1850-pub oracle]$ rman
RMAN> connect target
RMAN> startup nomount;

First, administrators should migrate the control file from the OCFS location to the +DATA disk group:

RMAN> restore controlfile from '/u03/zafdb/control01.ctl';

Then, they must mount the database:

RMAN> alter database mount;

Next, administrators should copy the existing database files into the new ASM disk group (this retains the existing files):

RMAN> backup as copy database format '+DATA';

Finally, they should switch the database files into the new ASM disk group named +DATA:

RMAN> switch database to copy;

Once this step is completed, all data files will have been migrated to the ASM disk group. The original data files stored on OCFS volumes will be cataloged as data file copies, so administrators can use them as a backup, or they can use them to migrate back to the former storage system.

Step 7: Performing post-migration steps

Once migration is completed, administrators should perform the following tasks.

Create the temporary tablespace. The temporary tablespace and the redo log files must be relocated and created on an ASM disk group to complete the migration process. To do so, administrators should take the following steps:

1. Exit RMAN and open the database on one of the nodes:

   SQL> alter database open;

2. Create the temporary tablespace in an ASM disk group:

   SQL> create temporary tablespace temp_asm tempfile '+DATA' size 100m;

3. Make the new temporary tablespace the default temporary tablespace for the database and drop the old temporary tablespace, which was on an OCFS volume:

   SQL> alter database default temporary tablespace temp_asm;
   SQL> drop tablespace temp;

Re-create the online redo log files. Before dropping and re-creating the online redo log files, administrators should take the following steps:

1. Archive the online redo log files and then stop the archiving process:

   SQL> alter system archive log all;
   SQL> alter system archive log stop;

2. Create online redo log files in the +DATA ASM disk group using separate ASM directories for each instance. Connect to
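The Step 6 RMAN sequence lends itself to being captured in a command file so it can be reviewed before it touches the database. This is a sketch under the article's example paths; the migrate_to_asm.rman file name is an assumption.

```shell
#!/bin/sh
# Sketch: write the Step 6 RMAN command sequence to a script file for
# review, then replay it with: rman target / @migrate_to_asm.rman
# The script file name is hypothetical; the control file path is the
# article's example location.
RMAN_SCRIPT="${RMAN_SCRIPT:-migrate_to_asm.rman}"

cat > "$RMAN_SCRIPT" <<'EOF'
startup nomount;
restore controlfile from '/u03/zafdb/control01.ctl';
alter database mount;
backup as copy database format '+DATA';
switch database to copy;
EOF
```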
the ASM instance on any node:

   SQL> alter diskgroup DATA add directory '+DATA/zafdb/log1';
   SQL> alter diskgroup DATA add directory '+DATA/zafdb/log2';

3. Find the number, size, thread, and group information for the existing log files so that they can be re-created on the ASM disk group, as shown in Figure E.

SQL> col member format a25
SQL> select lf.member, l.bytes, l.group#, l.thread#
     from v$logfile lf, v$log l
     where lf.group# = l.group# and lf.type = 'ONLINE'
     order by l.thread#, l.sequence#;

MEMBER                    BYTES    GROUP#   THREAD#
------------------------- -------- -------- --------
/u03/zafdb/redo01.log     10485760        1        1
/u03/zafdb/redo02.log     10485760        2        1
/u03/zafdb/redo03.log     10485760        3        2
/u03/zafdb/redo04.log     10485760        4        2

Figure E. Obtaining information about existing log files

4. Create the redo log files for both threads (thread 1 is zafdb1 and thread 2 is zafdb2, respectively) on the ASM disk groups and directories previously created:

   SQL> alter database add logfile thread 1
        group 5 ('+DATA/zafdb/log1/log11.log') size 10240K,
        group 6 ('+DATA/zafdb/log1/log12.log') size 10240K;

5. Perform a log file switch twice to make the newly created log file groups 5 and 6 active and current, respectively. Drop the old redo log file groups 1 and 2 once they are inactive. Figure F shows these steps.

SQL> select group#, status from v$log;

GROUP#   STATUS
-------- ------------
       1 INACTIVE
       2 INACTIVE
       3 INACTIVE
       4 CURRENT
       5 ACTIVE
       6 CURRENT

SQL> alter database drop logfile group 1;
SQL> alter database drop logfile group 2;

Figure F. Creating new log file groups on ASM disk groups

6. Similarly, create the log files for thread 2:

   SQL> alter database add logfile thread 2
        group 7 ('+DATA/zafdb/log2/log21') size 10240K,
        group 8 ('+DATA/zafdb/log2/log22') size 10240K;

7. Start up the other instance and switch the log files for instance 2 (zafdb2) twice to make the existing log file groups inactive (this operation must be performed from the second instance). Figure G shows these steps.

SQL> select group#, status from v$log;

GROUP#   STATUS
-------- ------------
       4 INACTIVE
       5 INACTIVE
       6 CURRENT
       7 ACTIVE
       8 CURRENT

SQL> alter database drop logfile group 3;
SQL> alter database drop logfile group 4;

Figure G. Making existing log file groups inactive
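Because steps 4 and 6 repeat the same add-logfile pattern per thread, the statement can be generated from the thread number, log directory, and group numbers. This sketch is illustrative: gen_add_logfile is a hypothetical name, and it appends a .log extension uniformly, whereas the article's thread 2 example omits it.

```shell
#!/bin/sh
# Sketch: build the "alter database add logfile" statement for one thread
# from its thread number, ASM log directory, and two group numbers,
# mirroring steps 4 and 6 above. The function name and the uniform .log
# suffix are illustrative choices, not from the article.
gen_add_logfile() {
    thread=$1; dir=$2; g1=$3; g2=$4
    printf "alter database add logfile thread %s\n" "$thread"
    printf "  group %s ('%s/log%s1.log') size 10240K,\n" "$g1" "$dir" "$thread"
    printf "  group %s ('%s/log%s2.log') size 10240K;\n" "$g2" "$dir" "$thread"
}

gen_add_logfile 1 '+DATA/zafdb/log1' 5 6
gen_add_logfile 2 '+DATA/zafdb/log2' 7 8
```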
Migrate the server parameter file. Unlike the previous two tasks, this task is optional. Administrators can migrate the spfile server parameter file from OCFS to an ASM disk group by entering the following at the Oracle SQL*Plus command line:

SQL> create pfile='/home/oracle/pfile_asm_final.ora' from spfile;
SQL> create spfile='+data/zafdb/spfilezafdb.ora' from pfile='/home/oracle/pfile_asm_final.ora';

They should then change the $ORACLE_HOME/dbs/initzafdb.ora file to point to the new location of the spfile:

SPFILE='/u03/zafdb/spfilezafdb.ora'     (old)
SPFILE='+DATA/zafdb/spfilezafdb.ora'    (new)

Step 8: Verifying the migrated database

In the eighth step, administrators should make sure that all redo log and database files have been migrated to the ASM disk group, as shown in Figure H.

SQL> select member from v$logfile;

MEMBER
-------------------------
+DATA/zafdb/log2/log21
+DATA/zafdb/log2/log22
+DATA/zafdb/log1/log11.log
+DATA/zafdb/log1/log12.log

SQL> select name from v$datafile;

+DATA/zafdb/datafile/system.260.1
+DATA/zafdb/datafile/undotbs1.261.1
+DATA/zafdb/datafile/sysaux.259.1
+DATA/zafdb/datafile/undotbs2.262.1
+DATA/zafdb/datafile/users.263.1
+DATA/zafdb/datafile/data.257.1
+DATA/zafdb/datafile/load.258.1

SQL> select name from v$controlfile;

+DATA/control.ctl

SQL> select name, status from v$tempfile;

NAME                                STATUS
----------------------------------- -------
+DATA/zafdb/tempfile/temp_asm.266.1 ONLINE

Figure H. Verifying the database that has been migrated to ASM

Next, administrators should try shutting down and restarting the database before deleting the old OCFS files and volumes (see Figure I).

[oracle@zaf1850-pub dbs]$ srvctl stop database -d zafdb
[oracle@zaf1850-pub dbs]$ srvctl start database -d zafdb
[oracle@zaf1850-pub dbs]$ srvctl status database -d zafdb
Instance zafdb1 is running on node zaf1850-pub
Instance zafdb2 is running on node zaf2850-pub

Figure I. Shutting down and restarting the ASM-managed database

Step 9: Removing database files on OCFS

In the final step, administrators should reclaim the space used by the OCFS volumes: remove the original OCFS database files, and then remove the OCFS volumes:

[oracle@zaf1850-pub dbs]$ rman target /
RMAN> delete copy of database;
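The Step 8 checks boil down to confirming that every reported file name begins with a disk group prefix (+). A small sketch of that test, with a hypothetical all_on_asm function, might look like this; feed it names taken from v$datafile, v$logfile, v$controlfile, and v$tempfile.

```shell
#!/bin/sh
# Sketch for the Step 8 verification: succeed only if every file name given
# lives in an ASM disk group (begins with "+"); print any path that is
# still on an OCFS volume. The function name is hypothetical.
all_on_asm() {
    for f in "$@"; do
        case $f in
            +*) ;;                        # ASM path: OK
            *)  echo "not on ASM: $f"; return 1 ;;
        esac
    done
    return 0
}
```

For example, all_on_asm +DATA/zafdb/datafile/system.260.1 /u03/zafdb/redo01.log prints the OCFS path and returns nonzero, flagging a file that was not migrated.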