Restore an RMAN backup to a different host

Oracle RMAN is a powerful tool with many features for recovering datafiles, tablespaces, or even single blocks, as well as for cloning databases for non-production use. However, restoring a database to an entirely different server (or set of servers) from the one it was backed up on is a somewhat cumbersome process.

In this post we will restore an RMAN backup to a new host, keeping the same database name and datafile names.

Continue reading

RMAN Restore fails with “PSDRPC returns significant error 3113”

When using RMAN to restore a database to a new host, the recover database step fails with:

Crosschecked 43 objects
PSDRPC returns significant error 3113.
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of recover command at 03/18/2017 19:31:23
ORA-03113: end-of-file on communication channel
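
For context, the failing step here is nothing more than the recover run against the newly restored database, along the lines of the sketch below (illustrative only; the full restore sequence is the subject of the restore post above):

RMAN> recover database;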

If you connect to the recovered database and check the database role, you find that the controlfiles have been restored as standby controlfiles:

SQL> select name, open_mode, database_role from v$database;

NAME      OPEN_MODE            DATABASE_ROLE
--------- -------------------- ----------------
NYC11     MOUNTED              PHYSICAL STANDBY

The problem here is that if you use RMAN to restore the controlfiles to a new host, and you are connected to the RMAN catalog, the controlfiles are written as standby controlfiles.

Here is how the controlfiles in this example were restored to the new host:

[oracle@rstemc64vm31 ~]$ rman target / catalog rman/rman@rcat

Recovery Manager: Release 11.2.0.4.0 - Production on Sat Mar 18 19:29:57 2017

Copyright (c) 1982, 2011, Oracle and/or its affiliates. All rights reserved.

connected to target database: NYC11 (not mounted)
connected to recovery catalog database

RMAN> restore controlfile from tag CF_20170318_1435;

Starting restore at 18-MAR-17
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=6 device type=DISK
allocated channel: ORA_DISK_2
channel ORA_DISK_2: SID=63 device type=DISK
allocated channel: ORA_DISK_3
channel ORA_DISK_3: SID=132 device type=DISK
allocated channel: ORA_DISK_4
channel ORA_DISK_4: SID=191 device type=DISK

channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: restoring control file
channel ORA_DISK_1: reading from backup piece /nfs_mount/dd0205_rman/nyc11/cf_NYC11_9471_1.rmn
channel ORA_DISK_1: piece handle=/nfs_mount/dd0205_rman/nyc11/cf_NYC11_9471_1.rmn tag=CF_20170318_1435
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:01
output file name=+DATA/nyc11/controlfile/current.271.938979023
output file name=+FRA/nyc11/controlfile/current.338.938979023
Finished restore at 18-MAR-17

In order to avoid this problem, we need to restore the controlfiles while NOT connected to the RMAN catalog database. This means we cannot use the backup tag, but we can use the name of the backup piece instead:

[oracle@rstemc64vm31 dataguard]$ rman target / catalog rman/rman@rcat

Recovery Manager: Release 11.2.0.4.0 - Production on Sat Mar 18 19:42:14 2017

Copyright (c) 1982, 2011, Oracle and/or its affiliates. All rights reserved.

connected to target database: NYC11 (DBID=334041916, not open)
connected to recovery catalog database

RMAN> list backup of controlfile completed after 'sysdate-(6/24)';
List of Backup Sets
===================
BS Key  Type LV Size       Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
398266  Incr 0  31.02M     DISK        00:00:05     18-MAR-17
        BP Key: 398276   Status: AVAILABLE  Compressed: NO  Tag: CF_20170318_1435
        Piece Name: /nfs_mount/dd0205_rman/nyc11/cf_NYC11_9471_1.rmn
  Control File Included: Ckp SCN: 33220754     Ckp time: 18-MAR-17

BS Key  Type LV Size       Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
398294  Full    31.05M     DISK        00:00:01     18-MAR-17
        BP Key: 398296   Status: AVAILABLE  Compressed: NO  Tag: TAG20170318T144723
        Piece Name: +FRA/nyc11/autobackup/2017_03_18/s_938962043.287.938962043
  Control File Included: Ckp SCN: 33221214     Ckp time: 18-MAR-17

BS Key  Type LV Size       Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
398314  Full    31.02M     DISK        00:00:02     18-MAR-17
        BP Key: 398317   Status: AVAILABLE  Compressed: NO  Tag: TAG20170318T144728
        Piece Name: /nfs_mount/dd0205_rman/nyc11/NYC11_controlfile_83rver40_1_1.ctl
  Control File Included: Ckp SCN: 33221259     Ckp time: 18-MAR-17

BS Key  Type LV Size       Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
398325  Full    31.05M     DISK        00:00:00     18-MAR-17
        BP Key: 398327   Status: AVAILABLE  Compressed: NO  Tag: TAG20170318T144731
        Piece Name: +FRA/nyc11/autobackup/2017_03_18/s_938962051.351.938962051
  Control File Included: Ckp SCN: 33221273     Ckp time: 18-MAR-17

BS Key  Type LV Size       Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
398462  Full    31.05M     DISK        00:00:01     18-MAR-17
        BP Key: 398478   Status: AVAILABLE  Compressed: NO  Tag: TAG20170318T144818
        Piece Name: +FRA/nyc11/autobackup/2017_03_18/s_938962098.318.938962099
  Control File Included: Ckp SCN: 33221554     Ckp time: 18-MAR-17

BS Key  Type LV Size       Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
398517  Full    31.05M     DISK        00:00:00     18-MAR-17
        BP Key: 398519   Status: AVAILABLE  Compressed: NO  Tag: TAG20170318T144827
        Piece Name: +FRA/nyc11/autobackup/2017_03_18/s_938962107.395.938962107
  Control File Included: Ckp SCN: 33221632     Ckp time: 18-MAR-17

The controlfile backed up with tag CF_20170318_1435 is in the backup piece /nfs_mount/dd0205_rman/nyc11/cf_NYC11_9471_1.rmn.

So exit RMAN, and start it again without the connection to the catalog:

[oracle@rstemc64vm31 dataguard]$ rman target /

Recovery Manager: Release 11.2.0.4.0 - Production on Sat Mar 18 19:50:43 2017

Copyright (c) 1982, 2011, Oracle and/or its affiliates. All rights reserved.

connected to target database: NYC11 (not mounted)

RMAN> restore controlfile from '/nfs_mount/dd0205_rman/nyc11/cf_NYC11_9471_1.rmn';

Starting restore at 18-MAR-17
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=69 device type=DISK

channel ORA_DISK_1: restoring control file
channel ORA_DISK_1: restore complete, elapsed time: 00:00:02
output file name=+DATA/nyc11/controlfile/current.271.938979023
output file name=+FRA/nyc11/controlfile/current.338.938979023
Finished restore at 18-MAR-17

RMAN> alter database mount;

database mounted
released channel: ORA_DISK_1

Connect to the database using SQL*Plus and verify that the database has mounted the correct controlfiles and that the database role is shown as PRIMARY.

SQL> show parameter control_files

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
control_files                        string      +DATA/nyc11/controlfile/curren
                                                 t.271.938979023, +FRA/nyc11/co
                                                 ntrolfile/current.338.93897902
                                                 3

SQL> select name, open_mode, database_role from v$database;

NAME      OPEN_MODE            DATABASE_ROLE
--------- -------------------- ----------------
NYC11     MOUNTED              PRIMARY

Once the correct type of control file has been restored, you can launch RMAN again and this time connect to the catalog to restore and recover the database to the new host.
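
For completeness, here is a minimal sketch of what that second RMAN session might look like, reusing the catalog connection from above. The exact restore and recover options (SET NEWNAME, SET UNTIL, channel configuration and so on) depend on your backup layout, so treat this as illustrative only:

[oracle@rstemc64vm31 ~]$ rman target / catalog rman/rman@rcat

RMAN> restore database;
RMAN> recover database;
RMAN> alter database open resetlogs;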

EMC Unity Storage Performance testing with Oracle ASM and SLOB

I’ve been testing the new EMC Unity 600F all-flash storage array with an Oracle database to determine the impact of storage compression on Oracle I/O performance.  To do this I have been using Kevin Closson’s SLOB tool.

SLOB is an excellent tool for testing I/O performance as seen by Oracle.  Unlike Swingbench, which mimics a real application and therefore spends much of its execution cycle on the server CPU, SLOB concentrates exclusively on generating I/O from the Oracle stack.

Conversely, don’t expect SLOB to generate meaningful data for testing ranking or sorting operations inside Oracle.  SLOB generates entirely synthetic data that is meaningless from an application standpoint.
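
As a rough illustration of how little there is to a SLOB run (command names assume SLOB 2.x conventions; the tablespace name and session count here are placeholders, not the values used in the actual tests):

$ cd ~/SLOB
$ vi slob.conf                # set UPDATE_PCT, RUN_TIME, SCALE, etc.
$ ./setup.sh IOPS 8           # load 8 SLOB schemas into tablespace IOPS
$ ./runit.sh 8                # drive I/O from 8 concurrent sessions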

The following post covers using SLOB to test I/O performance, and what was learned from the testing against the Unity 600F all-flash array.

Continue reading

Experimenting with ASM Filter Drivers

Oracle is moving away from ASMlib, and introducing ASM Filter Drivers as a replacement.

ASM Filter Drivers will handle consistent device naming and permissions, as well as filter out illegal I/O to ASM devices, protecting against rogue dd commands corrupting ASM disks.

Future plans include support for TRIM commands to enable thinly provisioned disks to reclaim deleted blocks without having to resort to the massively dangerous ASRU tool.

ASM Filter Drivers were introduced with Oracle 12.1.0.2, but the implementation is currently one massive kludge.  On 12.1.0.2, OEL7 is not supported without a patch (patch 21053000), and OEL6 UEK is likewise not supported without a patch (patch 18321597).

Note that the patches require OPatch 12.1.0.2, but Oracle Grid Infrastructure 12.1.0.2 installs OPatch 12.1.0.1.3, so you have to patch the patcher (patch 6880880) so you can patch the Oracle software, to make Oracle ASM Filter Drivers work with Oracle’s own operating system kernel.  Clear?  Good!
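
Refreshing OPatch is just a matter of unzipping the latest patch 6880880 bundle over the existing OPatch directory in the Grid home. A typical sequence looks something like this (the zip file name is illustrative, and $GRID_HOME is assumed to point at the Grid Infrastructure home):

$ cd $GRID_HOME
$ mv OPatch OPatch.orig
$ unzip -q /tmp/p6880880_121010_Linux-x86-64.zip
$ OPatch/opatch version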

You cannot set up Filter Drivers at install time.  You have to migrate to them from UDEV or ASMlib, roughly as sketched below.
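
Very roughly, the migration involves the moving parts below. This is only a sketch: the disk label and device path are placeholders, some steps must run as root with the Oracle stack down, and the full procedure is in Oracle's documentation.

# as the grid owner: include AFD devices in the ASM discovery string
[oracle@oel6solo ~]$ asmcmd dsset 'AFD:*'

# as root, with the Oracle stack stopped: load the filter driver
[root@oel6solo ~]# crsctl stop has
[root@oel6solo ~]# $GRID_HOME/bin/asmcmd afd_configure

# back as the grid owner: label each existing ASM disk in place
# (--migrate preserves the existing ASM disk contents)
[oracle@oel6solo ~]$ asmcmd afd_label DATA1 /dev/sdb --migrate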

Oracle 12.2 should hopefully fix this mess and make Filter Drivers actually usable, but in the meantime it might be fun to play with the new technology and see what it can do.

Continue reading

ASM Filter Drivers – disks not filtering on reboot

You’ve enabled ASM Filter Drivers, migrated your existing disks, and then after you reboot you see this:

[oracle@oel6solo ~]$ asmcmd afd_lsdsk
--------------------------------------------------------------------------------
Label                     Filtering   Path
================================================================================
DATA2                     DISABLED    /dev/sdc
DATA4                     DISABLED    /dev/sde
DATA3                     DISABLED    /dev/sdd
DATA1                     DISABLED    /dev/sdb

You check the ASM Filter Driver state, and it reports the driver as loaded with filtering at its default setting:

[oracle@oel6solo ~]$ asmcmd afd_state
ASMCMD-9526: The AFD state is 'LOADED' and filtering is 'DEFAULT' on host 'oel6solo.comp-lab.sc'

There is one more step that much of the documentation misses:

[oracle@oel6solo ~]$ $ORACLE_HOME/bin/asmcmd afd_filter -e

Now the ASM FD filtering will survive a reboot:

[oracle@oel6solo ~]$ asmcmd afd_lsdsk
--------------------------------------------------------------------------------
Label                     Filtering   Path
================================================================================
DATA2                     ENABLED     /dev/sdc
DATA4                     ENABLED     /dev/sde
DATA3                     ENABLED     /dev/sdd
DATA1                     ENABLED     /dev/sdb
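
As a final sanity check (a hypothetical follow-up step, output not captured here), re-run afd_state and confirm that filtering is no longer reported as 'DEFAULT':

[oracle@oel6solo ~]$ asmcmd afd_state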