So you’ve made some changes to your SPFILE, and since the parameters you wanted to change cannot be modified on a running instance, you used the scope=spfile clause.
Now you go to restart your database, and you find the instance won’t start!
[oracle@sio01-mgmt sql]$ srvctl start database -d dbbench
PRCR-1079 : Failed to start resource ora.dbbench.db
CRS-5017: The resource action "ora.dbbench.db start" encountered the following error:
ORA-01078: failure in processing system parameters
For details refer to "(:CLSN00107:)" in "/u01/app/oracle/diag/crs/sio02-mgmt/crs/trace/crsd_oraagent_oracle.trc".
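One way out of this situation (a sketch; the PFILE path below is illustrative, not from this system) is to dump the SPFILE to a text PFILE, fix or remove the offending parameter in an editor, start the instance from the edited copy, and then rebuild the SPFILE. CREATE PFILE FROM SPFILE works even against an idle instance:

```sql
SQL> CREATE PFILE='/tmp/initdbbench.ora' FROM SPFILE;
-- edit /tmp/initdbbench.ora and fix the bad parameter, then:
SQL> STARTUP PFILE='/tmp/initdbbench.ora';
SQL> CREATE SPFILE FROM PFILE='/tmp/initdbbench.ora';
```

After recreating the SPFILE, restart once more through srvctl so the instance comes up on the corrected server parameter file.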
I have been running into some problems recently with 12cR2 databases and Kevin Closson’s SLOB tool.
The SLOB setup.sh script allows for the concurrent loading of multiple schemas, and if you are loading a large amount of data, being able to load concurrently is a significant time saver.
With LOAD_PARALLEL_DEGREE set to 8, I got the following error:
ORA-27090: Unable to reserve kernel resources for asynchronous disk I/O
Linux-x86_64 Error: 11: Resource temporarily unavailable
Additional information: 3
Additional information: 128
Additional information: 140728056780720
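ORA-27090 means a process could not allocate an asynchronous I/O context: each io_setup() call counts against the kernel-wide fs.aio-max-nr limit, and once fs.aio-nr reaches it, further requests fail. A quick way to inspect both values (a minimal sketch; the fallback values are placeholders for non-Linux systems):

```shell
#!/bin/sh
# Show the kernel-wide cap on outstanding async I/O requests and
# how many are currently allocated. ORA-27090 appears when a new
# io_setup() would push aio-nr past aio-max-nr.
aio_max=$(cat /proc/sys/fs/aio-max-nr 2>/dev/null || echo unknown)
aio_now=$(cat /proc/sys/fs/aio-nr 2>/dev/null || echo unknown)
echo "fs.aio-max-nr = ${aio_max}"
echo "fs.aio-nr     = ${aio_now}"
```

If the limit is exhausted, raising it (for example `sysctl -w fs.aio-max-nr=1048576`, made permanent in /etc/sysctl.conf) is one remedy; the other, as described below, is to stop spawning so many I/O-issuing processes in the first place.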
These servers were new Dell R630s with plenty of horsepower, so the idea that just 8 parallel threads would cause this type of failure was puzzling.
Further investigation of the trace file showed that the problem occurred on the index shrink command:
SQL> ALTER INDEX i_cf1 SHRINK SPACE COMPACT
ERROR at line 1:
ORA-12801: error signaled in parallel query server P13L, instance
After some time investigating, it appears Oracle 12c computes much higher defaults for PARALLEL_MAX_SERVERS and PARALLEL_SERVERS_TARGET. In my case, PARALLEL_MAX_SERVERS had defaulted to 2240.
Since the SLOB data load uses the parallel query option, Oracle was spawning thousands of slave processes all trying to issue ASYNC IO.
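You can check what your instance defaulted to before changing anything. The documented default for PARALLEL_MAX_SERVERS is derived from CPU_COUNT, PARALLEL_THREADS_PER_CPU, and a concurrent-users multiplier, so on a many-core server it easily reaches into the thousands:

```sql
SQL> SHOW PARAMETER cpu_count
SQL> SHOW PARAMETER parallel_max_servers
SQL> SHOW PARAMETER parallel_servers_target
```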
So I set the numbers to what I considered more reasonable:
SQL> alter system set parallel_max_servers=400 sid='*';
SQL> alter system set parallel_min_servers=40 sid='*';
SQL> alter system set parallel_servers_target=400 sid='*';
Now SLOB was able to load data with eight concurrent processes.
Oracle RMAN is a powerful tool with many features for recovering datafiles, tablespaces or even single blocks, as well as cloning databases for non-production uses. However, restoring a database to an entirely different server (or set of servers) from the one it was backed up on is a somewhat cumbersome process.
In this post we will restore an RMAN backup to a new host, keeping the same database name and datafile names.
When using RMAN to restore a database to a new host, the recover database step fails with:
Crosschecked 43 objects
PSDRPC returns significant error 3113.
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-03002: failure of recover command at 03/18/2017 19:31:23
ORA-03113: end-of-file on communication channel
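For context, a restore to a new host generally follows this outline (a sketch only: the DBID and backup paths below are placeholders, not values from this environment):

```
RMAN> SET DBID 1234567890;
RMAN> STARTUP NOMOUNT;
RMAN> RESTORE SPFILE FROM '/backup/spfile_dbbench.bkp';
RMAN> STARTUP FORCE NOMOUNT;
RMAN> RESTORE CONTROLFILE FROM '/backup/ctl_dbbench.bkp';
RMAN> ALTER DATABASE MOUNT;
RMAN> CATALOG START WITH '/backup/';
RMAN> RESTORE DATABASE;
RMAN> RECOVER DATABASE;
RMAN> ALTER DATABASE OPEN RESETLOGS;
```

The failure above occurs at the RECOVER DATABASE step, which is what the rest of this post works through.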
Frequently I find I want to check what operating system is certified for which version of the database, and I can never remember the MOS note that explains it.
So here it is: MOS Note 1304727.1
Connect to My Oracle Support to access the note.
I’ve been testing the new EMC Unity 600F all-flash storage array with an Oracle database to determine the impact of storage compression on Oracle I/O performance. To do this I have been using Kevin Closson’s SLOB tool.
SLOB is an excellent tool for testing I/O performance as seen by Oracle. Unlike Swingbench which mimics a real application and therefore spends much of its execution cycle on the server CPU, SLOB concentrates exclusively on generating I/O from the Oracle stack.
Conversely, don’t expect SLOB to generate meaningful data to test ranking or sorting operations inside of Oracle. SLOB generates entirely synthetic data that is meaningless from an application standpoint.
The following posts cover using SLOB to test I/O performance, and what was learned from testing against the Unity 600F all-flash array.
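To give a flavor of how a SLOB run is shaped, here is an illustrative slob.conf fragment. The values are examples for discussion, not the settings used in these tests:

```
# illustrative slob.conf fragment -- values are examples only
UPDATE_PCT=0            # 0 = pure read workload
RUN_TIME=300            # seconds per test run
SCALE=10000M            # data loaded per schema
WORK_UNIT=64            # blocks touched per operation
LOAD_PARALLEL_DEGREE=8  # concurrent schema loaders in setup.sh
```

Setting UPDATE_PCT to 0 keeps the workload purely read-oriented, which is what you want when the goal is to measure storage read latency rather than redo or DBWR behavior.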
Oracle is moving away from ASMlib, and introducing ASM Filter Drivers as a replacement.
ASM Filter Drivers will handle consistent device naming and permissions, as well as filter out illegal I/O to ASM devices to protect against rogue dd commands corrupting ASM disks.
Future plans include support for TRIM commands to enable thinly provisioned disks to reclaim deleted blocks without having to resort to the massively dangerous ASRU tool.
ASM Filter Drivers were introduced with Oracle 12.1.0.2, but the implementation is currently one massive kludge. By default on 12.1.0.2, OEL7 is not supported without a patch (patch 21053000), and OEL6 UEK is also not supported without a patch (patch 18321597).
Note that those patches require a newer OPatch than the one Oracle Grid Infrastructure installs, so you have to patch the patcher (patch 6880880), so you can patch the Oracle software, to make Oracle ASM Filter Drivers work with Oracle’s own operating system kernel. Clear? Good!
You cannot install Filter Drivers by default. You have to migrate to them from UDEV or ASMlib.
Oracle 12.2 should hopefully fix this mess and make Filter Drivers actually usable, but in the meantime it might be fun to play with the new technology and see what it can do.
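The migration itself is driven through ASMCMD. A minimal sketch of relabeling an existing ASM disk under AFD (the device name and label here are hypothetical):

```
$ asmcmd afd_state                            # confirm the filter driver is loaded
$ asmcmd afd_label DATA1 /dev/sdb --migrate   # relabel a disk already in use by ASM
$ asmcmd afd_lsdsk                            # list disks now managed by AFD
```

Once the disks are labeled, the ASM disk discovery string is pointed at the filter driver ('AFD:*') so ASM sees the devices through AFD rather than through UDEV or ASMlib paths.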