I’ve been testing the new EMC Unity 600F all-flash storage array with an Oracle database to determine the impact of storage compression on Oracle I/O performance. To do this I have been using Kevin Closson’s SLOB tool.
SLOB is an excellent tool for testing I/O performance as seen by Oracle. Unlike Swingbench, which mimics a real application and therefore spends much of its execution cycle on server CPU, SLOB concentrates exclusively on generating I/O from the Oracle stack.
On the other hand, don’t expect SLOB to generate meaningful data for testing ranking or sorting operations inside Oracle: SLOB data is entirely synthetic and meaningless from an application standpoint.
The following post covers using SLOB to test I/O performance, and what was learned from testing against the Unity 600F all-flash array.
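As a minimal sketch of how a SLOB run is driven, the workload is defined in slob.conf and executed with the bundled scripts. The parameter values, the tablespace name IOPS and the schema count of 8 below are illustrative, not tuned recommendations:

```shell
# slob.conf excerpt (SLOB 2.3-style syntax; values are illustrative)
UPDATE_PCT=0            # 0 = pure read workload; raise for mixed read/write I/O
RUN_TIME=300            # test duration in seconds
SCALE=10000M            # size of each SLOB schema
WORK_UNIT=64            # blocks touched per operation
THREADS_PER_SCHEMA=1

# Load the schemas into a tablespace called IOPS, then run against 8 schemas:
# ./setup.sh IOPS 8
# ./runit.sh 8
```

After each run, SLOB leaves an AWR report behind, which is where the I/O latency and throughput numbers for the array come from.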
Oracle is moving away from ASMlib, and introducing ASM Filter Drivers as a replacement.
ASM Filter Drivers will handle consistent device naming and permissions, as well as filter out illegal I/O to ASM devices to protect against rogue dd commands corrupting ASM disks.
Future plans include support for TRIM commands to enable thinly provisioned disks to reclaim deleted blocks without having to resort to the massively dangerous ASRU tool.
ASM Filter Drivers were introduced with Oracle 12.1.0.2, but the implementation is currently one massive kludge. By default on 12.1.0.2, OEL7 is not supported without a patch (patch 21053000). OEL6 UEK is also not supported without a patch (patch 18321597).
Note that the patches require OPatch 12.1.0.1.7, but Oracle Grid Infrastructure 12.1.0.2 installs OPatch 12.1.0.1.3, so you have to patch the patcher (patch 6880880), so you can patch the Oracle software, to make Oracle ASM Filter Drivers work with Oracle’s own operating system kernel. Clear? Good!
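A rough sketch of the order of operations, assuming a Grid Infrastructure home of /u01/app/12.1.0/grid and patch zips already downloaded from My Oracle Support (all paths and filenames are illustrative for your environment):

```shell
# As the grid software owner, replace the bundled OPatch (patch 6880880)
cd /u01/app/12.1.0/grid
mv OPatch OPatch.old
unzip /tmp/p6880880_121010_Linux-x86-64.zip    # filename is illustrative
./OPatch/opatch version                        # verify the newer OPatch

# Only then apply the ASMFD enablement patch for your platform,
# e.g. 21053000 for OEL7 or 18321597 for OEL6 UEK, per its README
```

Yes, that really is the sequence: patch OPatch first, or the ASMFD patches refuse to apply.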
You cannot install Filter Drivers by default. You have to migrate to them from UDEV or ASMlib.
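The migration itself is driven through asmcmd. A sketch, assuming the Grid Infrastructure environment is sourced; the label DATA1 and device /dev/sdb are illustrative:

```shell
# Check whether the filter driver is loaded and which disks it has claimed
asmcmd afd_state
asmcmd afd_lsdsk

# Stamp an existing ASM disk (previously managed by UDEV or ASMlib)
# with an AFD label, migrating it under filter driver control
asmcmd afd_label DATA1 /dev/sdb --migrate
```

Once labelled, the disk appears under /dev/oracleasmfd/ and the old UDEV rules or ASMlib configuration can be retired.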
Oracle 12.2 should hopefully fix this mess and make Filter Drivers actually usable, but in the meantime it might be fun to play with the new technology and see what it can do.
There are many examples of this online; as usual, this post is more for my benefit than anyone else’s.
The following shows relocating the OCR and voting disks on a 12c RAC.
No downtime is needed. You only need to execute these commands on one node. Log in as root and source the Grid Infrastructure environment to make these changes:
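Assuming the OCR and voting disks are moving to a disk group called +NEWDG from +OLDDG (both names are illustrative), the sequence looks roughly like this:

```shell
# As root, with the Grid Infrastructure environment sourced:
# add the OCR to the new disk group, then remove it from the old one
ocrconfig -add +NEWDG
ocrconfig -delete +OLDDG
ocrcheck                        # confirm the OCR location and integrity

# Voting disks move in a single step
crsctl replace votedisk +NEWDG
crsctl query css votedisk       # confirm the new voting disk locations
```

The cluster keeps running throughout; the check commands at each step are cheap insurance before touching the next component.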
On this blog and elsewhere you will find UDEV rules examples for setting device ownership and naming consistency on older versions of Linux.
With RHEL7 some of the syntax has changed slightly.
This example was created using OEL7 with the Red Hat kernel, but should also work on Red Hat and CentOS.
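As a hedged example, a RHEL7-style rule file might look like the following. The WWID, symlink name and ownership are illustrative; query your own device's WWID with /usr/lib/udev/scsi_id (note the binary moved from /sbin to /usr/lib/udev in RHEL7):

```shell
# /etc/udev/rules.d/99-oracle-asmdevices.rules  (RHEL7/OEL7 syntax)
# Find your device WWID with:  /usr/lib/udev/scsi_id -g -u /dev/sdb
KERNEL=="sd?1", SUBSYSTEM=="block", \
  PROGRAM=="/usr/lib/udev/scsi_id -g -u -d /dev/$parent", \
  RESULT=="36000c29f1f0d873e3ed8f37a8d4a2c1b", \
  SYMLINK+="oracleasm/asm-disk1", OWNER="oracle", GROUP="dba", MODE="0660"

# Reload and apply the rules without a reboot:
# udevadm control --reload-rules
# udevadm trigger --type=devices --action=change
```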
The following shows how to clean up a failed 12cR1 RAC install on Linux, so that you can launch the runInstaller executable again.
Note: this approach assumes you have a single Oracle Home. If you have multiple versions of Oracle installed, the approach may need to be adapted.
USE AT YOUR OWN RISK.
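With that warning in mind, a rough sketch of the cleanup on a single-home system. All paths assume a standard /u01 layout and a Grid home of /u01/app/12.1.0/grid; adjust everything for your environment before running any of it:

```shell
# Deconfigure any partially-installed clusterware first (ignore errors
# if nothing is running)
perl /u01/app/12.1.0/grid/crs/install/rootcrs.pl -deconfig -force

# Remove the Oracle software, inventory and configuration files
rm -rf /u01/app/*
rm -rf /etc/oracle /etc/oraInst.loc /etc/oratab

# Remove clusterware init leftovers and socket files
rm -f  /etc/init.d/init.ohasd /etc/init.d/ohasd
rm -rf /tmp/.oracle /var/tmp/.oracle
```

After this, runInstaller starts from a clean slate as if Oracle had never been installed.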
EMC recently announced the availability of its DSSD rack scale flash storage appliance.
DSSD’s specs are impressive:
- 10 million IOPS
- 100GB/s bandwidth
- 144TB capacity
- 100µs latency
The DSSD D5 takes a different approach to flash storage, eliminating the latency-prone networking layer typically handled by Fibre Channel, InfiniBand or iSCSI. DSSD replaces this with a PCIe extension, connecting servers to the D5 through a proprietary PCIe Isley Card.
DSSD also does away with conventional SSDs, which package NAND technology to look like fast spinning disks, replacing them with Flash Modules that eliminate much of the latency incurred by mimicking spinning disks.
There’s a lot more to the D5 than just that, but I am taking a DBA-centric approach, and all I want to know is: how do I consume this fast storage in my database?
During installation of Oracle RAC 12c (12.1.0.2), the installer, or possibly the Cluster Verification Utility, reports an error:
PRVG-11850 : The system call "connect" failed with error "111" while executing exectask on node ...
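Error "111" here is the Linux errno value ECONNREFUSED: the connection to the exectask listener on the remote node was refused. In my experience this is commonly caused by a local firewall (firewalld or iptables on OEL7) blocking inter-node traffic during the cluster verification checks. A quick sketch to confirm what the errno means and check the usual suspect:

```shell
# Confirm the meaning of errno 111 (prints "Connection refused" on Linux)
python3 -c 'import os; print(os.strerror(111))'

# Then check whether a local firewall is refusing the connection:
# systemctl status firewalld
# iptables -L -n
```

If firewalld turns out to be the culprit, either open the required ports between cluster nodes or disable it on the private interconnect, per your site's security policy.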