Yesterday we saw the latest decision in the long-running legal spectacle of Oracle suing Google for allegedly infringing Oracle’s Java API.
Google defeats Oracle in Java code copyright case
I am not a lawyer, nor do I claim any depth of legal expertise. As engineers, it's easy to dismiss these events as one tech giant looking to squeeze some money from another tech giant using its lawyers instead of its products.
But these courtroom events should directly concern us as engineers.
With the rise of all-flash storage arrays, DBAs have been exploring the opportunity to use the native 4K sectors/4K logical block mode of flash drives.
Traditional spinning disk almost universally uses a 512-byte block (in reality, most arrays now use a 520-byte block, with 8 bytes reserved for the Data Integrity Field, or DIF). Flash drives, however, use a 4096-byte block, which many all-flash arrays will now expose to the operating system if instructed to do so.
On an EMC XtremIO, the LUN block size may be selected at LUN creation time from the Logical Block Size drop-down.
Several popular operating systems including Windows Server 2012, Red Hat Enterprise Linux 6.0 and Solaris 11.1 support this new configuration.
The advantage is that by writing 4K blocks instead of 512-byte blocks, the all-flash array avoids the read/modify/write cycle otherwise required to update 512 bytes within a 4K physical block. There are some modest performance benefits to doing this, but don't expect anything radical in most cases.
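If you want to verify what block size a LUN is actually presenting to a Linux host, a quick sanity check is shown below; the device name sdb is just a placeholder, so substitute your own.

```
# Logical and physical block sizes as reported by the Linux kernel
# (sdb is a placeholder - substitute your actual device)
cat /sys/block/sdb/queue/logical_block_size
cat /sys/block/sdb/queue/physical_block_size

# The same information via blockdev: --getss reports the logical
# sector size, --getpbsz the physical block size
blockdev --getss --getpbsz /dev/sdb
```

A native 4K LUN will report a logical block size of 4096, while a LUN created with the Normal (512 LBs) setting will report 512.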
However, be aware that at present the VMware ESX 5.5 hypervisor cannot work with LUNs that use a logical block size of 4K, even if they are presented as RDMs. If you try to attach native 4K LUNs to a guest OS as RDMs, the guest OS power-up will fail with:
38 (Function not implemented)
A VMware Knowledge Base article, KB 2091600, confirms this is expected behavior.
When using VMware, be sure to select Normal (512 LBs) from the Logical Block Size drop-down when creating LUNs on your XtremIO array.
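Once the guest is up, it is worth confirming from inside the guest OS that the RDM really is presenting 512-byte sectors. One way to do this, assuming the sg3_utils package is installed and sdb is your RDM device (both assumptions), is:

```
# READ CAPACITY(16) reports the logical block length the LUN presents
# (requires sg3_utils; sdb is a placeholder for your RDM device)
sg_readcap --long /dev/sdb
```

For a LUN created with the Normal (512 LBs) setting, the reported logical block length should be 512 bytes.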
A couple of weeks ago, Oracle released the long awaited Oracle 12c database, with lots of exciting new features.
A couple of great blog posts have already been written on how to install it, but from what I have seen they rely on Oracle's OVM technology and/or Oracle Enterprise Linux.
This blog post is a detailed step-by-step of Oracle 12cR1 RAC using VMware Workstation and CentOS 6.4.
Time Required: 20 minutes
- Completed Oracle 12c install
The next step is to create a new 12c database!
The Database Configuration Assistant (dbca) is still the method used to create a new database in 12c. But before we launch it, we will modify the /etc/oratab file.
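As a sketch of that change, assuming a standard OFA path and an example SID of orcl (both are placeholders; use your own values):

```
# Append an entry for the new instance to /etc/oratab
# Format is SID:ORACLE_HOME:startup-flag
# (orcl and the path below are placeholders)
echo "orcl:/u01/app/oracle/product/12.1.0/dbhome_1:N" >> /etc/oratab

# Then launch the Database Configuration Assistant as the oracle user
export ORACLE_HOME=/u01/app/oracle/product/12.1.0/dbhome_1
export PATH=$ORACLE_HOME/bin:$PATH
dbca
```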
Time Required: 60 minutes
- Oracle 12cR1 Database software
The next step is to install the Oracle 12c database software.
The database install process is largely unchanged from the 11g installer, so this should be familiar territory to most DBAs.
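If you want the bare-bones version of kicking it off, assuming the standard 12cR1 media file names (check these against your actual downloads):

```
# As the oracle user, unzip both parts of the database media
# (file names are assumptions - verify against your downloads)
unzip linuxamd64_12c_database_1of2.zip
unzip linuxamd64_12c_database_2of2.zip

# The zips extract into a directory called database;
# launch the Oracle Universal Installer from there
cd database
./runInstaller
```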
Oracle 12cR1 (12.1.0.1) 2-node RAC on CentOS 6.4 on VMware Workstation 9 – Part IX
Time Required: 60 minutes
- Oracle 12cR1 Grid Infrastructure software
Now that we have completed all the preparation steps and the grid pre-install steps, we can install the 12c Oracle Grid Infrastructure software.
The Grid Infrastructure will provide the cluster software that allows the RAC nodes to communicate, as well as the ASM software to manage the shared disks.
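As a rough sketch of the launch, assuming the media has been unzipped into a grid directory (node names here are placeholders for your actual RAC nodes):

```
# Optional but recommended: run the Cluster Verification Utility
# pre-install check first (node1 and node2 are placeholders)
cd grid
./runcluvfy.sh stage -pre crsinst -n node1,node2 -verbose

# Then launch the Oracle Universal Installer
./runInstaller
```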
Time Required: 30 minutes
- Oracle 12cR1 Grid Infrastructure software
Next we are going to perform some final steps before we can launch the Oracle Grid Infrastructure install.
The Grid Infrastructure will provide the cluster software that allows the RAC nodes to communicate, as well as the ASM software to manage the shared disks.
To begin, download the zip files from the Oracle software download website and unzip them on Ruggero. Make sure you are logged in as the oracle user, so that oracle owns the unzipped files.
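A sketch of that step, assuming the standard 12cR1 grid media file names (verify against your actual downloads):

```
# Logged in as the oracle user on Ruggero, unzip both parts
# of the Grid Infrastructure media
# (file names are assumptions - verify against your downloads)
unzip linuxamd64_12c_grid_1of2.zip
unzip linuxamd64_12c_grid_2of2.zip

# Confirm the extracted grid directory is owned by oracle
ls -ld grid
```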