Oracle 12cR1 12.1.0.1 2-node RAC on CentOS 6.4 on VMware Workstation 9 – Part IX

Time Required: 60 minutes

Class Materials:

  • Oracle 12cR1 Grid Infrastructure software

Now that we have completed all the preparation steps and the grid pre-install steps, we can install the 12c Oracle Grid Infrastructure software.

The Grid Infrastructure will provide the Cluster software that allows the RAC nodes to communicate, as well as the ASM software to manage the shared disks.

In the previous step we unzipped the installation files downloaded from Oracle, so we should already have a grid install directory.

We will now launch the Grid installer as follows:

[oracle@ruggero grid]$ ./runInstaller &

 
This will start the graphical installer and present the initial screen. Select the Skip software updates option, then click Next.
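If instead of the installer you get an error about being unable to open a display (or about checking monitor colours), the session has no working X connection. A minimal sketch of a quick check, assuming you connect to the node over SSH with X11 forwarding rather than working at the VMware console (the prompts are illustrative):

[you@workstation ~]$ ssh -X oracle@ruggero        # -X enables X11 forwarding
[oracle@ruggero ~]$ echo $DISPLAY                 # should show something like localhost:10.0
[oracle@ruggero ~]$ xdpyinfo | head -1            # quick X test; xdpyinfo comes from the xorg-x11-utils package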

At the next screen select Install and Configure Oracle Grid Infrastructure for a Cluster and click Next.

At the next screen I selected Configure a Standard Cluster. I will play with the Flex Cluster options another time. Click Next.

At the next screen select Advanced Installation and click Next.

At the next screen select the languages you want to install and click Next.

At the next screen enter your cluster information. I have named my cluster larondine and set the SCAN address to the one I defined earlier in the DNS install step – larondine-scan.hadesnet.

I am not configuring GNS.
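It is worth confirming at this point that the SCAN name actually resolves from the node; if it does not, the installer's own SCAN checks will fail later. A quick sketch using my SCAN name from the DNS step (substitute your own):

[oracle@ruggero ~]$ nslookup larondine-scan.hadesnet
# should return the SCAN address(es) defined in DNS; an NXDOMAIN here means
# the DNS setup from the earlier step needs another look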

Enter the cluster and SCAN name for your cluster and click Next.

At the next screen click Add and enter the public hostname and virtual hostname (VIP) of each additional node. In my example I am adding one extra node I called magda with a VIP of magda-vip. Click Next.

At the next screen set the public and private networks for the cluster. In my example eth0 is a DHCP managed network to the outside world, so not suitable for cluster use.

My cluster public network is on subnet 10.10.1.x so I set that network to Public.

My cluster private network is on subnet 10.10.2.x so I set that network to Private. I could also set it to Private and ASM if I intended to use the new 12c Flex ASM feature, where a reduced set of ASM instances serves all the nodes in the cluster, but I am going to leave that for now.
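If you are not sure which interface sits on which subnet, a quick look from the command line shows the mapping. A sketch; the interface names eth1 and eth2 are assumptions, and yours may differ:

[oracle@ruggero ~]$ /sbin/ip -4 addr show | grep -E 'eth|inet '
# eth0 -> DHCP address from the outside world (leave as Do Not Use)
# eth1 -> 10.10.1.x (set to Public)
# eth2 -> 10.10.2.x (set to Private)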

At the next screen I choose to configure the Grid Infrastructure Management Repository. This repository enables several new 12c features, including:

  • Cluster Health Monitor.
  • QoS (Quality of Service) Management.
  • Memory Guard.
  • Rapid Home Provisioning.

Note: In my testing, choosing to configure the Grid Infrastructure Management Repository invariably leads to an INS-20802 error during cluster verification. The error can be ignored, but if you want a completely clean install and do not plan to use the features above, you may choose not to configure the repository.

At the next screen, select Use Standard ASM for storage. If you set your private cluster NIC to Private and ASM in a previous step, then you might choose to experiment with Flex ASM.

At the next screen we choose the disk for ASM to use. The default discovery path is /dev/sd*, which will not work for us because our UDEV rules present the disk as /dev/oracleasm/asm-disk1.

Click Change Discovery Path and change the path to /dev/oracleasm* and click OK.

Changing the discovery path should allow the installer to see the disk we prepared as a candidate disk.
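If no candidate disk appears even after changing the discovery path, check from a shell that the UDEV rule really created the device and handed it to the oracle user. A quick sketch (the exact group and mode depend on what you put in the UDEV rule):

[oracle@ruggero ~]$ ls -l /dev/oracleasm/
# expect a block device named asm-disk1, owned by oracle, typically mode 0660
# if it is missing, re-check the rule in /etc/udev/rules.d/ and re-trigger udev (udevadm trigger)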

Set the redundancy to External and select the candidate disk. Then click Next.

At the next screen, select Use same passwords for these accounts and enter a simple password for the SYS and ASMSNMP accounts.

Since this is a sandbox I tend to use something like “oracle”.

At the next screen, select Do not use Intelligent Platform Management Interface (IPMI). Click Next.

At the next screen, set the Oracle ASM Administrator (OSASM) Group to asmadmin and the Oracle ASM DBA (OSDBA for ASM) Group to asmdba.

Leave the Oracle ASM Operator (OSOPER for ASM) Group blank.
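If the dropdowns do not offer asmadmin or asmdba, confirm that the groups exist and that the oracle user belongs to them; they were created back in the preparation steps. A quick check:

[oracle@ruggero ~]$ id oracle
# the groups list should include asmadmin and asmdba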

Click Next.

At the next screen set the locations for the Oracle base and the grid install.

In my example these are:

Component              Location
Oracle Base            /u01/app/oracle
Grid Infrastructure    /u01/app/12.1.0/grid

Click Next.

At the next screen set the location for the Oracle inventory.

The next screen offers one of my favourite new features of the 12c install process. The installer now allows us to automatically run the root scripts at the end of the installation, instead of having to do that manually.

The reason I like this so much is that I have taught RAC technology across the United States, and in every single class someone tries to run root.sh on the second node before it has completed on the first. That wreaks havoc on the install, and sometimes we have to start all over again.

Check Automatically run configuration scripts and enter the root password into the dialog box.
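If you would rather not hand the installer the root password, leave the box unchecked and run the scripts yourself when prompted. The crucial point, as above, is the order: let them finish completely on the first node before touching the second. A sketch using my grid home from the previous screen; the orainstRoot.sh path is an assumption, so use whichever path the installer prints:

# as root on the first node (ruggero) - wait for both scripts to complete
[root@ruggero ~]# /u01/app/oraInventory/orainstRoot.sh
[root@ruggero ~]# /u01/app/12.1.0/grid/root.sh
# only then, as root on the second node (magda)
[root@magda ~]# /u01/app/oraInventory/orainstRoot.sh
[root@magda ~]# /u01/app/12.1.0/grid/root.sh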

Click Next.

The next screen will verify our configuration and report back.

The Single Client Access Name (SCAN) failure is due to us having only two IP addresses listed for the SCAN (Oracle recommends three). We can ignore that.

The Device Checks for ASM failure is due to a bug in the executable /tmp/CVU_12.1.0.1.0_oracle/exectask, which tries to determine whether our ASM disk is suitable for the install.

One of the things that this process does is execute the following piece of code:

/bin/grep KERNEL== /etc/udev/rules.d/*.rules | grep GROUP | grep MODE | sed -e '/^#/d' -e 's/\*/.*/g' -e 's/\(.*\)KERNEL=="\([^\"]*\)\(.*\)/\2 @ \1 KERNEL=="\2\3/' | awk '{if ("asm-disk1" ~ $1 ) print $3,$4,$5,$6,$7,$8,$9,$10,$11,$12}' | sed -e 's/://' -e 's/\.\*/\*/g'

 
My sed and awk skills are admittedly rusty, but as far as I can see, the only way for that check to succeed is if the device name we selected for the install – in our case /dev/oracleasm/asm-disk1 – matches the original device name in the UDEV rule, which is /dev/sdb1.

But we use UDEV precisely to insulate ourselves from Linux renaming devices as it discovers them on the SCSI bus. Selecting the raw device name would not only defeat one of the chief reasons for using UDEV, but also open us up to failure later on if someone added new devices to the SCSI bus on any node in the cluster.
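You can see the mismatch for yourself by looking at what the first grep in that pipeline returns. A sketch, assuming a rule of roughly the form used in the earlier UDEV step (the exact owner, group and mode are whatever you set there):

[oracle@ruggero ~]$ /bin/grep KERNEL== /etc/udev/rules.d/*.rules
# e.g. KERNEL=="sdb1", NAME="oracleasm/asm-disk1", OWNER="oracle", GROUP="dba", MODE="0660"
# the awk clause compares our chosen name "asm-disk1" against the KERNEL value "sdb1",
# which never matches, so exectask decides the disk has no suitable permissions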

Therefore, we will ignore this error.

Click Ignore All and click Next.

The installer will warn us that we are ignoring potential problems. Click OK and then review the options we have selected.

If everything looks okay, click Install.

The install process can take some time, so relax. It can easily take 45 minutes on my Core i7 laptop with 16GB of RAM, so don’t be surprised if yours takes a while too.

When the installer reaches 79% it will ask us to confirm that we want to run the root scripts. Just click Yes.

If you chose to configure the Grid Infrastructure Management Repository, then as the install nears completion you will get an error:

[INS-20802] Oracle Cluster Verification Utility failed.

Scan through the log for anything that does not refer to SCAN addresses or UDEV devices. In all likelihood the install is fine.
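If you prefer to do that from a shell, the installer logs live under the central inventory. A sketch, assuming the inventory is at /u01/app/oraInventory (check the path the error dialog actually reports, as the file name carries a timestamp):

[oracle@ruggero ~]$ grep -iE 'error|fail' /u01/app/oraInventory/logs/installActions*.log | grep -viE 'scan|udev' | less
# anything left after filtering out the SCAN and UDEV noise is worth a closer look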

Click OK.

The next screen shows a completed install. Click Skip.

The install is now complete.
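Before moving on to the database install, a quick sanity check that the clusterware is running on both nodes does no harm. A minimal sketch, run as the oracle user from either node using the grid home chosen earlier:

[oracle@ruggero ~]$ /u01/app/12.1.0/grid/bin/crsctl check cluster -all
# CRS, CSS and EVM should report as online on both ruggero and magda
[oracle@ruggero ~]$ /u01/app/12.1.0/grid/bin/olsnodes -n
# should list both nodes with their node numbers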

Article Quick Navigation
Previous Step Main Index Next Step

15 thoughts on “Oracle 12cR1 12.1.0.1 2-node RAC on CentOS 6.4 on VMware Workstation 9 – Part IX”

  1. Does this step (running the Grid installer) only need to be run on one of the nodes if all the prep steps were completed on both nodes?

    • Hi Roger,

      Yes the Grid install should only run on one node. The software automatically installs itself onto additional nodes.

  2. To get past this (if you start out with a minimal CentOS like I did), you need to run “yum install xorg-x11-utils*”.

    /usr/bin/xdpyinfo is simply missing, but the crappy (pardon my French) Oracle installer keeps whining about not being able to check whether the screen can show 256+ colors. Geez..

  3. Next obstacle, thrown in by my own error and the Oracle installer’s unwillingness to help in any way:
    the “Scan Name” **must** include the silly little dot “.” at the end, as in your text and example screendumps.
    Otherwise it will … fail.

  4. Hmm,
    I might be doing this wrong… Should the local DNS be able to look up each nodename->IP?
    Am I missing some local DNS server config?
    (the nodenames are added to each server’s hosts file as instructed. “ping roskilde|copenhagen” works like a charm. But nslookup [nodename] fails with NXDOMAIN)

    Below is the “failed” result (the only failure so far) of the Grid runInstaller:
    (A word of advice would be hugely appreciated)

    Task resolv.conf Integrity – This task checks consistency of file /etc/resolv.conf file across nodes
    Operation Failed on Nodes: [roskilde,  copenhagen]
    Verification result of failed node: roskilde  Details:
     – 
    Server: 10.10.1.110 Address: 10.10.1.110#53 ** server can’t find roskilde: REFUSED  – Cause: Cause Of Problem Not Available  – Action: User Action Not Available
     – 
    Server: 99.99.135.52 Address: 99.99.135.52#53 ** server can’t find roskilde: NXDOMAIN  – Cause: Cause Of Problem Not Available  – Action: User Action Not Available
     – 
    Check for integrity of file “/etc/resolv.conf” failed  – Cause: Cause Of Problem Not Available  – Action: User Action Not Available

    • Hi Brian, Did you get the solution for “Check for integrity of file “/etc/resolv.conf” failed – Cause: Cause Of Problem Not Available – Action: User Action Not Available”?? I am also facing the same problem so if you have already fixed it, please share with us how? Thanks

  5. Ok, I am officially lost here in the depths of Oracle tools/scripts.
    Your guide is very, very good. I am sure I must have goofed up somewhere along the line, because the Grid installer couldn’t run its root scripts. And hence it failed “something” regarding the remote node (copenhagen).

    I *did* get past my previous obstacle (Server can’t find “REFUSED”/”NXDOMAIN”) by adding an A record for each of my nodes in each DNS. That made me able to do nslookup on each, and seemed to make the installer happy(er).

    But it all ends now, where the next step (root script) resulted in this little marvel, “unknown to man” (or google) 😀

    CLSRSC-507: The root script cannot proceed on this node copenhagen because either the first-node operations have not completed on node roskilde or there was an error in obtaining the status of the first-node operations. Died at /u01/app/12.1.0/grid/crs/install/crsutils.pm line 3681. The command ‘/u01/app/12.1.0/grid/perl/bin/perl -I/u01/app/12.1.0/grid/perl/lib -I/u01/app/12.1.0/grid/crs/install /u01/app/12.1.0/grid/crs/install/rootcrs.pl -auto -lang=en_US.UTF-8’ execution failed

    I told it to ignore and finished the install (“successful, but some config assistants failed”), and tried starting the next step “oracle database install”, but that can only see “one” node, as one would expect from the previous. I just don’t quite know how to mend the dent, so to speak.

  6. The same here – only one node looks ok… For the second I’ve got “Died at /u01/app/12.1.0.1/grid/crs/install/crsutils.pm line 3681.” Did you find any solution?

  7. I have not found any solution. In fact I gave up on my attempt to use OL7, and am now trying to make things work on OL6 instead.
    Seems like Oracle totally forgot to make guides available for OL7, just threw it out there.
    Now I am stuck at the fact that my list of ASM disks in my new attempt is ….. (wait for it..)
    EMPTY 🙂
    Go figure…

  8. Yep, see CLSRSC-507: Doc ID 1919825.1. Damn, so a 12.1.0.2 fresh install is hosed. How can this be!? Very frustrating… I guess waiting for the Oct2014 PSU is the only option?

  9. I solved this by running a deconfig force on the failed node, and replacing lsnodes.bin with a copy of olsnodes.bin and lsnodes with a copy of olsnodes. It seems lsnodes fails with struct size 0, but olsnodes does not. This gets past the error in the addnode script.

  10. Has anyone solved this error (during GI install) in VMware Workstation?

    CLSRSC-507: The root script cannot proceed on this node because either the first-node operations have not completed on node or there was an error in obtaining the status of the first-node operations

    This seems to be an issue only in VMware Workstation. I was able to install this successfully using VirtualBox VMs. Wondering what in VMware is causing this issue. One difference I can think of is that VirtualBox forces the shared disks to be “Fixed size disks” instead of dynamically sized, but I’m not sure how that can cause this error.

  11. Node Connectivity – This is a prerequisite condition to test whether connectivity exists amongst all the nodes. The connectivity is being tested for the subnets “192.0.2.0,10.0.0.0”  Error:
     - 
    PRVF-4090 : Node connectivity failed for interface “*”  – Cause:  Unable to verify connectivity to the interface indicated using the “OS ping” utility.  – Action:  Verify that the interface indicated is available.
     - 
    PRVF-4090 : Node connectivity failed for interface “*”  – Cause:  Unable to verify connectivity to the interface indicated using the “OS ping” utility.  – Action:  Verify that the interface indicated is available.

    Check Failed on Nodes: [rac2,  rac1]
    Verification result of failed node: rac2  Details:
     - 
    PRVG-11050 : No matching interfaces “*” for subnet “192.0.2.0” on nodes “rac1,rac2”  – Cause:  The interfaces classified as cluster interconnect or public were not found on the specified subnet on the nodes.  – Action:  Ensure that at least one interface with specified name is on the specified subnet on each node of Cluster.
    Verification result of failed node: rac1  Details:
     - 
    PRVG-11050 : No matching interfaces “*” for subnet “192.0.2.0” on nodes “rac1,rac2”  – Cause:  The interfaces classified as cluster interconnect or public were not found on the specified subnet on the nodes.  – Action:  Ensure that at least one interface with specified name is on the specified subnet on each node of Cluster.

    Can you help me? Thanks in advance.
