How To Install Oracle 10g RAC On Solaris 10 - Part 1 of 4

Pre-Installation Tasks, CRS Installation, CRS Patching & Regression

BACKGROUND & OVERVIEW

The following documentation provides instructions for building an Oracle Real Application Cluster (RAC) database residing on a Sun Solaris 10 (Version 5.10) operating system.

This document is Part 1 of a 4-Part Installation Guide.

ASSUMPTIONS & PRE-REQUISITES

This document expects and assumes the following:

  • The instructions are carried out by a qualified DBA, fully conversant with Oracle, including RAC.
  • Access to the internet is available.
  • All necessary client software, e.g. Telnet and X-Server, is available.
  • The necessary RAC software has been installed under /u02/SOFTWARE.
    • The guide assumes sub-directories of CLUSTER, COMPANION, DATABASE and OPATCH exist.
  • The resultant database will be part of a RAC configuration.
  • All references to SIDxx should be replaced with the correct database name.
  • All $variable references assume the .profile as described in the File Listings section has been implemented and run.
  • All root.sh references indicate a script that must be run as root. The exact name of the script may vary depending upon which piece of software is being installed or patched; the GUI will display the exact name to use.
  • The screen shots displayed are for guidance only; the values shown on them do not necessarily correspond to the values used in the worked examples.

SETTINGS USED FOR THE WORKED EXAMPLE

  • The following settings and values have been used in the example:

IP ADDRESSES & HOST NAMES

Node 1 Host Name : HOST1
Node 1 IP Address : 192.168.1.1
Node 1 Oracle VIP Host Name : HOST1-vip
Node 1 Oracle VIP IP Address : 192.168.1.2
Node 1 Heart-beat Host Name : HOST1-hb
Node 1 Heart-beat IP Address : 192.168.2.1
Node 2 Host Name : HOST2
Node 2 IP Address : 192.168.1.3
Node 2 Oracle VIP Host Name : HOST2-vip
Node 2 Oracle VIP IP Address : 192.168.1.4
Node 2 Heart-beat Host Name : HOST2-hb
Node 2 Heart-beat IP Address : 192.168.2.2

SOFTWARE VERSIONS & LOCATIONS

Clusterware Source Software : HOST1:/u02/SOFTWARE/CLUSTER/10.2
Clusterware Source Patch Software : HOST1:/u02/SOFTWARE/CLUSTER/PATCHES
Node 1 Clusterware $HOME : HOST1:/u01/app/crs/product/10.2.0
Node 2 Clusterware $HOME : HOST2:/u01/app/crs/product/10.2.0
ASM Source Software : HOST1:/u02/SOFTWARE/DATABASE/10.2
ASM Source Patch Software : HOST1:/u02/SOFTWARE/DATABASE/PATCHES
Node 1 ASM $HOME : HOST1:/u01/app/asm/product/10.2.0
Node 2 ASM $HOME : HOST2:/u01/app/asm/product/10.2.0
Database Source Software : HOST1:/u02/SOFTWARE/DATABASE/10.2
Database Source Patch Software : HOST1:/u02/SOFTWARE/DATABASE/PATCHES
Node 1 Database $HOME : HOST1:/u01/app/oracle/product/10.2.0
Node 2 Database $HOME : HOST2:/u01/app/oracle/product/10.2.0
Node 1 OPatch Location : HOST1:/u02/SOFTWARE/OPATCH/OPatch
Node 2 OPatch Location : HOST2:/u02/SOFTWARE/OPATCH/OPatch

DISK INFORMATION

OCR Location : /dev/rdsk/emcpower0a
Voting Disk Location : /dev/rdsk/emcpower0b
Redundancy Method : External
ASM SPFile : /dev/rdsk/emcpower1a
ASM Password File : /dev/rdsk/emcpower1b
Database SPFile : /dev/rdsk/emcpower2a
Database Password File : /dev/rdsk/emcpower2b
Database Primary Disk Group : /dev/rdsk/emcpower3a - +DATA
Database Recovery Disk Group : /dev/rdsk/emcpower4a - +REC

CLUSTER, INSTANCE & DATABASE NAMES

Cluster Name : CRS_NAME
Node 1 Instance Name : SID01
Node 2 Instance Name : SID02
RAC Database Name : SID00
ASM Name : ASM_NAME
Node 1 ASM Instance Name : +ASM1
Node 2 ASM Instance Name : +ASM2

STEP-BY-STEP GUIDE

  1. Design network layout – See Example
  2. Design SAN layout – See Example
  3. Ensure the /etc/hosts has been configured correctly on each node. It will need to contain all necessary RAC addresses – See Example
  4. Ping all Host Names - all bar the Oracle VIPs should respond (a ping sketch appears after the /etc/hosts listing in the File Listings section).
    • If the Oracle VIPs have been pre-bound, use ifconfig to unplumb them.
  5. Check Oracle uid and dba gid – all nodes should be configured with the same values.
  6. Ensure /var/opt/oracle directory exists and is owned by oracle:dba.
  7. As oracle, set up user equivalence:
    • Note: If fully qualified host names are used here, then they must be used in other parts of the installation too. Similarly, if the short names are used here, then the short names must be used for the remainder of the install. A verification sketch for user equivalence appears after this step list.
    • On HOST1
      • ssh-keygen -b 1024 -t dsa
      • <Return>
      • <Return>
      • <Return>
    • On HOST2
      • ssh-keygen -b 1024 -t dsa
      • <Return>
      • <Return>
      • <Return>
    • On HOST1
      • cd $HOME/.ssh
      • cat id_dsa.pub | ssh HOST1 "cat - >> ~/.ssh/authorized_keys2"
      • yes
      • <oracle password>
      • cat id_dsa.pub | ssh HOST2 "cat - >> ~/.ssh/authorized_keys2"
      • yes
      • <oracle password>
    • On HOST2
      • cd $HOME/.ssh
      • cat id_dsa.pub | ssh HOST1 "cat - >> ~/.ssh/authorized_keys2"
      • yes
      • <oracle password>
      • cat id_dsa.pub | ssh HOST2 "cat - >> ~/.ssh/authorized_keys2"
      • yes
      • <oracle password>
  8. If the system is configured to use only scp and ssh, then rcp and rsh will need to be linked to scp and ssh respectively:
    • On HOST1
      • su - root
      • <root password>
      • cd /usr/bin
      • mv rcp rcp.orig
      • ln -s scp rcp
      • mv rsh rsh.orig
      • ln -s ssh rsh
      • exit
    • On HOST2
      • su - root
      • <root password>
      • cd /usr/bin
      • mv rcp rcp.orig
      • ln -s scp rcp
      • mv rsh rsh.orig
      • ln -s ssh rsh
      • exit
  9. Set-up .profile and .ssh config.
    • Ensure the oracle .profile is configured and doesn't display any prompts.
      • On HOST1
        • cd $HOME/.ssh
        • vi config
        • LogLevel quiet
        • ForwardX11 no
      • On HOST2
        • cd $HOME/.ssh
        • vi config
        • LogLevel quiet
        • ForwardX11 no
  10. Ensure /u01 and /u02 exist on each node and are owned by oracle:dba
  11. Make the oracle directories:
    • On HOST1
      • mkdir -p /u01/app/crs/product/10.2.0
      • mkdir -p /u01/app/asm/product/10.2.0
      • mkdir -p /u01/app/oracle/product/10.2.0
    • On HOST2
      • mkdir -p /u01/app/crs/product/10.2.0
      • mkdir -p /u01/app/asm/product/10.2.0
      • mkdir -p /u01/app/oracle/product/10.2.0
  12. Ensure those in charge of the SAN have correctly labelled the LUNs and that they present identically from each node.
  13. Change the LUN permissions:
    • On HOST1
      • su - root
      • <root password>
      • chown oracle:dba /dev/rdsk/emcpower0b (Voting Disk)
      • chmod 644 /dev/rdsk/emcpower0b
      • chown oracle:dba /dev/rdsk/emcpower0a (OCR Disk)
      • chmod 640 /dev/rdsk/emcpower0a
      • chown oracle:dba /dev/rdsk/<the other allocated raw disks>
      • chmod 660 /dev/rdsk/<the other allocated raw disks>
      • exit
    • On HOST2
      • su - root
      • <root password>
      • chown oracle:dba /dev/rdsk/emcpower0b (Voting Disk)
      • chmod 644 /dev/rdsk/emcpower0b
      • chown oracle:dba /dev/rdsk/emcpower0a (OCR Disk)
      • chmod 640 /dev/rdsk/emcpower0a
      • chown oracle:dba /dev/rdsk/<the other allocated raw disks>
      • chmod 660 /dev/rdsk/<the other allocated raw disks>
      • exit
  14. Check for existence of required packages:
    • On HOST1
      • su - root
      • <root password>
      • pkginfo -i SUNWarc SUNWbtool SUNWhea SUNWlibC SUNWlibm SUNWlibms SUNWsprot SUNWtoo SUNWxwfnt
      • exit
    • On HOST2
      • su - root
      • <root password>
      • pkginfo -i SUNWarc SUNWbtool SUNWhea SUNWlibC SUNWlibm SUNWlibms SUNWsprot SUNWtoo SUNWxwfnt
      • exit
  15. Set the UDP network parameters (a persistence sketch appears after this step list):
    • On HOST1
      • su - root
      • <root password>
      • ndd -set /dev/udp udp_xmit_hiwat 65536
      • ndd -set /dev/udp udp_recv_hiwat 65536
      • exit
    • On HOST2
      • su - root
      • <root password>
      • ndd -set /dev/udp udp_xmit_hiwat 65536
      • ndd -set /dev/udp udp_recv_hiwat 65536
      • exit
  16. Check java version:
    • On HOST1
      • su - root
      • <root password>
      • java -version
      • exit
    • On HOST2
      • su - root
      • <root password>
      • java -version
      • exit
  17. Configure the DISPLAY variable, if not already set.
  18. Start x-server, if not already running.
  19. Start CRS installation…
  20. Run runInstaller and follow the instructions as demonstrated in the Example CRS Install section below.
    • On HOST1
      • export ORACLE_BASE=/u01/app/oracle
      • export ORACLE_HOME=/u01/app/crs/product/10.2.0
      • cd /u02/SOFTWARE/CLUSTER/10.2
      • ./runInstaller
      • Required answers and actions - see the Example CRS Install section for the detailed plan:
        • /u01/app/oracle/oraInventory
        • dba
        • OraCrs10g_home1
        • /u01/app/crs/product/10.2.0
        • CRS_NAME
        • HOST1 ; HOST1-hb ; HOST1-vip
        • HOST2 ; HOST2-hb ; HOST2-vip
        • Public, Private or Do Not Use as required
        • External Redundancy - /dev/rdsk/emcpower0a OCR Disk
        • External Redundancy - /dev/rdsk/emcpower0b Voting Disk
        • After running root.sh scripts on each node, BUT BEFORE continuing with GUI run:
          • vipca
  21. Configure the DISPLAY variable, if not already set.
  22. Start x-server, if not already running.
  23. Start CRS Patch installation…
  24. Run runInstaller and follow the instructions as demonstrated in the Example CRS Patch section below.
    • On HOST1
      • export ORACLE_BASE=/u01/app/oracle
      • export ORACLE_HOME=/u01/app/crs/product/10.2.0
      • cd /u02/SOFTWARE/CLUSTER/PATCHES/10.2.0.4/Disk1
      • ./runInstaller
      • Required answers and actions - see the Example CRS Patch section for the detailed plan:
        • OraCrs10g_home1
        • /u01/app/crs/product/10.2.0
        • Stop crs and run root.sh on each node.
  25. CRS has now been installed and patched - verify with the sketches below, then go to Part 2.
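
Once step 7 is complete, user equivalence can be verified with a short loop. This is a minimal sketch assuming the worked-example host names; each call should print the remote date without prompting for a password or passphrase (and without any banner output, or the installer's checks will fail):

#!/bin/ksh
# Verify user equivalence - run as oracle on HOST1, then again on HOST2.
# Every call must complete without a password, passphrase or extra output.
for H in HOST1 HOST2
do
    ssh $H date
done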
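
Note that the ndd settings applied in step 15 do not persist across a reboot. One common approach, sketched below, is to re-apply them from a run-control script on each node; the script name and location are assumptions, not part of the original build:

#!/sbin/sh
# Suggested contents of /etc/rc2.d/S99ndd (hypothetical name) on each node.
# Re-applies the Oracle RAC UDP buffer settings at boot time.
ndd -set /dev/udp udp_xmit_hiwat 65536
ndd -set /dev/udp udp_recv_hiwat 65536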
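
Before moving on to Part 2, the cluster state and patch level can be confirmed. A minimal sketch, run as oracle from either node:

# Confirm the cluster stack is healthy on this node.
/u01/app/crs/product/10.2.0/bin/crsctl check crs

# Confirm both nodes are registered members of the cluster.
/u01/app/crs/product/10.2.0/bin/olsnodes -n

# Confirm the patched Clusterware software version (expect 10.2.0.4).
/u01/app/crs/product/10.2.0/bin/crsctl query crs softwareversion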

REGRESSION

If a complete return to the beginning is required, the following steps will regress the installation of any software and file configurations (a header-zeroing sketch follows the node lists):

  • On HOST1
    • Stop any software that may be running: RDBMS instances, ASM instances, listeners, nodeapps and/or clusterware.
      • The simplest method may be to log on as root and issue the /u01/app/crs/product/10.2.0/bin/crsctl stop crs command, which will shut down the entire software stack.
    • cd $HOME
    • rm -rf .ssh
    • su - root
    • <root password>
    • dd if=/dev/zero of=/dev/rdsk/emcpower0a bs=8192 count=2560 - zero the OCR disk header
    • dd if=/dev/zero of=/dev/rdsk/<any previously used ASM disks> bs=8192 count=2560 - zero the ASM disk headers
      • It is not necessary to zero the voting disk, password file disks or spfile disks, as they will be overwritten by the next install.
    • cd /u01/app
    • rm -rf oracle crs asm
    • cd /etc/init.d
    • rm -rf init.cssd init.crs init.crsd init.evmd
    • rm /etc/rc0.d/K96init.crs
    • rm /etc/rc1.d/K96init.crs
    • rm /etc/rc2.d/K96init.crs
    • rm /etc/rc3.d/K96init.crs
    • rm /etc/rcS.d/K96init.crs
    • rm /etc/rc3.d/S96init.crs
    • cd /var/opt/oracle
    • rm -rf *
    • rm /etc/inittab.crs
    • cp /etc/inittab.orig /etc/inittab
    • cd /usr/bin
    • mv rcp.orig rcp
    • mv rsh.orig rsh
  • On HOST2
    • Stop any software that may be running: RDBMS instances, ASM instances, listeners, nodeapps and/or clusterware.
      • The simplest method may be to log on as root and issue the /u01/app/crs/product/10.2.0/bin/crsctl stop crs command, which will shut down the entire software stack.
    • cd $HOME
    • rm -rf .ssh
    • su - root
    • <root password>
    • cd /u01/app
    • rm -rf oracle crs asm
    • cd /etc/init.d
    • rm -rf init.cssd init.crs init.crsd init.evmd
    • rm /etc/rc0.d/K96init.crs
    • rm /etc/rc1.d/K96init.crs
    • rm /etc/rc2.d/K96init.crs
    • rm /etc/rc3.d/K96init.crs
    • rm /etc/rcS.d/K96init.crs
    • rm /etc/rc3.d/S96init.crs
    • cd /var/opt/oracle
    • rm -rf *
    • rm /etc/inittab.crs
    • cp /etc/inittab.orig /etc/inittab
    • cd /usr/bin
    • mv rcp.orig rcp
    • mv rsh.orig rsh
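
The disk-header zeroing from the HOST1 list above can be wrapped in a small loop. This is a sketch only; the device list is an assumption and must be adjusted to match the disks the installation actually used:

#!/bin/ksh
# Run as root on HOST1 - zero the OCR and ASM disk group headers.
# Adjust the device list to match the disks used by the installation.
for DEV in emcpower0a emcpower3a emcpower4a
do
    dd if=/dev/zero of=/dev/rdsk/$DEV bs=8192 count=2560
done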

DESIGNING A NETWORK LAYOUT

In order to build a RAC environment, there must be sufficient network cards and IP addresses configured and assigned to the machines taking part in the cluster. Some servers make use of the Internet Protocol Multipathing (IPMP) facility, which gives resilience to the network cards; an example IPMP configuration sketch follows the LAN lists below.

Therefore, when designing a RAC network, the following need to be requested - for each server in the cluster:

APPLICATION LAN

  • With IPMP
    • 2 Network Cards
    • 2 Production IP Addresses - pre-bound before RAC installation commences.
    • 1 Production IPMP IP Address - pre-bound before RAC installation commences.
    • 1 Oracle VIP IP Address
  • Without IPMP
    • 1 Network Card
    • 1 Production IP Address - pre-bound before RAC installation commences.
    • 1 Oracle VIP IP Address

HEART-BEAT LAN

(Note: This must be a dedicated private LAN)

  • With IPMP
    • 2 Network Cards
    • 2 Heart-Beat IP Addresses - pre-bound before RAC installation commences.
    • 1 Heart-Beat IPMP IP Address - pre-bound before RAC installation commences.
  • Without IPMP
    • 1 Network Card
    • 1 Heart-Beat IP Address - pre-bound before RAC installation commences.

ALOM LAN

  • 1 Network Card
  • 1 ALOM IP Address - pre-bound before RAC installation commences.

BACKUP & MANAGEMENT LAN

  • 1 Network Card
  • 1 Backup & Management IP Address - pre-bound before RAC installation commences.
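
For the With-IPMP cases above, Solaris 10 typically builds the group from /etc/hostname.<interface> files. The fragment below is a hedged sketch of a probe-based two-card group for the Application LAN on HOST1; the interface names (ce0, ce1), the group name and the test host names are assumptions and must match the site's own network plan:

# /etc/hostname.ce0 - primary card (names are assumptions):
HOST1 netmask + broadcast + group prod_ipmp up \
addif HOST1-ce0-test deprecated -failover netmask + broadcast + up

# /etc/hostname.ce1 - second card in the same IPMP group:
HOST1-ce1-test netmask + broadcast + deprecated -failover group prod_ipmp standby up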

DESIGNING A SAN LAYOUT

In order to build a RAC environment, a number of disk components have to be pre-configured.

Therefore, when designing a RAC SAN, the following, or similar, should be requested (a size-check sketch follows the Raw Devices list):

UFS MOUNT POINTS

For each server in the cluster

  • /u01 - To store all CRS, ASM and Database binaries.
    • At least 20 GB
  • /u02 - To store all installation software and data exports.
    • At least 40 GB

RAW DEVICES

One set, concurrently accessible by each server in the cluster

Oracle Cluster Registry : At least 200 MB
Voting Disk : At least 40 MB
ASM SPFile : At least 10 MB
ASM Password : At least 2 MB
Database SPFile : At least 10 MB
Database Password : At least 2 MB
Database Primary Disk Group : Size as required
Database Recovery (FRA) Disk Group : Size as required
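
The slices actually allocated can be checked against these minimums with prtvtoc. A minimal sketch assuming the EMC PowerPath device names from the worked example:

#!/bin/ksh
# Run as root - print the VTOC for each raw device in the worked example.
# Compare each slice's sector count against the minimum sizes above.
for DEV in emcpower0a emcpower0b emcpower1a emcpower1b \
           emcpower2a emcpower2b emcpower3a emcpower4a
do
    echo "--- /dev/rdsk/$DEV ---"
    prtvtoc /dev/rdsk/$DEV
done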

FILE LISTINGS

oracle .profile

#----------------------------------------------------------------------
# Configure Terminal Settings.
#----------------------------------------------------------------------

stty susp ^Z
stty quit ^C
stty erase
export TERM=vt100-w
export ORACLE_TERM=vt100

#----------------------------------------------------------------------
# Configure Shell Settings.
#----------------------------------------------------------------------

set -o vi
export PATH=/bin:/usr/sbin:/usr/bin:/usr/local/bin:/usr/ccs/bin:$PATH
export EDITOR=vi
export HOSTNAME=`hostname`
export PS1='$LOGNAME@$HOSTNAME:$ORACLE_SID> '
export TMPDIR=/tmp
export TEMP=/tmp
umask 022

#----------------------------------------------------------------------
# Configure Oracle Settings.
#----------------------------------------------------------------------

export ORACLE_BASE=/u01/app/oracle
export SQLPATH=$ORACLE_BASE/DBA/SQL
export ORACLE_HOME=$ORACLE_BASE/product/10.2.0
export TNS_ADMIN=$ORACLE_HOME/network/admin
export ORA_CRS_HOME=/u01/app/crs/product/10.2.0
export OPATCH_LIB=/u02/SOFTWARE/OPATCH/OPatch
export PATH=$ORACLE_HOME/bin:$PATH:$ORA_CRS_HOME/bin:$OPATCH_LIB

#----------------------------------------------------------------------
# Configure Netbackup Settings.
#----------------------------------------------------------------------

export NBU=/opt/openv/netbackup/ext/db_ext/oracle

#----------------------------------------------------------------------
# Configure Aliases.
#----------------------------------------------------------------------

alias ll="ls -lha"
alias bdf="df -h"
alias bdfasm="/u01/app/oracle/DBA/SCRIPTS/bdfasm.ksh"
alias CSTATP='$ORA_CRS_HOME/bin/crs_stat -p'
alias CSTATT='$ORACLE_BASE/DBA/SCRIPTS/crsstat.ksh'
alias CCTL='$ORA_CRS_HOME/bin/crsctl'
alias SCTL='$ORA_CRS_HOME/bin/srvctl'
alias OLOG='cd $ORACLE_HOME/log/$HOSTNAME; pwd; ls -lahtr'
alias CLOG='cd $ORA_CRS_HOME/log/$HOSTNAME; pwd; ls -lahtr'

/etc/hosts

127.0.0.1     localhost
192.168.1.1   HOST1
192.168.1.2   HOST1-vip
192.168.1.3   HOST2
192.168.1.4   HOST2-vip
192.168.2.1   HOST1-hb
192.168.2.2   HOST2-hb
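
The ping check from step 4 of the step-by-step guide can be run against this file as follows. This is a minimal sketch assuming the worked-example names; before installation, every name bar the Oracle VIPs should respond:

#!/bin/ksh
# Ping each RAC host name with a 5 second timeout.
# The Oracle VIPs are expected NOT to respond before installation.
for H in HOST1 HOST1-vip HOST1-hb HOST2 HOST2-vip HOST2-hb
do
    if ping $H 5 > /dev/null 2>&1
    then
        echo "$H : responding"
    else
        echo "$H : NOT responding"
    fi
done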

EXAMPLE CRS INSTALL


CRSInstall_01.jpg
  • Click Next.

CRSInstall_02.jpg
  • Enter full path of the oraInventory directory.
  • Enter operating system group name - should be dba.
  • Click Next.

CRSInstall_03.jpg
  • Enter Name of the CRS_HOME.
    • If unsure, check the value by clicking Installed Products.
  • Enter full Path of the CRS_HOME directory.
  • Click Next.

CRSInstall_04.jpg
  • Ensure there are 0 requirements to be verified.
  • Click Next.

CRSInstall_05.jpg
  • Use Add, Edit, or Remove to configure the Public, Private and Virtual node names for each member of the cluster.
    • The following screen shows an example of the Add facility.
  • When all node names have been configured, click Next.

CRSInstall_06.jpg
  • Enter Public Node Name.
  • Enter Private Node Name.
  • Enter Virtual Node Name.
  • Click OK.

CRSInstall_07.jpg
  • Use Edit to configure the Public, Private and Do Not Use network interfaces of the cluster.
    • The following screen shows an example of the Edit facility.
  • When all interfaces have been configured, click Next.

CRSInstall_08.jpg
  • Click Public, Private or Do Not Use.
  • Click OK.

CRSInstall_09.jpg
  • Click External Redundancy.
  • Enter OCR Location.
  • Click Next.

CRSInstall_10.jpg
  • Click External Redundancy.
  • Enter Voting Disk Location.
  • Click Next.

CRSInstall_11.jpg
  • Review Summary.
  • Click Install.

CRSInstall_12.jpg
  • Wait for approximately 20 mins…

CRSInstall_13.jpg
  • Run the root.sh scripts as instructed, using the default answers.
  • DO NOT PRESS OK YET
  • If non-routable IP addresses have been used for the Public addresses (e.g. in the 10. or 192.168. ranges), then it is necessary to run VIPCA manually.
    • Using a different terminal to the install, log on to HOST1 as root.
    • cd /u01/app/crs/product/10.2.0/bin
    • ./vipca -silent -nodelist HOST1,HOST2 -nodevips HOST1/HOST1-vip,HOST2/HOST2-vip
  • Now click OK.
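
Once VIPCA has completed, the node applications (VIP, GSD and ONS) can be checked before clicking OK. A short sketch using the worked-example host names, run as oracle from either node:

/u01/app/crs/product/10.2.0/bin/srvctl status nodeapps -n HOST1
/u01/app/crs/product/10.2.0/bin/srvctl status nodeapps -n HOST2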

CRSInstall_15.jpg
  • Click Next.

CRSInstall_16.jpg
  • Click Exit.

CRSInstall_17.jpg
  • Click Yes.

EXAMPLE CRS PATCH INSTALL


CRSPatch_01.jpg
  • Click Next.

CRSPatch_02.jpg
  • Enter Name of the CRS_HOME.
    • If unsure, check the value by clicking Installed Products.
  • Enter full Path of the CRS_HOME directory.
  • Click Next.

CRSPatch_03.jpg
  • Click Next.

CRSPatch_04.jpg
  • Ensure there are 0 requirements to be verified.
  • Click Next.

CRSPatch_05.jpg
  • Review Summary.
  • Click Install.

CRSPatch_06.jpg
  • Wait for approximately 10 mins…

CRSPatch_07.jpg
  • On each node, as root, run the following, one node at a time:
    • cd /u01/app/crs/product/10.2.0/bin
    • ./crsctl stop crs
    • cd /u01/app/crs/product/10.2.0/install
    • ./root102.sh
  • Click Exit.

CRSPatch_08.jpg
  • Click Yes.
