Thursday, September 26, 2013

How to Perform Pre-Installation Tasks for Oracle RAC Virtual Servers

Information about Pre-Installation Tasks for Oracle RAC Virtual Servers.
This document describes how to perform pre-installation tasks for Oracle RAC Virtual Servers after initial Red Hat Linux OS Installation.
  • These tasks are required before beginning the Oracle RAC Software Install.
  • These include verifying the /u00 filesystems, running the delta scripts for Oracle, connecting the shared storage, configuring raw devices, cloning the base node, and finally setting up ssh for the oracle user account.
  • When completed, continue on to the Linux Oracle RAC Installation Guide.



Document Creator: Todd Walters


Overview Information


  1. Virtual Servers must be built as described in How to Install Linux Oracle RAC Virtual Server guide.
  2. The tasks are divided into 6 sections that must be completed in order.
  3. The /u00 filesystem is now created with Kickstart, but verify that it is correct and listed in /etc/fstab.
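Item 3 can be checked from the shell before going further; a quick sketch (the /u00 mount point name comes from this guide):

```shell
# Confirm the Kickstart-created /u00 filesystem is in /etc/fstab and
# mounted; prints a warning instead if it is missing.
if grep -q '/u00' /etc/fstab; then
  df -h /u00
else
  echo 'WARNING: /u00 not found in /etc/fstab'
fi
```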
Section Info
  • Section 1:  Create script to configure Oracle Account and Group
  • Section 2:  Create and Add the Virtual Shared Storage Virtual Disks in ESXi Infrastructure Client.
  • Section 3:  Configure Rawdevices for Voting and OCR Disks
  • Section 4:  Clone Virtual Server Node 1 to create additional Nodes
  • Section 5:   SSH-Key Configuration for Oracle User
  • Section 6:  Go to : Linux Oracle RAC Installation Guide
Section 1:  Begin Delta Script for Oracle Servers to Modify Kernel and Install Oracle User
  1. Create an Oracle delta script for the Oracle servers.
    • This script does the following:
      • Creates oracle account and dba group.
      • Modifies kernel tuning parameters (requires reboot) - Review script to see kernel modifications.
      • Creates a set of empty files owned by oracle:dba (/etc/listener.ora, for example)
  2. Change Oracle User Account Password
    • [root@srv0dbxd11 oracle]# passwd oracle
      Changing password for user oracle.
      New UNIX password:
      Retype new UNIX password:
      passwd: all authentication tokens updated successfully.
  3. Change ownership of the filesystems to oracle:dba after running the oracle delta script.
    • [root@srv0dbx03 /]# chown -R oracle:dba /u00
  4. Copy Oracle User Environment to /export/home/oracle
    • [root@srv0dbx02 /]# cd /home/admins/ora-inst/oracle
      [root@srv0dbx02 /]# cp -vaR . /export/home/oracle
  5. Edit /export/home/oracle/.rhosts file to match Nodes you will build:
    • Create the .rhosts file if it doesn't exist (touch /export/home/oracle/.rhosts) and add the node information:
      > more .rhosts
      srv0dbxd11 oracle
      srv0dbxd12 oracle
      srv0dbxd13 oracle
      srv0dbxd11-vip oracle
      srv0dbxd12-vip oracle
      srv0dbxd13-vip oracle
      srv0dbxd11-ic oracle
      srv0dbxd12-ic oracle
      srv0dbxd13-ic oracle
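For reference, a hypothetical sketch of the kind of delta script step 1 refers to. The real script ships with the ora-inst tooling; every ID, kernel value, and filename below is an assumption, shown only to illustrate the script's shape (this writes the sketch to /tmp for review rather than running it):

```shell
# Write a sketch of the Oracle delta script to /tmp for review.
# All names and values here are placeholders, not the site's real script.
cat > /tmp/ora-delta.sh <<'EOF'
#!/bin/sh
# Create the dba group and oracle account
groupadd dba
useradd -g dba -d /export/home/oracle -s /bin/bash oracle

# Kernel tuning parameters (a reboot picks these up)
cat >> /etc/sysctl.conf <<'SYSCTL'
kernel.shmmax = 2147483648
kernel.sem = 250 32000 100 128
fs.file-max = 65536
net.ipv4.ip_local_port_range = 1024 65000
SYSCTL

# Pre-create empty Oracle config files owned by oracle:dba
touch /etc/listener.ora /etc/oratab
chown oracle:dba /etc/listener.ora /etc/oratab
EOF
chmod +x /tmp/ora-delta.sh
```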


  
Section 2: Begin ESXi Shared Storage Configuration 
  1. Login to ESXi host via Putty
  2. Change directory to the datastore where the Shared Storage will reside and make a shared directory:
    • cd /vmfs/volumes/d0esxd02vmfs03/
    • /vmfs/volumes/d0esxd02vmfs03/ # mkdir shared
  3. Create the Three Voting Shared Virtual Disks First
    vmkfstools -c <size> -a lsilogic -d eagerzeroedthick /vmfs/volumes/<mydir>/<myDisk>.vmdk
    Example: vmkfstools -c 10G -a lsilogic -d eagerzeroedthick /vmfs/volumes/shared/asm01.vmdk
    • /vmfs/volumes/4995670b-6d7bfc7d-392b-0019b9c871d8/shared # vmkfstools -c 250m -a lsilogic -d eagerzeroedthick /vmfs/volumes/d0esxd01vmfs02/shared/voting1.vmdk
      Creating disk '/vmfs/volumes/d0esxd01vmfs02/shared/voting1.vmdk' and zeroing it out...
      Create: 100% done.
    • Continue with each additional disk:
    • # vmkfstools -c 250m -a lsilogic -d eagerzeroedthick /vmfs/volumes/d0esxd01vmfs02/shared/voting2.vmdk
    • # vmkfstools -c 250m -a lsilogic -d eagerzeroedthick /vmfs/volumes/d0esxd01vmfs02/shared/voting3.vmdk
  4. Create the two OCR Shared Virtual Disks
    • vmfs/volumes/4995670b-6d7bfc7d-392b-0019b9c871d8/shared # vmkfstools -c 250m -a lsilogic -d eagerzeroedthick /vmfs/volumes/d0esxd01vmfs02/shared/ocr1.vmdk
    •  vmkfstools -c 250m -a lsilogic -d eagerzeroedthick /vmfs/volumes/d0esxd01vmfs02/shared/ocr2.vmdk
  5. Create the five Shared Virtual Disks to be used by ASM (set size as needed)
    • vmkfstools -c 5g -a lsilogic -d eagerzeroedthick /vmfs/volumes/d0esxd01vmfs02/shared/asm1.vmdk
    • vmkfstools -c 5g -a lsilogic -d eagerzeroedthick /vmfs/volumes/d0esxd01vmfs02/shared/asm2.vmdk
    • vmkfstools -c 5g -a lsilogic -d eagerzeroedthick /vmfs/volumes/d0esxd01vmfs02/shared/asm3.vmdk
    • vmkfstools -c 5g -a lsilogic -d eagerzeroedthick /vmfs/volumes/d0esxd01vmfs02/shared/asm4.vmdk
    • vmkfstools -c 5g -a lsilogic -d eagerzeroedthick /vmfs/volumes/d0esxd01vmfs02/shared/asm5.vmdk 
  6. Display Shared storage flat files:
    # ls -lh | grep flat
    -rw-------    1 root     root         5.0G Jun  1 22:17 asm1-flat.vmdk
    -rw-------    1 root     root         5.0G Jun  1 22:18 asm2-flat.vmdk
    -rw-------    1 root     root         5.0G Jun  1 22:19 asm3-flat.vmdk
    -rw-------    1 root     root         5.0G Jun  1 22:20 asm4-flat.vmdk
    -rw-------    1 root     root         5.0G Jun  1 22:22 asm5-flat.vmdk
    -rw-------    1 root     root       250.0M Jun  1 22:10 voting1-flat.vmdk
    -rw-------    1 root     root       250.0M Jun  1 22:10 voting2-flat.vmdk
    -rw-------    1 root     root       250.0M Jun  1 22:11 voting3-flat.vmdk
    -rw-------    1 root     root       250.0M Jun  1 22:14 ocr1-flat.vmdk
    -rw-------    1 root     root       250.0M Jun  1 22:43 ocr2-flat.vmdk
  7. Power Off the Virtual Server,  Node 1 (srv0dbxd11) in Infrastructure Client.
  8. Add Virtual Shared disks to node in VMWare Infrastructure Client
    • Open Edit settings for Node 1 and on Virtual Machine Properties Box select add
    • On Add Hardware Wizard dialog box, Select Hard Disk and click Next
    • On Select a Disk page, choose Use an existing virtual disk and click Next
    • On Select Existing Disk page, click on Browse and browse to datastore where Shared Virtual Disks Were Created.
    • Choose [d0esxd01vmfs02] shared/voting1.vmdk and click Next
    • On  Specify Advanced Options page, under Virtual Device Node choose SCSI 1:0 and click Next
    • Click  Finish to complete adding the Virtual Disk.
    • Select Disk you just added and make sure to Set Mode to Independent > Persistent
    • Repeat the steps above to add each disk for Voting, OCR, and ASM, incrementing the SCSI ID each time (e.g., SCSI 1:1, 1:2, ...).
    • Important: On Virtual Machine Properties Page choose SCSI Controller 1 and make sure to check Virtual under SCSI Bus Sharing.
    • Click OK to close and power on Node 1. Open a console to configure storage during boot.
    • On the Kudzu new hardware screen, Select Configure, to configure the new LSI Logic Virtual SCSI Controller
    • Login via putty once up.
    • Add Voting, and OCR Virtual Shared Disks as RAW devices in /etc/sysconfig/rawdevices ( see next section below )
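The ten vmkfstools calls in steps 3-5 follow one pattern, so they can be generated in a loop. A sketch that writes the commands to a file for review (once checked, run it on the ESXi host with sh /tmp/mkdisks.sh; the datastore path is the one used in the examples above):

```shell
# Generate the shared-disk creation commands instead of typing each one.
DS=/vmfs/volumes/d0esxd01vmfs02/shared
{
  # Voting and OCR disks are 250 MB each
  for d in voting1 voting2 voting3 ocr1 ocr2; do
    echo "vmkfstools -c 250m -a lsilogic -d eagerzeroedthick $DS/$d.vmdk"
  done
  # ASM disks are 5 GB each (set size as needed)
  for i in 1 2 3 4 5; do
    echo "vmkfstools -c 5g -a lsilogic -d eagerzeroedthick $DS/asm$i.vmdk"
  done
} > /tmp/mkdisks.sh
```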
Vendor Links
See also this guide for Shared Storage: How Shared Storage Works on VMware
Section 3:  Begin Configure of Partitions and RAW Devices
  1. Login to srv0dbxd11 or Node 1 and Verify you can see all the new Virtual Shared Storage Disks
    • [root@srv0dbxd11 ~]# fdisk -l | grep /dev/sd
      Disk /dev/sdb doesn't contain a valid partition table
      Disk /dev/sdc doesn't contain a valid partition table
      Disk /dev/sdd doesn't contain a valid partition table
      Disk /dev/sde doesn't contain a valid partition table
      Disk /dev/sdf doesn't contain a valid partition table
      Disk /dev/sdg doesn't contain a valid partition table
      Disk /dev/sdh doesn't contain a valid partition table
      Disk /dev/sdi doesn't contain a valid partition table
      Disk /dev/sdj doesn't contain a valid partition table
      Disk /dev/sdk doesn't contain a valid partition table
    • Note: /dev/sdb - /dev/sdf should be for OCR and Voting and /dev/sdg - /dev/sdk for ASM
  2. Next, create one partition on each disk using fdisk:
    • [root@srv0dbxd11 ~]# fdisk /dev/sdb
    • Choose n > p > 1 > default > default > w
      • fdisk  definitions
        m to show help
        p to show partition table
        d to delete a partition
        n to create a new partition
        w to write the partition table
    • Repeat for each disk device, /dev/sdb through /dev/sdk

       
  3. Run Partprobe to read new partition table into kernel without rebooting.
    • [root@srv0dbxd11 ~]# partprobe
  4. Run fdisk -l again to verify all partitions are shown. You should see every device with its partition 1:
    • [root@srv0dbxd11 ~]# fdisk -l | grep sd[a-z]1
      /dev/sda1   *           1          33      265041   83  Linux
      /dev/sdb1               1         250      255984   83  Linux
      /dev/sdc1               1         250      255984   83  Linux
      /dev/sdd1               1         250      255984   83  Linux
      /dev/sde1               1         250      255984   83  Linux
      /dev/sdf1               1         250      255984   83  Linux
      /dev/sdg1               1        9137    73392921   83  Linux
      /dev/sdh1               1        9137    73392921   83  Linux
      /dev/sdi1               1        9137    73392921   83  Linux
  5. Change directory to /etc/sysconfig and edit the rawdevices file.
    • cd /etc/sysconfig and edit the 'rawdevices' file with vi to match below.
    • # Voting Disks
      /dev/raw/raw1   /dev/sdb1
      /dev/raw/raw2   /dev/sdc1
      /dev/raw/raw3   /dev/sdd1
      # OCR disks
      /dev/raw/raw4   /dev/sde1
      /dev/raw/raw5   /dev/sdf1
  6. Start rawdevices:
    [root@srv0dbxd12 ~]# service rawdevices start
    Assigning devices:
               /dev/raw/raw1  -->   /dev/sdb1
    /dev/raw/raw1:  bound to major 8, minor 17
               /dev/raw/raw2  -->   /dev/sdc1
    /dev/raw/raw2:  bound to major 8, minor 33
               /dev/raw/raw3  -->   /dev/sdd1
    /dev/raw/raw3:  bound to major 8, minor 49
               /dev/raw/raw4  -->   /dev/sde1
    /dev/raw/raw4:  bound to major 8, minor 65
               /dev/raw/raw5  -->   /dev/sdf1
    /dev/raw/raw5:  bound to major 8, minor 81
    done
  7. Check Status of raw devices
[root@srv0dbxd12 ~]# service rawdevices status
/dev/raw/raw1:  bound to major 8, minor 17
/dev/raw/raw2:  bound to major 8, minor 33
/dev/raw/raw3:  bound to major 8, minor 49
/dev/raw/raw4:  bound to major 8, minor 65
/dev/raw/raw5:  bound to major 8, minor 81

 
  8. Verify rawdevices is set to start at boot time.

    [root@srv0dbx02 sysconfig]# chkconfig --list rawdevices
    rawdevices      0:off   1:off   2:on    3:on    4:on    5:on    6:off
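Partitioning ten disks by hand is error-prone, so steps 2-3 can be scripted. A sketch that writes the fdisk keystroke sequence for each device into a review file first, since repartitioning is destructive (the device names are the ones this guide assumes):

```shell
# Generate one fdisk invocation per shared disk; review /tmp/partition-disks.sh
# and verify the device names before running it as root on node 1.
{
  echo '#!/bin/sh'
  for dev in /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf \
             /dev/sdg /dev/sdh /dev/sdi /dev/sdj /dev/sdk; do
    # Keystrokes: n (new), p (primary), 1, default start, default end, w (write)
    printf 'printf "n\\np\\n1\\n\\n\\nw\\n" | fdisk %s\n' "$dev"
  done
  echo 'partprobe'   # re-read partition tables without a reboot
} > /tmp/partition-disks.sh
```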
Section 4:  Begin Clone of Virtual RAC Node 1 Server


  1. Power off srv0dbxd11 (Node 1) from the terminal.
    • node 1#:  shutdown -h now
  2. SSH into ESXi host via Putty
  3. Change Directory to Guest DataStore Location
    • /vmfs/volumes # cd /vmfs/volumes/s0esxd01vmfs01
  4. Create a Virtual Machine Directory in this datastore, for next node.
    • # mkdir srv0dbxd12
  5. Run vmkfstools to clone the Virtual Machine you want.
    • /vmfs/volumes/4995670b-6d7bfc7d-392b-0019b9c871d8 # vmkfstools -i /vmfs/volumes/d0esxd01vmfs02/srv0dbxd11/srv0dbxd11.vmdk /vmfs/volumes/d0esxd01vmfs02/srv0dbxd12/srv0dbxd12.vmdk
      Destination disk format: VMFS thick
      Cloning disk '/vmfs/volumes/d0esxd01vmfs02/srv0dbxd11/srv0dbxd11.vmdk'...
      Clone: 0% done.
      Clone: 3% done.
  6. Create a new Virtual Machine based on this clone
    • Create New Virtual Machine following How to Install Linux Oracle RAC Virtual Server guidelines for CPU, NIC, Memory.
    • Choose 'Custom' at the Select Appropriate Configuration dialog box and click Next
    • Choose 'Datastore' and configure the Guest OS, Virtual CPUs, Memory, NICs, and LSI Logic Storage Adapter.
    • When you get to the 'Select a Disk' page, choose Use an existing virtual disk, browse to the datastore, and select the newly cloned vmdk file (node 2).
    • Leave  Virtual Device Node to SCSI 0:0 and click Next
    • Click Finish to complete the configuration
  7. Click on Edit Virtual Machine Settings
  8. To add all the ASM, OCR, and Voting Disk Shared Storage that you created for node 1 we will copy the scsi1 Controller section from the node1.vmx file and then edit the node2.vmx and node3.vmx file. This is the 'quick' way to add additional shared hard drives to cloned vm.
    • First Method
      • Change directory to /vmfs/volume/datastore-location-of-node1/node1
      • ~ # cd /vmfs/volumes/lesxit01-vmfs02/srv0dbxt11
      • cat the node1.vmx file
      • /vmfs/volumes/4a9fa3b7-1db9846a-da66-0026b927df07/srv0dbxt11 # cat srv0dbxt11.vmx
      • Highlight (in putty) to copy the sections beginning with:
      • scsi1.present = "TRUE" and copy to scsi1.pciSlotNumber = "35"
      • the scsi1 denotes SCSI Controller 1
      • Change directory to Node 2
      • /vmfs/volumes/4a9fa3b7-1db9846a-da66-0026b927df07/srv0dbxt11 # cd ../srv0dbxt12
      • Vi (edit) the node2.vmx file and paste at end of the file where node2 = hostname. i.e. srv0dbxt12.vmx
    • 2nd (Faster - preferred) Method
      • Change directory to /vmfs/volume/datastore-location-of-node1/node1
      • From the node1 location, type: grep scsi1 srv0dbxt11.vmx > scsi1
      • Change directory to node 2:  cd ../srv0dbxt12
      • Copy the scsi1 file to the current directory: cp ../srv0dbxt11/scsi1 .
      • vi srv0dbxt12.vmx
      • Type 'G' to go to the end of the file
      • Type 'o' to open a new line
      • Press 'Esc' to exit insert mode
      • Type ':r scsi1' to read the scsi1 file in after the current line
      • Type ':wq!' to save the file and quit
    • Repeat either 'method' for each additional node.
  9. Power on new cloned Virtual Machine and Open Console in ESXi Infrastructure Client
    • Verify you can see all new hard drives in vSphere.
  10. At the Kudzu Configuration when server is booting up, choose the following:
    • Remove Configuration for both NIC's
    • At the Hardware Added screen choose Configure
    • Configure for eth1 first, which would be the 192.168.0.x IP
    • At the 2nd Hardware Added screen choose Configure again
    • Configure this NIC as eth0 or the primary IP.
    • At the LSI SCSI Hardware  screen choose Configure
  11. Server should boot up and then you can ssh into it to verify NIC, Hostname, etc...and change if necessary.
    • Change hostname to 2nd Nodename
      • [root@srv0dbxt11 ~]# hostname srv0dbxt12
      • Edit /etc/sysconfig/network and change hostname
      • Verify interface configuration in /etc/sysconfig/network-scripts for eth0 and eth1
    • Verify the /etc/hosts file is correct.
    • Set MTU for both NIC's
      • [root@srv0dbxd12 network-scripts]# ip link list
        1: lo: <LOOPBACK,UP> mtu 16436 qdisc noqueue
            link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
        2: eth0: <BROADCAST,MULTICAST,UP> mtu 1500 qdisc pfifo_fast qlen 1000
            link/ether 00:0c:29:6d:d9:31 brd ff:ff:ff:ff:ff:ff
        3: eth1: <BROADCAST,MULTICAST,UP> mtu 1500 qdisc pfifo_fast qlen 1000
            link/ether 00:0c:29:6d:d9:3b brd ff:ff:ff:ff:ff:ff
        4: sit0: <NOARP> mtu 1480 qdisc noop
            link/sit 0.0.0.0 brd 0.0.0.0
        [root@srv0dbxd12 network-scripts]# ip link set dev eth0 mtu 1392
        [root@srv0dbxd12 network-scripts]# ip link set dev eth1 mtu 1392
        [root@srv0dbxd12 network-scripts]# ip link list
        1: lo: <LOOPBACK,UP> mtu 16436 qdisc noqueue
            link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
        2: eth0: <BROADCAST,MULTICAST,UP> mtu 1392 qdisc pfifo_fast qlen 1000
            link/ether 00:0c:29:6d:d9:31 brd ff:ff:ff:ff:ff:ff
        3: eth1: <BROADCAST,MULTICAST,UP> mtu 1392 qdisc pfifo_fast qlen 1000
            link/ether 00:0c:29:6d:d9:3b brd ff:ff:ff:ff:ff:ff
        4: sit0: <NOARP> mtu 1480 qdisc noop
            link/sit 0.0.0.0 brd 0.0.0.0
      • Set MTU=1392 in /etc/sysconfig/network-scripts/ifcfg-eth[0-1]
    • Check rawdevices are correct with service rawdevices status
    • Verify with fdisk you can see all the Virtual Shared Storage.
  • Make sure both nodes are powered on and continue to next section.
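Both methods in step 8 amount to appending node 1's scsi1* lines to the clone's .vmx file. The same idea, demonstrated on stand-in files under /tmp so it is safe to run anywhere; on the ESXi host the grep line runs against the real datastore paths, and the sample vmx contents below are invented for the demo:

```shell
# Stand-in .vmx files (invented contents) to demonstrate the append.
mkdir -p /tmp/demo/srv0dbxt11 /tmp/demo/srv0dbxt12
printf '%s\n' 'scsi0.present = "TRUE"' \
              'scsi1.present = "TRUE"' \
              'scsi1.sharedBus = "virtual"' \
              'scsi1:0.fileName = "/vmfs/volumes/shared/voting1.vmdk"' \
  > /tmp/demo/srv0dbxt11/srv0dbxt11.vmx
printf '%s\n' 'scsi0.present = "TRUE"' > /tmp/demo/srv0dbxt12/srv0dbxt12.vmx

cd /tmp/demo
# Copy only the SCSI Controller 1 entries from node 1 into node 2's .vmx
grep '^scsi1' srv0dbxt11/srv0dbxt11.vmx >> srv0dbxt12/srv0dbxt12.vmx
```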

Section 5: SSH Key Configuration and .rhosts for Oracle User account

SSH Keys Installation Overview
    • The ssh keys are generated on each host.
    • Then on Node 1 you copy the keys from every host to the Authorized Keys file on node 1.
    • Then copy the keys file to a central location to allow easier copying to the other nodes.
    • Finally you copy the Authorized_keys file to each node and place in /export/home/oracle/.ssh
    • Also, permissions on /export/home/oracle cannot be higher than 755.
  1. Login as the Oracle user
     [root@srv0dbxd11 ~]# su - oracle
  2. Create the /export/home/oracle/.ssh directory if it does not exist.
    • mkdir ~/.ssh
    • chmod 700 ~/.ssh
  3. If .ssh already exists, remove the existing authorized_keys file
    • srv0dbxd12 | ORA1020 | /export/home/oracle/.ssh
      > rm authorized_keys
  4. Generate an rsa and a dsa key (do this on each node in the cluster: node 1 first, then node 2, etc.)
    • > /usr/bin/ssh-keygen -t rsa  (use defaults)
    • > /usr/bin/ssh-keygen -t dsa   (use defaults)
  5. Return to Node 1 after creating keys on Node 2 as well.
  6. Change directory to /export/home/oracle/.ssh
    • cd /export/home/oracle/.ssh
  7. Add Keys to authorized_keys file on Node 1 (srv0dbxd11)
    • Example: copy contents of ~/.ssh/id_rsa.pub and ~/.ssh/id_dsa.pub to authorized_keys
      srv0dbxd11 | ORA1020 | /export/home/oracle/.ssh
      > cat id_dsa.pub >> authorized_keys
      srv0dbxd11 | ORA1020 | /export/home/oracle/.ssh
      > cat id_rsa.pub >> authorized_keys
  8. Copy keys from Node 2 to Node1's authorized_keys file
    • srv0dbxd11 | ORA1020 | /export/home/oracle/.ssh
      > ssh 192.168.9.119 cat /export/home/oracle/.ssh/id_dsa.pub >> authorized_keys
    • srv0dbxd11 | ORA1020 | /export/home/oracle/.ssh
      > ssh 192.168.9.119 cat /export/home/oracle/.ssh/id_rsa.pub >> authorized_keys
  9. View authorized_keys file. It should have four entries in it, two for each node.
  10. Repeat step 8 for each additional node in cluster.
  11. SCP Node 1 authorized_keys file to all other nodes in cluster:
    • srv0dbxd11 | ORA1020 | /export/home/oracle/.ssh
      > scp authorized_keys 192.168.9.119:/export/home/oracle/.ssh/
  12. Change Permissions on Oracle users authorized_keys file. (If Necessary)
    • Permissions should be 600 (rw-------) on the authorized_keys file. If not, do this:
      chmod 600 authorized_keys
  13. Test ssh keys from node you will be doing the installation on (You should not be prompted for a password)
    • srv0dbxd11 | ORA1020 | /export/home/oracle/.ssh
      > ssh 192.168.9.119
  14. Mount OracleCD:/export/source on Node 1 as root
    • [root@srv0dbxd11 ~]# mount OracleCD:/export/source /home/source
  15. Test pings to IC network as well before moving on. Fix eth1 if necessary.
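The key exchange in steps 2-12 condenses to a few commands. A sketch run against a scratch directory so it can be tried anywhere; on the cluster the directory is /export/home/oracle/.ssh, the dsa key is generated the same way with -t dsa, and the commented ssh/scp lines use the real node names:

```shell
# Demonstrate the authorized_keys build in a scratch directory.
KEYDIR=/tmp/oracle-ssh-demo
rm -rf "$KEYDIR" && mkdir -p "$KEYDIR" && cd "$KEYDIR"
# Non-interactive equivalent of accepting ssh-keygen's defaults
ssh-keygen -q -t rsa -N '' -f id_rsa
# Node 1 seeds authorized_keys with its own public key...
cat id_rsa.pub > authorized_keys
# ...then appends every other node's keys and pushes the file back out:
#   ssh srv0dbxd12 cat /export/home/oracle/.ssh/id_rsa.pub >> authorized_keys
#   scp authorized_keys srv0dbxd12:/export/home/oracle/.ssh/
chmod 600 authorized_keys   # sshd ignores group/world-writable key files
```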


Section 5: End SSH Keys and .rhosts Section
End Pre-Installation Tasks for Oracle Servers


  1. At this point, hand off to DBA and instruct them to begin Oracle CRS, ASM, and DB Installation:
    Linux Oracle RAC Installation Guide
  2. When the DBA completes the installation, you will then be required to configure ASM storage for RAC.
    Use the How to Configure Oracle ASM Disks for RAC Virtual Servers document to configure ASM.
    Then the DBA will continue with installing the databases.
  3. Finally create Backup Storage using the How to Configure Oracle Backup Disk Storage

