Monday, September 30, 2013

How to Reconfigure ASM Disks on Virtual Servers

This document describes how to reconfigure the ASM disks connected to the dev and test Oracle RAC environments. It also describes how to relocate OCR and Voting disk storage on virtual servers.

Document Creator: Todd Walters
Reason for ASM Reconfigure
  1. The environment was originally built using 15K 73 GB FC drives on an EMC SAN
  2. Dev & Test (the dsan/tsan diskgroups) required more storage
  3. There were empty SATA slots on the DAE
  4. Six 1 TB 7200 RPM SATA drives were ordered
Overview of ASM Reconfigure
  • Install 6 SATA drives using the NaviSphere Service Taskbar
  • Create one RAID 5 raid group, providing 3.5 TB of storage.
    • Current LUNs
      • lun99 for sandbox, lun100 for dev, lun101 for test
  • Attach the raid group as a datastore to each ESXi host (dev/test)
  • Create two 200 GB ASM disks on each datastore in a shared directory
  • Provide the ASM disk location information to the DBA
  • The DBA alters the Dev/Test ASM disk groups, adding the two 200 GB ASM disks.
  • Once ASM rebalances, the DBA drops the four older 55 GB ASM disks.
  • This leaves 400 GB for Dev and 400 GB for Test
  • Schedule downtime to remove the old 55 GB ASM vmdk disks from each virtual server
  • During the downtime, relocate the OCR and Voting storage vmdk files to the new SATA datastore
  • As ASM storage needs increase, add further 200 GB ASM disks
  • Also move the RAC sandbox to the new datastore.
  • Remove the datastore that was using the 15K 73 GB spindles and free those LUNs up for production.
  • Update the SAN Storage Layout diagram and create/update any relevant how-to documentation.
  • See How to Relocate OCR and Voting Disks to move the OCR and Voting files off the old datastore

Part I: How to Add Storage to Existing DAE

  1. Follow the How to Add-Replace Disks on EMC Array guide.
  2. Once the disks are added, create the raid group, bind the LUN, and add it to the correct storage group(s).
  3. For dev/test, we add the new raid group to the esxd02 and esxt01 storage groups, as sketched below.
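
For reference, the array-side steps can also be scripted with Navisphere CLI rather than the GUI (a minimal sketch; the SP address, raid group number, disk positions, and LUN numbers/sizes are placeholders I have assumed, not values from this environment):
    • # Create a RAID 5 raid group from the six new SATA disks (Bus_Enclosure_Disk positions)
      naviseccli -h <sp-ip> createrg 10 1_0_10 1_0_11 1_0_12 1_0_13 1_0_14 1_0_15
      # Bind a LUN on the new raid group (repeat for a second LUN, since a VMFS extent tops out at 2 TB)
      naviseccli -h <sp-ip> bind r5 102 -rg 10 -sq gb -cap 2000
      # Present the LUN to both the dev and test ESXi storage groups
      naviseccli -h <sp-ip> storagegroup -addhlu -gname esxd02 -hlu 2 -alu 102
      naviseccli -h <sp-ip> storagegroup -addhlu -gname esxt01 -hlu 2 -alu 102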
Part II: How to Add DataStore to ESXi Host
  1. Follow Section 2: How to Add Datastore in the How to Configure ESXi using vSphere Client Guide
  2. However, some special steps are required, since we're adding more than 2 TB of storage.
  3. When adding a new datastore over 2 TB, follow this method (a CLI sketch follows the list):
    • Add the first LUN to ESXi as a datastore.
    • Add the 2nd LUN to the 1st datastore by using extents.
    • This provides 3.5 TB of usable capacity for dev & test.
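
The same result can be reached from the ESXi command line instead of the vSphere Client wizard (a hedged sketch; the naa device IDs are placeholders for the two new LUNs):
    • # Create the VMFS3 datastore on the first LUN's partition (the head extent)
      vmkfstools -C vmfs3 -S emc-cx3-1lun102 /vmfs/devices/disks/naa.<lun102-id>:1
      # Span the datastore onto the second LUN, adding it as an extent
      vmkfstools -Z /vmfs/devices/disks/naa.<second-lun-id>:1 /vmfs/devices/disks/naa.<lun102-id>:1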
Part III: Create vmdk files for ASM Disk
  1. Log in to one ESXi host via PuTTY.
  2. Change directory to new datastore
    • ~ # cd /vmfs/volumes/emc-cx3-1lun102/
  3. Create a dev and a test directory
  4. Change into each directory and create two vmdk files (the example below uses the test directory)
    • Syntax: vmkfstools -c <size> -a <adapter> -d <disk type> <location>
    • # vmkfstools -c 1G -a lsilogic -d eagerzeroedthick /vmfs/volumes/emc-cx3-1lun102/shared/test/tasmX.vmdk
      Creating disk '/vmfs/volumes/emc-cx3-1lun102/shared/test/tasmX.vmdk' and zeroing it out...
      Create: 55% done.
  5. Create the number of ASM disks you will need. For dev/test here we begin with two 200 GB vmdks for each environment (see the loop sketch after this list).
    • Example: ls | grep flat
    • tasm1-flat.vmdk
      tasm2-flat.vmdk
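
Creating all four 200 GB disks can also be scripted in one pass (a sketch; the dasm* file names for dev mirror the tasm* pattern above and are my assumption):
    • cd /vmfs/volumes/emc-cx3-1lun102/shared
      mkdir -p dev test
      # two 200 GB eager-zeroed disks per environment
      for i in 1 2; do
        vmkfstools -c 200G -a lsilogic -d eagerzeroedthick dev/dasm${i}.vmdk
        vmkfstools -c 200G -a lsilogic -d eagerzeroedthick test/tasm${i}.vmdk
      done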
Part IV: Reconfiguring ASM Disks on Virtual Servers

Note: In this step we will add the new ASM disks to the virtual machines. This requires each VM to be powered off to add the virtual hard disk. Once the ESXi hosts are licensed, this will not be required.
  1. Coordinate with DBA to relocate services from nodes and then bring that node down.
  2. Power off node 1 first and perform the work. Once it is back online, continue in order (node 2, node 3).
  3. Once the VM is powered off, right-click the VM in vSphere Client and choose Edit Settings
  4. In the Virtual Machine Properties dialog, click Add; in the Add Hardware wizard, select Hard Disk and click Next
  5. On the Select a Disk page, choose Use an existing virtual disk and click Next
  6. On the Select Existing Disk page, click Browse and browse to the datastore where the shared ASM virtual disks were created.
  7. Choose the new ASM disk and click Next
  8. On the Specify Advanced Options page, under Virtual Device Node choose the next SCSI ID on controller SCSI 1 and click Next
  9. Click Finish to complete adding the virtual disk.
  10. Select the disk you just added and make sure to set the mode to Independent > Persistent
  11. Power the virtual machine back on and, once online, ssh to the VM via PuTTY
  12. Run fdisk to view the new disks and determine their device paths
    • [root@srv0dbxd11 ~]# fdisk -l | grep Disk
      Disk /dev/sdl: 214.7 GB, 214748364800 bytes
      Disk /dev/sdm: 214.7 GB, 214748364800 bytes
  13. Repeat steps 3-10 to add a virtual disk for each ASM vmdk, incrementing the SCSI ID each time, e.g. SCSI 1:1, 1:2...
  14. Repeat for each node in the cluster then continue.
  15. Create one partition on each disk using fdisk /dev/sdX (only run on node 1); a scripted sketch follows below.
    • Choose n > p > 1 > default > default > w, then run partprobe once you return to the prompt.
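
The same keystrokes can be fed to fdisk non-interactively (a sketch, assuming the new devices are /dev/sdl and /dev/sdm as above; still node 1 only):
    • # n, p, 1, default, default, w - one primary partition spanning each disk
      for d in /dev/sdl /dev/sdm; do
        printf "n\np\n1\n\n\nw\n" | fdisk $d
      done
      partprobe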
  16. Change directory to /etc/init.d
    • [root@srv0dbxd11 ~]# cd /etc/init.d
  17. The ASM naming convention is L_DASM1, where:
      • L = device letter, i.e. /dev/sdl
      • D = Development
      • ASM# = the ASM disk number, i.e. ASM1
      • so the new ASM disk for /dev/sdl would be L_DASM1
  18. Create the ASM disks using the oracleasm createdisk command (the second disk is shown after the example):
    • Syntax: ./oracleasm createdisk <ASM-DiskName> <Path-To-ASM-Disk-Device>
    • [root@srv0dbxd11 init.d]# ./oracleasm createdisk L_DASM1 /dev/sdl1
      Marking disk "/dev/sdl1" as an ASM disk:                   [  OK  ]
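    • The second disk follows the same convention (/dev/sdm mapping to M_DASM2, per step 17):
      [root@srv0dbxd11 init.d]# ./oracleasm createdisk M_DASM2 /dev/sdm1
      Marking disk "/dev/sdm1" as an ASM disk:                   [  OK  ]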
  19. Run oracleasm to list and verify the new ASM disks:
    • [root@srv0dbxd11 ~]# /etc/init.d/oracleasm listdisks
      G_BACKUP_ASM1
      H_ASM1
      I_ASM1
      J_ASM1
      K_ASM1
      L_DASM1
      M_DASM2
  20. Run /etc/init.d/oracleasm scandisks on the remaining nodes so they pick up the new ASM disks (a sketch follows).
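    • If you prefer to script it, the rescan can be pushed to the other nodes in one go (node hostnames beyond srv0dbxd12 are my assumption):
      for h in srv0dbxd12 srv0dbxd13; do
        ssh $h /etc/init.d/oracleasm scandisks
      done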
  21. Notify the DBA that the new ASM disks are available and provide the information.
  22. Sample email to the DBA:
    • The new 200 GB ASM disks have been made available to all three dev boxes. Please add these disks to the dsan_data_01 ASM disk group and remove the older ones once rebalanced. Here is the new disk information:

Disk 1 - L_DASM1 or ORCL:L_DASM1
Disk 2 - M_DASM2 or ORCL:M_DASM2

Please add these using: alter diskgroup dsan_data_01 add disk 'ORCL:L_DASM1', 'ORCL:M_DASM2';

Once everything is completed, please remove the following ASM disks (using alter diskgroup dsan_data_01 drop disk H_ASM1; and so on):
H_ASM1
I_ASM1
J_ASM1
K_ASM1
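
For completeness, the DBA-side work might look like the following (a hedged sketch from sqlplus connected as sysasm; the v$asm_operation check is my addition, not part of the original email):
    • SQL> alter diskgroup dsan_data_01 add disk 'ORCL:L_DASM1', 'ORCL:M_DASM2';
      SQL> -- watch the rebalance; wait until this query returns no rows
      SQL> select operation, state, est_minutes from v$asm_operation;
      SQL> -- once rebalanced, drop the old disks (this triggers another rebalance)
      SQL> alter diskgroup dsan_data_01 drop disk H_ASM1, I_ASM1, J_ASM1, K_ASM1;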
  23. When the DBA notifies you that the ASM disks have been rebalanced, remove the old ASM disks from ASM using /etc/init.d/oracleasm deletedisk <oldasmdisk>
    • [root@srv0dbxd11 init.d]# ./oracleasm listdisks
      G_BACKUP_ASM1
      I_ASM1
      J_ASM1
      K_ASM1
      L_DASM1
      M_DASM2
    • [root@srv0dbxd11 init.d]# ./oracleasm deletedisk I_ASM1
      Removing ASM disk "I_ASM1":                                [  OK  ]
    • [root@srv0dbxd11 init.d]# ./oracleasm deletedisk J_ASM1
      Removing ASM disk "J_ASM1":                                [  OK  ]
    • [root@srv0dbxd11 init.d]# ./oracleasm deletedisk K_ASM1
      Removing ASM disk "K_ASM1":                                [  OK  ]
    • [root@srv0dbxd11 init.d]# ./oracleasm listdisks
      G_BACKUP_ASM1
      L_DASM1
      M_DASM2
  24. Then on the remaining nodes, run oracleasm scandisks to rescan those nodes.
    • [root@srv0dbxd12 init.d]# ./oracleasm listdisks 
      G_BACKUP_ASM1
      H_ASM1
      I_ASM1
      J_ASM1
      K_ASM1
      L_DASM1
      M_DASM2
    • [root@srv0dbxd12 init.d]# ./oracleasm scandisks
      Scanning system for ASM disks:                             [  OK  ]
    • [root@srv0dbxd12 init.d]# ./oracleasm listdisks
      G_BACKUP_ASM1
      L_DASM1
      M_DASM2
  25. As you can see, ASM now only sees the two new 200 GB ASM disks and the backup ASM disk. From sqlplus as well, we only see the 3 ASM disks now:
    • SQL> select name, header_status, path from v$asm_disk;
      NAME            HEADER_STATUS   PATH
      ------------------------------------------------------
      G_BACKUP_ASM1   MEMBER          ORCL:G_BACKUP_ASM1
      L_DASM1         MEMBER          ORCL:L_DASM1
      M_DASM2         MEMBER          ORCL:M_DASM2
Part V: Bring Offline and Remove Old Virtual Disks
Finally, schedule downtime to remove the old ASM disks from the virtual machines.
  1. Power down the virtual machine at the scheduled time.
  2. Right-click the VM in vSphere Client and choose Edit Settings.
  3. Locate the old ASM virtual disks to be removed, select each one, and click Remove.
  4. Repeat this process for each old VM disk.
  5. Finally, go into the old datastore's shared location via PuTTY and run rm *.vmdk to remove all the old vmdk files from that location (a careful sketch follows).
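    • A slightly safer version of that cleanup (a sketch; the old datastore name is a placeholder, since it is not named in this document):
      cd /vmfs/volumes/<old-datastore>/shared/dev
      ls *.vmdk     # verify only the old ASM vmdks are listed before deleting
      rm *.vmdk
      # repeat for the test directory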
  • See How to Relocate OCR and Voting Disks to move the OCR and Voting files off the old datastore
