To facilitate a manual failover, below are the steps a sysadmin should take to fail the LVM-backed filesystems over from one node to the other. NOTE: NODE02 uses ifcfg-eth5:0 for the listener VIP and NODE01 uses ifcfg-eth4:0; bring up whichever alias belongs to the node you are failing over to.
On NODE02
1. Ensure the DB, the DB listener, and the VIP have been stopped on the active node.
· Stop Oracle DB
· Stop Oracle Listener
· Stop the VIP pointing to this DB instance (i.e. eth5:0)
# ifdown eth5:0
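Before proceeding, it is worth confirming the VIP address is actually gone from the interface (a quick sanity check):
# ip addr show eth5 ( the eth5:0 VIP address should no longer be listed )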
2. Unmount all mount points related to PDB:
/dev/mapper/vg--oradata-lv.pdb01      40G  8.3G   30G  23% /pdb01
/dev/mapper/vg--oradata-lv.pdb02     493G  147G  321G  32% /pdb02
/dev/mapper/vg--oradata-lv.pdb03     493G  133G  335G  29% /pdb03
/dev/mapper/vg--oradata-lv.pdb_arch  197G   84G  104G  45% /pdb_arch
/dev/mapper/vg--oradata-lv.dbdump     49G  180M   46G   1% /dbdump/pdb
# umount /pdb01 /pdb02 /pdb03 /pdb_arch /dbdump/pdb ( unmounts all PDB-related filesystems )
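If any umount fails with a "device is busy" error, identify the processes still holding files open (fuser ships with psmisc and should be present on these hosts):
# fuser -vm /pdb01 ( lists processes with open files under /pdb01; repeat per mount point )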
3. Check the status of the LVs on the active node (NODE02):
# lvscan ( on the active node this gives output like the below )
ACTIVE '/dev/vg-oradata/lv.pdb01' [40.00 GB] inherit
ACTIVE '/dev/vg-oradata/lv.pdb02' [500.00 GB] inherit
ACTIVE '/dev/vg-oradata/lv.pdb03' [500.00 GB] inherit
ACTIVE '/dev/vg-oradata/lv.pdb_arch' [200.00 GB] inherit
ACTIVE '/dev/vg-oradata/lv.dbdump' [49.00 GB] inherit
4. De-activate all LVs on the active node by running the commands below:
Syntax: lvchange -an /dev/vg-oradata/<lv_name> (repeat for each LV; the LV names match the lvscan output above)
# lvchange -an /dev/vg-oradata/lv.pdb01
# lvchange -an /dev/vg-oradata/lv.pdb02
# lvchange -an /dev/vg-oradata/lv.pdb03
# lvchange -an /dev/vg-oradata/lv.pdb_arch
# lvchange -an /dev/vg-oradata/lv.dbdump
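Alternatively, assuming nothing else in vg-oradata needs to stay active, a single vgchange de-activates every LV in the volume group at once:
# vgchange -an vg-oradata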
5. Verify the LVs are de-activated:
# lvscan
inactive '/dev/vg-oradata/lv.pdb01' [40.00 GB] inherit
inactive '/dev/vg-oradata/lv.pdb02' [500.00 GB] inherit
inactive '/dev/vg-oradata/lv.pdb03' [500.00 GB] inherit
inactive '/dev/vg-oradata/lv.pdb_arch' [200.00 GB] inherit
inactive '/dev/vg-oradata/lv.dbdump' [49.00 GB] inherit
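The same state can be cross-checked with lvs: the fifth character of the Attr column reads 'a' while an LV is active and '-' once it is de-activated:
# lvs -o lv_name,lv_attr vg-oradata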
On NODE01
6. Activate the LVs on NODE01:
Syntax: lvchange -ay /dev/vg-oradata/<lv_name> (repeat for each LV)
# lvchange -ay /dev/vg-oradata/lv.pdb01
# lvchange -ay /dev/vg-oradata/lv.pdb02
# lvchange -ay /dev/vg-oradata/lv.pdb03
# lvchange -ay /dev/vg-oradata/lv.pdb_arch
# lvchange -ay /dev/vg-oradata/lv.dbdump
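As with de-activation, a single command activates every LV in the group:
# vgchange -ay vg-oradata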
7. Mount the filesystems:
Syntax: mount /dev/mapper/<device_name> /<mount_point> (for all mounts; the /dev/vg-oradata/<lv_name> path works equally well)
# mount /dev/mapper/vg--oradata-lv.pdb01 /pdb01
# mount /dev/mapper/vg--oradata-lv.pdb02 /pdb02
# mount /dev/mapper/vg--oradata-lv.pdb03 /pdb03
# mount /dev/mapper/vg--oradata-lv.pdb_arch /pdb_arch
# mount /dev/mapper/vg--oradata-lv.dbdump /dbdump/pdb
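Verify that all five filesystems are mounted before continuing:
# df -h /pdb01 /pdb02 /pdb03 /pdb_arch /dbdump/pdb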
8. Activate the VIP interface on NODE01 ( on NODE01 the VIP is ifcfg-eth4:0 )
# ifup eth4:0
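Confirm the VIP is up and answering (substitute the real listener VIP address; the placeholder below is not a value from this document):
# ip addr show eth4 ( the eth4:0 VIP address should now be listed )
# ping -c 3 <listener_VIP_address>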
9. Have the DBAs start the listener and database, and verify.
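For reference, the NODE01 half of the failover can be wrapped in a script. This is a minimal sketch only, assuming the LV names, mount points, and interface alias documented above; review and adapt it before any real use, and only run it after the NODE02 steps are complete:

#!/bin/bash
# Sketch: bring the PDB filesystems and listener VIP up on NODE01
# after the LVs have been de-activated and unmounted on NODE02.
set -euo pipefail

VG=vg-oradata
VIP_IF=eth4:0   # NODE01 listener VIP alias, per the note at the top

# Activate every LV in the volume group in one shot
vgchange -ay "$VG"

# Mount each filesystem (device path -> mount point)
mount /dev/mapper/vg--oradata-lv.pdb01    /pdb01
mount /dev/mapper/vg--oradata-lv.pdb02    /pdb02
mount /dev/mapper/vg--oradata-lv.pdb03    /pdb03
mount /dev/mapper/vg--oradata-lv.pdb_arch /pdb_arch
mount /dev/mapper/vg--oradata-lv.dbdump   /dbdump/pdb

# Bring up the listener VIP
ifup "$VIP_IF"

# Quick eyeball check before handing off to the DBAs
df -h /pdb01 /pdb02 /pdb03 /pdb_arch /dbdump/pdb
echo "Filesystems mounted and VIP up; DBAs can start the listener and database."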