Saturday 28 December 2013

Practical Guide to AIX "Volume Group Management"

Folks, in this post I am going to walk through practical examples and useful real-world commands for AIX Volume Group Management.


Contents:


1)Volume Group Creation:

mkvg -y <vg> -s <PP size> <pv>  (normal volume group)
mkvg -y datavg -s 4 hdisk1

Use the options below to create Big and Scalable volume groups.

-B Creates a Big-type volume group
-S Creates a Scalable-type volume group.

Note: the -s value is the physical partition (PP) size in MB: 1, 2, (4), 8, 16, 32, 64, 128, 256, 512, 1024 MB
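
For example, a minimal sketch of creating a Big and a Scalable VG (the VG names bigvg and scalevg and the disks hdisk2 and hdisk3 are hypothetical):

mkvg -B -y bigvg -s 64 hdisk2      (Big-type volume group)
mkvg -S -y scalevg -s 64 hdisk3    (Scalable-type volume group)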

2) List/Display Volume Group:

lsvg
lsvg <vg> (detailed)
lsvg -l <vg> (list all logical volumes in the group)
lsvg -p <vg> (list all physical volumes in the group)
lsvg -o (lists all varied-on VGs)
lsvg -M <vg> (lists all PV, LV, PP details of a VG, format: PVname:PPnum LVname:LPnum:Copynum)
lsvg -o | lsvg -ip        lists PVs of online VGs
lsvg -o | lsvg -il        lists LVs of online VGs
lsvg -n <hdisk>           shows VG info, but read from the VGDA on the specified disk (useful for comparing it against other disks)

## Detailed volume group info read from the hard disk
lqueryvg -Atp <pv>
lqueryvg -p <disk> -v (Determine the VG ID# on disk)
lqueryvg -p <disk> -L (Show all the LV ID#/names in the VG on disk)
lqueryvg -p <disk> -P (Show all the PV ID# that reside in the VG on disk)

3)Extending Volume Group:

#extendvg <vg> <pv>
#extendvg myvg hdisk5

4)Reducing Volume Group:

#reducevg -d <vg> <pv>   (-d also deallocates and deletes any LV partitions remaining on the disk)
## to remove a stale PVID from the VGDA when a disk has vanished without having been removed with reducevg, specify the PVID instead of the disk name:
#reducevg <vg> <PVID>
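
For example, a minimal sketch with hypothetical names (VG datavg, disk hdisk5, and a made-up PVID):

#reducevg datavg hdisk5                (remove the empty disk hdisk5 from datavg)
#reducevg datavg 00c1234500abcdef      (remove a stale, hypothetical PVID left behind by a vanished disk)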

5) Mirror Volume Group:

In AIX we can mirror a volume group using the mirrorvg command, with a maximum of three copies of each logical volume (the original plus up to two mirrors).

Suppose rootvg has two PVs, with the OS and data installed on hdisk0, and we want to mirror hdisk0 to hdisk1. The command is:
# mirrorvg -S -m rootvg hdisk1

-S - background synchronization (the command returns while the sync continues in the background)
-m - exact mapping (the mirror copy uses the same physical partition layout as the original)
NOTE: in a two-disk mirrored VG, quorum should be disabled; otherwise losing the disk that holds the majority of the VGDAs takes the whole VG offline.
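
As a minimal sketch, quorum can be disabled like this (for rootvg a reboot is needed afterwards for the change to fully take effect, as noted in the rootvg mirroring procedure later in this post):

# chvg -Qn rootvg        (turn quorum checking off)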

6)Un-Mirror Volume Group: 

Using the unmirrorvg command we can remove the mirror from a VG:
#unmirrorvg rootvg hdisk1
The mirror copies on hdisk1 are removed from rootvg.

7)Synchronize Volume Group:

Using the syncvg command we can synchronize stale mirror copies at the LV, PV, or VG level.

If we want to sync the copies of a single LV:
#syncvg -l lvname

#syncvg -l testlv
After executing the above command, the stale copies of testlv are synchronized.

If we want to sync all mirrored LVs in a volume group:
#syncvg -v rootvg
The above synchronizes all stale partitions in rootvg.

8) Un-Lock Volume Group:

# chvg -u <vgname>          unlocks the volume group (useful if a command core dumped, or the system crashed and the VG was left in a locked state)
(Many LVM commands place a lock into the ODM to prevent other commands from working on the same VG at the same time.)

9)Re-Organise Volume Group:

# reorgvg   <vgname>
rearranges the physical partitions within the VG to conform with the placement policy (outer edge, ...) of each LV.
(For this at least 1 free PP is needed, and the relocatable flag of the LVs must be set to 'y' with chlv -r; see the example below.)
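
A minimal sketch, assuming a hypothetical LV testlv in a VG called datavg:

#chlv -r y testlv        (make the LV relocatable so reorgvg may move its partitions)
#reorgvg datavg testlv   (reorganize only testlv within datavg; omit the LV name to reorganize the whole VG)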

10) Varyon Volume Group:

This is for VG activation; sometimes clients want a VG deactivated for project restrictions, and later we need to activate it again for further data access.

Suppose we want to activate testvg; proceed as below:
#lsvg
rootvg
datavg
testvg
The above command lists all the VGs defined on the system.
#lsvg -o
rootvg
datavg
The above command shows only the online (active) VGs. Since testvg is offline, we have to activate it using "varyonvg"; this enables us to mount the filesystems that were created on top of testvg.

#varyonvg testvg

#lsvg -o
rootvg
datavg
testvg
Now the above command also displays testvg.

11) Varyoff Volume Group:

This is for VG deactivation; some clients want a VG deactivated for project restrictions. Suppose the customer wants to deactivate testvg; the commands are:
#lsvg -o
rootvg
datavg
testvg

#varyoffvg testvg

#lsvg -o
rootvg
datavg
The above command now displays only two online VGs; testvg is not shown because it is offline.

12) Rename Volume Group:

#lsvg -p <old vg name> (obtain the disk names first, while the VG is still online)
#varyoffvg <old vg name>
#exportvg <old vg name>
#importvg -y <new vg name> <pv>
#varyonvg <new vg name>
#mount -a
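
A minimal worked example, assuming a hypothetical VG called projvg on hdisk7 that is to be renamed appvg:

#lsvg -p projvg           (note the disk names, here hdisk7)
#varyoffvg projvg
#exportvg projvg
#importvg -y appvg hdisk7
#varyonvg appvg
#mount -a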

13) Exporting Volume Group:

Using the exportvg command we can export a VG (with all its PVs) from one server so that it can be imported on another server.

Suppose ServerA has datavg with two PVs, and we want to move datavg to ServerB.

Before exporting datavg, we should vary it off, i.e. take datavg offline.
#varyoffvg datavg (vary off datavg)
#exportvg datavg (VG information is removed from the ODM)
Now datavg is exported from ServerA; after this, run the following command to verify the export.
#lsvg
It will not show datavg, because datavg has been exported.

Then remove the PVs from the device configuration:
#rmdev -dl hdisk3
#rmdev -dl hdisk4
After that we can physically remove the PVs from ServerA and import datavg on ServerB.

14)Importing Volume Group:

Using the importvg command we can import datavg on ServerB.

First connect hdisk3 and hdisk4 to ServerB, then run:
#cfgmgr (to detect the new hard disks)
Then check whether the PVs are visible using the lspv command:
#lspv (displays the available PVs); if hdisk3 and hdisk4 are listed, the PVs are configured properly.
Then run importvg to import datavg:
#importvg -y datavg hdisk3 (VG information is added to the ODM; naming one member disk is enough, the other disks of the VG are found from its VGDA)
NOTE: If ServerB already has a VG named datavg, we can import the VG under a different name:
#importvg -y newdatavg hdisk3
This imports the same VG under the new name.

After importing datavg, there is no need to vary it on manually; importvg varies the VG on automatically as part of the import.

15)Removing Volume Group:

#varyoffvg <vg>
#exportvg <vg>
Note: exportvg removes everything regarding the volume group from the ODM and /etc/filesystems.

16) Check Volume Group Type:

Run the lsvg command on the volume group and look at the value of MAX PVs (see the example after the table below). The value is 32 for a normal, 128 for a big, and 1024 for a scalable volume group.
VG type     Maximum PVs    Maximum LVs    Maximum PPs per VG    Maximum PP size
Normal VG     32              256            32,512 (1016 * 32)      1 GB
Big VG        128             512            130,048 (1016 * 128)    1 GB
Scalable VG   1024            4096           2,097,152               128 GB
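
For a quick check, assuming a hypothetical VG named datavg:

#lsvg datavg | grep "MAX PVs"          (MAX PVs: 32 = Normal, 128 = Big, 1024 = Scalable)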
If a physical volume is part of a volume group, it contains 2 additional reserved areas. One area contains both the VGSA and the VGDA, and it starts within the first 128 reserved sectors (blocks) of the disk. The other area is at the end of the disk and is reserved as a relocation pool for bad blocks.

17)Changing Normal VG to Big VG:

If you have reached the MAX PVs limit of a Normal VG and increasing the factor (chvg -t) is no longer possible, you can convert it to a Big VG.
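
As a side note, a minimal sketch of the factor change, assuming a hypothetical normal VG datavg (a factor of 2 allows 2032 PPs per PV but lowers the maximum number of PVs of a normal VG to 16):

#chvg -t 2 datavg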

Converting to a Big VG is an online activity, but there must be free PPs on each physical volume, because the VGDA will be expanded on all disks:
root@um-lpar: / # lsvg -p bbvg
bbvg:
PV_NAME           PV STATE          TOTAL PPs   FREE PPs    FREE DISTRIBUTION
hdisk2            active            511         2           02..00..00..00..00
hdisk3            active            511         23          00..00..00..00..23
hdisk4            active            1023        0           00..00..00..00..00

root@um-lpar: / # chvg -B bbvg
0516-1214 chvg: Not enough free physical partitions exist on hdisk4 for the
        expansion of the volume group descriptor area.  Migrate/reorganize to free up
        2 partitions and run chvg again.

In this case we have to migrate 2 PPs from hdisk4 to hdisk3 (so 2 PPs will be freed up on hdisk4):

root@um-lpar: / # lspv -M hdisk4
hdisk4:1        bblv:920
hdisk4:2        bblv:921
hdisk4:3        bblv:922
hdisk4:4        bblv:923
hdisk4:5        bblv:924
...

root@um-lpar: / # lspv -M hdisk3
hdisk3:484      bblv:3040
hdisk3:485      bblv:3041
hdisk3:486      bblv:3042
hdisk3:487      bblv:1
hdisk3:488      bblv:2
hdisk3:489-511

root@um-lpar: / # migratelp bblv/920 hdisk3/489
root@um-lpar: / # migratelp bblv/921 hdisk3/490

root@um-lpar: / # lsvg -p bbvg
bbvg:
PV_NAME           PV STATE          TOTAL PPs   FREE PPs    FREE DISTRIBUTION
hdisk2            active            511         2           02..00..00..00..00
hdisk3            active            511         21          00..00..00..00..21
hdisk4            active            1023        2           02..00..00..00..00

If we try the change to Big VG again, it now succeeds:
root@um-lpar: / # chvg -B bbvg
0516-1216 chvg: Physical partitions are being migrated for volume group
        descriptor area expansion.  Please wait.
0516-1164 chvg: Volume group bbvg2 changed.  With given characteristics bbvg2
        can include up to 128 physical volumes with 1016 physical partitions each.

If you check again, the freed-up PPs have been used by the expanded VGDA (note that the TOTAL PPs of each disk decreased as well):
root@um-lpar: / # lsvg -p bbvg
bbvg:
PV_NAME           PV STATE          TOTAL PPs   FREE PPs    FREE DISTRIBUTION
hdisk2            active            509         0           00..00..00..00..00
hdisk3            active            509         17          00..00..00..00..17
hdisk4            active            1021        0           00..00..00..00..00

18)Changing Normal (or Big) VG to Scalable VG:

If you have reached the MAX PVs limit of a Normal or Big VG and increasing the factor (chvg -t) is no longer possible, you can convert that VG to a Scalable VG. A Scalable VG allows a maximum of 1024 PVs and 4096 LVs, and a big advantage is that the maximum number of PPs applies to the entire VG rather than being defined on a per-disk basis.

!!! Converting to a Scalable VG is an offline activity (the VG must be varied off first), and there must be free PPs on each physical volume, because the VGDA will be expanded on all disks.
root@um-lpar: / # chvg -G bbvg
0516-1707 chvg: The volume group must be varied off during conversion to
        scalable volume group format.

root@um-lpar: / # varyoffvg bbvg
root@um-lpar: / # chvg -G bbvg
0516-1214 chvg: Not enough free physical partitions exist on hdisk2 for the
        expansion of the volume group descriptor area.  Migrate/reorganize to free up
        18 partitions and run chvg again.


After migrating some LPs to free up the required PPs (in this case 18), the change to a Scalable VG succeeds:
root@um-lpar: / # chvg -G bbvg
0516-1224 chvg: WARNING, once this operation is completed, volume group bbvg
        cannot be imported into AIX 5.2 or lower versions. Continue (y/n) ?
...
0516-1712 chvg: Volume group bbvg changed.  bbvg can include up to 1024 physical volumes with 2097152 total physical partitions in the volume group.

19) Check VGDA (Volume Group Descriptor Area):

The VGDA is an area on the hard disk (PV) that contains information about the entire volume group. There is at least one VGDA per physical volume (one or two copies per disk). It contains the physical volume list (PVIDs), the logical volume list (LVIDs), and the physical partition map (which maps LPs to PPs).
# lqueryvg -tAp hdisk0                                <--look into the VGDA (-A:all info, -t: tagged, without it only numbers)
Max LVs:        256
PP Size:        27                                    <--stored as a power of 2: 2^7 = 128 MB
Free PPs:       698
LV count:       11
PV count:       2
Total VGDAs:    3
Conc Allowed:   0
MAX PPs per PV  2032
MAX PVs:        16
Quorum (disk):  0
Quorum (dd):    0
Auto Varyon ?:  1
Conc Autovaryo  0
Varied on Conc  0
Logical:        00cebffe00004c000000010363f50ac5.1   hd5 1       <--1: count of mirror copies (00cebff...c5 is the VGID)
                00cebffe00004c000000010363f50ac5.2   hd6 1
                00cebffe00004c000000010363f50ac5.3   hd8 1
                ...
Physical:       00cebffe63f500ee                2   0            <--2:VGDA count 0:code for its state (active, missing, removed)
                00cebffe63f50314                1   0            (The sum of VGDA count should be the same as the Total VGDAs)
Total PPs:      1092
LTG size:       128
...
Max PPs:        32512

20)Mirroring rootvg (after disk replacement):

1. disk replaced -> cfgmgr           <--it will find the new disk (e.g. hdisk1)
2. extendvg rootvg hdisk1            <--sometimes extendvg -f rootvg...
(3. chvg -Qn rootvg)                 <--only if quorum setting has not yet been disabled, because this needs a restart
4. mirrorvg -s rootvg                <--add mirror for rootvg (-s: synchronization will not be done)
5. syncvg -v rootvg                  <--synchronize the new copy (lsvg rootvg | grep STALE)
6. bosboot -a                        <--we changed the system so create boot image (-a: create complete boot image and device)
                                     (hd5 is mirrored, so no need to run it for each disk, e.g. bosboot -ad hdisk0)
7. bootlist -m normal hdisk0 hdisk1  <--set normal bootlist
8. bootlist -m service hdisk0 hdisk1 <--set bootlist when we want to boot into service mode
(9. shutdown -Fr)                    <--this is needed if quorum has been disabled
10.bootinfo -b                       <--shows the disk that was used for the last boot
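
As a quick verification after the procedure (same disks as above):

bootlist -m normal -o                (display the current normal-mode bootlist)
lsvg rootvg | grep STALE             (STALE PVs and STALE PPs should be 0 once the sync has finished)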

21)Miscellaneous VG Commands:

getlvodm -j <hdisk>       get the vgid for the hdisk from the odm
getlvodm -t <vgid>        get the vg name for the vgid from the odm
getlvodm -v <vgname>      get the vgid for the vg name from the odm
getlvodm -p <hdisk>       get the pvid for the hdisk from the odm
getlvodm -g <pvid>        get the hdisk for the pvid from the odm
lqueryvg -tcAp <hdisk>    get all the vgid and pvid information for the vg from the vgda (directly from the disk)
                          (you can compare the disk with odm: getlvodm <-> lqueryvg)
synclvodm <vgname>        synchronizes or rebuilds the lvcb, the device configuration database, and the vgdas on the physical volumes
redefinevg                helps to regain the basic ODM information if it is corrupted (redefinevg -d hdisk0 rootvg)
readvgda hdisk40          shows details from the disk
