Friday 5 July 2013

Native MPIO on VIO hosted AIX LPARs


We have AIX LPARs hosted on a dual VIO server setup. Each VIO server (let's call them vio1 and vio2) has one fiber connection. For this setup, MPIO has to be done at the client LPARs.
Our IBM p570 VIO-hosted LPAR setup is similar to the one illustrated in the figure below.

[Figure: dual VIO servers, each presenting the same SAN LUN to the client LPARs over its own vSCSI path]

In all, our system has the following:

  • 2 x VIO servers (1 fiber port each)
  • 1 dedicated LPAR (2 fiber ports)
  • 3 AIX LPARs (all hosted via VIO).
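
On each client LPAR, the two vSCSI client adapters (one backed by each VIO server) show up as vscsi devices. A quick way to list them and their slot numbers (the -C<n> part of the location code is the client slot, which you can match against the virtual adapter mappings in the HMC partition profile):

# list the virtual SCSI client adapters on the LPAR
lsdev -Cc adapter | grep -i vscsi
# show the location code of one adapter; the slot number tells you which
# server adapter (and so which VIO server) it connects to in the HMC profile
lscfg -l vscsi0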

From a client LPAR named ux0018, a disk defined on both VIO servers appears like this:
lspath | grep hdisk5
Enabled hdisk5 vscsi0
Enabled hdisk5 vscsi1

vscsi0 - goes through vio1
vscsi1 - goes through vio2
How do we know this?
Let's dig deeper into the path configuration for hdisk5 on ux0018:

#get the pvid:

lspv | grep hdisk5
hdisk5 00c14ebbea6747a8 ux0018apps03 active

This PVID is the same on both VIO servers. Now list the paths with their connection IDs:
lspath -F'status name path_id parent connection' | grep -w hdisk5
Enabled hdisk5 0 vscsi0 860000000000
Enabled hdisk5 1 vscsi1 880000000000

The connection field here is the LUN ID, which you can use to check on each VIO server.
From vio1:
lspv | grep 00c14ebbea6747a8
hdisk22 00c14ebbea6747a8 None

From vio2:
lspv | grep 00c14ebbea6747a8
hdisk20 00c14ebbea6747a8 None
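
If you are not sure which vhost serves the client LPAR, you can also search the full mapping list on the VIO server for the backing device found above (hdisk22 on vio1 here); AIX grep -p prints the whole matching stanza:

# run as padmin on vio1
lsmap -all | grep -p hdisk22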

The same "disk" has different disk names but the same PVID in both vio servers. now check the LUN ID. here, vhost5 is the adapter assigned for ux0018:
vio1:
lsmap -vadapter vhost5 | grep -E "LUN|Backing device"
LUN 0x8600000000000000
Backing device hdisk22

vio2:
lsmap -vadapter vhost5 | grep -E "LUN|Backing device"
LUN 0x8800000000000000
Backing device hdisk20

These LUN IDs match the connection values from lspath on ux0018 (860000000000 via vscsi0 and 880000000000 via vscsi1), which confirms that vscsi0 goes through vio1 and vscsi1 goes through vio2.

Checking the attributes of hdisk5:

lsattr -El hdisk5
PCM PCM/friend/vscsi Path Control Module False
algorithm fail_over Algorithm True
hcheck_cmd test_unit_rdy Health Check Command True
hcheck_interval 0 Health Check Interval True
hcheck_mode nonactive Health Check Mode True
max_transfer 0x40000 Maximum TRANSFER Size True
pvid 00c14ebbea6747a80000000000000000 Physical volume identifier False
queue_depth 3 Queue DEPTH True
reserve_policy no_reserve Reserve Policy True

For load balancing to work, the algorithm must be set to round_robin, but:

chdev -l hdisk5 -a algorithm=round_robin
Method error (/etc/methods/chgdisk):
0514-018 The values specified for the following attributes are not valid:algorithm
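
You can confirm which values the algorithm attribute actually accepts with lsattr -R; on a vSCSI disk this should list only fail_over:

lsattr -R -l hdisk5 -a algorithm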

Since native MPIO only works in fail-over mode with this kind of setup, we have to do the "load balancing" manually. This means we redirect I/O to the otherwise unused path while keeping the failover capability of the configuration. In effect it is a static, per-LUN load balancing setup.
To do this, we give the hdisk5 path through vscsi0 a lower priority (a higher priority number), which makes the path through vscsi1 the primary path. Since most of the disks use vscsi0, hdisk5 will then not contend for bandwidth on vscsi0. So we set the vscsi0 path to priority=2:
chpath -l hdisk5 -p vscsi0 -w 860000000000 -a priority=2
path Changed
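
If you have many disks to spread across the two paths, the same change can be scripted. A rough ksh sketch, assuming every MPIO disk has a path on both vscsi0 and vscsi1; it simply alternates which adapter is preferred, so adapt the selection to your own I/O layout:

# alternate the preferred path across the MPIO disks:
# even-numbered disks keep vscsi0 as primary, odd-numbered ones
# get their vscsi0 path demoted so vscsi1 becomes primary
i=0
for d in $(lspv | awk '{print $1}'); do
    conn=$(lspath -l $d -p vscsi0 -F connection 2>/dev/null)
    [ -z "$conn" ] && continue           # skip disks with no vscsi0 path
    if [ $((i % 2)) -eq 1 ]; then
        # demote the vscsi0 path so the vscsi1 path (priority 1) wins
        chpath -l $d -p vscsi0 -w $conn -a priority=2
    fi
    i=$((i + 1))
done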

Checking the two paths:

lspath -l hdisk5 -p vscsi0 -E
priority 2 Priority True
lspath -l hdisk5 -p vscsi1 -E
priority 1 Priority True

Any I/O to/from hdisk5 will now pass through vio2's fiber connection, while the path through vio1 is kept for failover.
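
To double-check where the I/O actually lands, you can watch the backing devices on each VIO server while generating some I/O on the client. A rough check (run from the root shell on each VIOS via oem_setup_env; hdisk22 and hdisk20 are the backing devices identified earlier):

# on vio1: the backing device for hdisk5 should now stay mostly idle
iostat -d hdisk22 2 5
# on vio2: this one should show the reads/writes coming from ux0018
iostat -d hdisk20 2 5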

1 comment:

  1. Hi,
    I need to migrate one AIX VIO client under dual VIOS from one managed system to another. The acquiring managed system is already up and running and has dual VIOS and 5 VIO clients. Both systems are in the same location and connected to the same SAN. All the hdisks provisioned to the VIO client come from the SAN (vSCSI and NPIV). As the VIO client is a production system, I am a bit concerned about how to proceed. I have done a lot of searching but have not found any docs for such a move/migration. So, this is how I plan to proceed.

    On the new server:
    Create the VIO client LPAR from the HMC.
    Create the vSCSI adapter, virtual FC adapter, and virtual Ethernet adapter from the HMC.
    Map the respective adapters to both VIO servers from the HMC.

    On the switch:
    Create a new host for the newly discovered WWN of the VFC.
    Create a new zone and map the LUNs previously assigned to the original client to the new VIO client.

    On both VIO servers on the new managed system:
    Run cfgmgr to discover the newly assigned disks.
    Run chdev to set the PVIDs on the new disks.
    Map the new disks to the newly created vhost.
    Map the virtual FC adapter to the newly created vfchost.

    Shut down the running Prod VIO client.
    Activate the new VIO client.
    Is there anything that needs to be done with regard to the SEA or IP configuration?

    Verify that everything is OK. If there is any issue, I can still fall back by just activating the original VIO client.

    I would appreciate your input and comments on the proposed approach. Also, if you see any flaws, missed steps/actions, or commands that need to be executed, I would really appreciate it.
    Thanks





