In this post I'll show how to mask and unmask a LUN with the MASK_PATH plugin using esxcli on ESXi 5.1.
MASKING
The datastore to be masked is called shared1-iscsi-ssd and it is an iSCSI datastore. To find its device identifier, SSH to the ESXi host (or connect to the vMA) and type the command:
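On ESXi 5.x the extent information for mounted VMFS datastores is available from the vmfs namespace of esxcli:

~ # esxcli storage vmfs extent list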
It will return the volume name, VMFS UUID, number of extents, device name and partition number for all VMFS datastores mounted on the host. For shared1-iscsi-ssd the device name is t10.FreeBSD_iSCSI_Disk______0050568cdac0010_________________ (t10 naming).
The ESXi host uses the SCSI INQUIRY command to gather data about target devices and to generate a device name that is unique across all hosts. These names can have one of the following formats:
- naa.number
- t10.number
- eui.number
~ # esxcli storage core device list -d t10.FreeBSD_iSCSI_Disk______0050568cdac0010_________________
Display Name: FreeBSD iSCSI Disk (t10.FreeBSD_iSCSI_Disk______0050568cdac0010_________________)
Has Settable Display Name: true
Size: 51199
Device Type: Direct-Access
Multipath Plugin: NMP
Devfs Path: /vmfs/devices/disks/t10.FreeBSD_iSCSI_Disk______0050568cdac0010_________________
Vendor: FreeBSD
Model: iSCSI Disk
Revision: 0123
SCSI Level: 5
Is Pseudo: false
Status: degraded
Is RDM Capable: true
Is Local: false
Is Removable: false
Is SSD: false
Is Offline: false
Is Perennially Reserved: false
Queue Full Sample Size: 0
Queue Full Threshold: 0
Thin Provisioning Status: unknown
Attached Filters:
VAAI Status: supported
Other UIDs: vml.02000000003000000034127b79695343534920
Is Local SAS Device: false
Is Boot USB Device: false
The output provides a lot of information about the device, such as its size and status, whether it can be used as an RDM, whether it is detected as an SSD, and whether it supports VAAI.
To mask the LUN we will apply MASK_PATH claim rules to each of its paths. List the paths to the device:
~ # esxcli storage core path list -d t10.FreeBSD_iSCSI_Disk______0050568cdac0010_________________
iqn.1998-01.com.vmware:ex5103-5b8ecbe2-00023d000006,iqn.2011-03.example.org.istgt:t1,t,1-t10.FreeBSD_iSCSI_Disk______0050568cdac0010_________________
UID: iqn.1998-01.com.vmware:ex5103-5b8ecbe2-00023d000006,iqn.2011-03.example.org.istgt:t1,t,1-t10.FreeBSD_iSCSI_Disk______0050568cdac0010_________________
Runtime Name: vmhba33:C1:T1:L0
Device: t10.FreeBSD_iSCSI_Disk______0050568cdac0010_________________
Device Display Name: FreeBSD iSCSI Disk (t10.FreeBSD_iSCSI_Disk______0050568cdac0010_________________)
Adapter: vmhba33
Channel: 1
Target: 1
LUN: 0
Plugin: NMP
State: active
Transport: iscsi
Adapter Identifier: iqn.1998-01.com.vmware:ex5103-5b8ecbe2
Target Identifier: 00023d000006,iqn.2011-03.example.org.istgt:t1,t,1
Adapter Transport Details: iqn.1998-01.com.vmware:ex5103-5b8ecbe2
Target Transport Details: IQN=iqn.2011-03.example.org.istgt:t1 Alias= Session=00023d000006 PortalTag=1
Maximum IO Size: 131072
iqn.1998-01.com.vmware:ex5103-5b8ecbe2-00023d000003,iqn.2011-03.example.org.istgt:t1,t,1-t10.FreeBSD_iSCSI_Disk______0050568cdac0010_________________
Runtime Name: vmhba33:C0:T1:L0
Plugin: NMP
State: active
...
There are two paths (the listing for the second path has been truncated). This output also includes useful information such as the runtime name, the multipathing plugin and the path state.
If you are looking for preferred path, use the following command to display paths from the point of view of the multipathing plugin:
~ # esxcli storage nmp path list -d t10.FreeBSD_iSCSI_Disk______0050568cdac0010_________________
iqn.1998-01.com.vmware:ex5103-5b8ecbe2-00023d000006,iqn.2011-03.example.org.istgt:t1,t,1-t10.FreeBSD_iSCSI_Disk______0050568cdac0010_________________
Runtime Name: vmhba33:C1:T1:L0
Device: t10.FreeBSD_iSCSI_Disk______0050568cdac0010_________________
Device Display Name: FreeBSD iSCSI Disk (t10.FreeBSD_iSCSI_Disk______0050568cdac0010_________________)
Group State: active
Array Priority: 0
Storage Array Type Path Config: SATP VMW_SATP_DEFAULT_AA does not support path configuration.
Path Selection Policy Path Config: {current: yes; preferred: yes}
iqn.1998-01.com.vmware:ex5103-5b8ecbe2-00023d000003,iqn.2011-03.example.org.istgt:t1,t,1-t10.FreeBSD_iSCSI_Disk______0050568cdac0010_________________
Runtime Name: vmhba33:C0:T1:L0
Device: t10.FreeBSD_iSCSI_Disk______0050568cdac0010_________________
Device Display Name: FreeBSD iSCSI Disk (t10.FreeBSD_iSCSI_Disk______0050568cdac0010_________________)
Group State: active
Array Priority: 0
Storage Array Type Path Config: SATP VMW_SATP_DEFAULT_AA does not support path configuration.
Path Selection Policy Path Config: {current: no; preferred: no}
Next we'll add claim rules for the paths. Display the current rule list:
~ # esxcli storage core claimrule list
Rule Class Rule Class Type Plugin Matches
---------- ----- ------- --------- --------- ---------------------------------
MP 0 runtime transport NMP transport=usb
MP 1 runtime transport NMP transport=sata
MP 2 runtime transport NMP transport=ide
MP 3 runtime transport NMP transport=block
MP 4 runtime transport NMP transport=unknown
MP 101 runtime vendor MASK_PATH vendor=DELL model=Universal Xport
MP 101 file vendor MASK_PATH vendor=DELL model=Universal Xport
MP 65535 runtime vendor NMP vendor=* model=*
Add one rule for each path to be filtered, using the runtime name information (adapter, channel, target and LUN):
~ # esxcli storage core claimrule add -r 200 -t location -A vmhba33 -C 0 -T 1 -L 0 -P MASK_PATH
~ # esxcli storage core claimrule add -r 201 -t location -A vmhba33 -C 1 -T 1 -L 0 -P MASK_PATH
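Location-based rules are tied to the current runtime names. As an alternative sketch (assuming the Vendor and Model strings reported by the device listing above, FreeBSD and iSCSI Disk), a single rule can match by vendor and model instead:

~ # esxcli storage core claimrule add -r 202 -t vendor -V FreeBSD -M "iSCSI Disk" -P MASK_PATH

Note that a vendor rule masks every device matching that vendor/model pair, so use it only when that is the intent.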
Display the rules:
~ # esxcli storage core claimrule list
Rule Class Rule Class Type Plugin Matches
---------- ----- ------- --------- --------- ----------------------------------------
MP 0 runtime transport NMP transport=usb
MP 1 runtime transport NMP transport=sata
MP 2 runtime transport NMP transport=ide
MP 3 runtime transport NMP transport=block
MP 4 runtime transport NMP transport=unknown
MP 101 runtime vendor MASK_PATH vendor=DELL model=Universal Xport
MP 101 file vendor MASK_PATH vendor=DELL model=Universal Xport
MP 200 file location MASK_PATH adapter=vmhba33 channel=0 target=1 lun=0
MP 201 file location MASK_PATH adapter=vmhba33 channel=1 target=1 lun=0
MP 65535 runtime vendor NMP vendor=* model=*
As you can see in the Class column, the new rules exist only in the configuration file (/etc/vmware/esx.conf), so we need to load them:
~ # esxcli storage core claimrule load
~ # esxcli storage core claimrule list | grep vmhba33
MP 200 runtime location MASK_PATH adapter=vmhba33 channel=0 target=1 lun=0
MP 200 file location MASK_PATH adapter=vmhba33 channel=0 target=1 lun=0
MP 201 runtime location MASK_PATH adapter=vmhba33 channel=1 target=1 lun=0
MP 201 file location MASK_PATH adapter=vmhba33 channel=1 target=1 lun=0
Reclaim the device (this unclaims all its paths and lets the loaded claim rules claim them) and then run the claim rules:
~ # esxcli storage core claiming reclaim -d t10.FreeBSD_iSCSI_Disk______0050568cdac0010_________________
~ # esxcli storage core claimrule run
Now the device is no longer visible to the host. A simple test is to try to add the datastore in vSphere Client (Configuration - Hardware - Storage - Add Storage) and compare the availability of the device on a masked and an unmasked ESXi host.
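The same check can be done from the CLI: once the paths are claimed by MASK_PATH, the device listing for the masked device should return an error instead of the device details:

~ # esxcli storage core device list -d t10.FreeBSD_iSCSI_Disk______0050568cdac0010_________________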
UNMASKING
The procedure is straightforward: delete the claim rules from the file, reload the claim rules, reclaim the paths and run the claim rules.
List the existing rules:
~ # esxcli storage core claimrule list | grep vmhba33
MP 200 runtime location MASK_PATH adapter=vmhba33 channel=1 target=1 lun=0
MP 200 file location MASK_PATH adapter=vmhba33 channel=1 target=1 lun=0
MP 201 runtime location MASK_PATH adapter=vmhba33 channel=0 target=1 lun=0
MP 201 file location MASK_PATH adapter=vmhba33 channel=0 target=1 lun=0
Remove the rules and check:
~ # esxcli storage core claimrule remove -r 200
~ # esxcli storage core claimrule remove -r 201
~ # esxcli storage core claimrule list | grep vmhba33
MP 200 runtime location MASK_PATH adapter=vmhba33 channel=1 target=1 lun=0
MP 201 runtime location MASK_PATH adapter=vmhba33 channel=0 target=1 lun=0
Reload the claim rules from the file:
~ # esxcli storage core claimrule load
Reclaim the device:
~ # esxcli storage core claiming reclaim -d t10.FreeBSD_iSCSI_Disk______0050568cdac0010_________________
~ # esxcli storage core claimrule run
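If the paths do not reappear right away, rescanning the adapter may help (vmhba33 is the iSCSI adapter from the listings above):

~ # esxcli storage core adapter rescan -A vmhba33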
Check that the device is available and that all paths are up:
~ # esxcli storage core device list -d t10.FreeBSD_iSCSI_Disk______0050568cdac0010_________________
~ # esxcli storage nmp path list -d t10.FreeBSD_iSCSI_Disk______0050568cdac0010_________________
There is also VMware KB 1009449, which presents the masking procedure for both ESXi 4.x and 5.0.