Version-Release number of selected component (if applicable):
RHEL7.6
QLE2742 FW:v8.08.204 DVR:v10.01.00.33.07.6-k
EMC PowerMax 2000 supporting NVMe over FC

Problem Description:

1. See the multipath output of the NVMe device below:

[root@e2e-4-10040 ~]# multipath -ll
mpathmm (eui.600009700bcbb77230e700da0000023d) dm-4 NVME,EMC PowerMax_2000
size=25G features='0' hwhandler='0' wp=rw
`-+- policy='service-time 0' prio=1 status=active
  |- 0:0:1:0 nvme0n1 259:0 active ready running
  `- 1:0:1:0 nvme1n1 259:6 active ready running
mpathb (INTEL_SSDSC2BX400G4R_BTHC638207XJ400VGN) dm-2 ATA ,INTEL SSDSC2BX40
size=373G features='0' hwhandler='0' wp=rw
`-+- policy='service-time 0' prio=1 status=active
  `- 0:0:1:0 sdb 8:16 active ready running

mpathmm is an NVMe device, yet each of its paths still uses a SCSI-style address: "0:0:1:0" and "1:0:1:0". As you know, that address format is "host:channel:id:lun". In this output, the namespace id value appears to be placed in the "id" field, and the "lun" field is always zero. In my understanding, the "id" value should be related to the storage controller ports and "lun" should carry the NVMe namespace id, so that the address meaning stays consistent across SCSI and NVMe. In short, I believe the nsid should go into the "lun" field of the path address, and "id" should be determined by the order of the connected target ports. What do you think?
Any updates to this BZ? Regards, Wayne.
Will RHEL fix this?
I've modified the rhel7 output to match the rhel8 and upstream output for device-mapper controlled nvme devices. In HCIL format, the host and id values remain as they have been. However, the channel value is now equal to the controller id and the lun value is now equal to the namespace id.
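The mapping described above can be sketched as a small helper. This is an illustrative sketch only, not multipath-tools code; the function name is hypothetical, and the field order follows the comment above (host, channel = controller id, id, lun = namespace id):

```python
def nvme_hcil(host, cntlid, target_id, nsid):
    """Format the HCIL-style path address for an NVMe device.

    Per the fixed rhel7 output: the channel field carries the
    controller id (cntlid) and the lun field carries the namespace
    id (nsid); the host and id fields keep their prior meanings.
    (Hypothetical helper for illustration.)
    """
    return f"{host}:{cntlid}:{target_id}:{nsid}"

# Reproduces a path address from the verified output below:
# nvme0n1 on host 0 with cntlid 50624, id 1, nsid 1.
print(nvme_hcil(0, 50624, 1, 1))  # 0:50624:1:1
```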
Verified with device-mapper-multipath-0.4.9-128.el7:

# multipath -ll
3600a098038304267573f4d37784f6849 dm-2 NETAPP ,LUN C-Mode
size=15G features='4 queue_if_no_path pg_init_retries 50 retain_attached_hw_handle' hwhandler='1 alua' wp=rw
`-+- policy='service-time 0' prio=50 status=active
  `- 1:0:0:0 sdc 8:32 active ready running
mpatha (uuid.d1696189-324a-4f5c-a669-71cf60a32269) dm-4 NVME,NetApp ONTAP Controller
size=100G features='1 queue_if_no_path' hwhandler='0' wp=rw
|-+- policy='service-time 0' prio=50 status=active
| |- 0:50624:1:1 nvme0n1 259:0 active ready running
| `- 4:50688:1:1 nvme4n1 259:3 active ready running
`-+- policy='service-time 0' prio=1 status=enabled
  |- 2:35777:1:1 nvme2n1 259:1 active ready running
  `- 3:35841:1:1 nvme3n1 259:2 active ready running

# cat /sys/devices/virtual/nvme-fabrics/ctl/nvme0/cntlid
50624
# cat /sys/devices/virtual/nvme-fabrics/ctl/nvme0/nvme0n1/nsid
1
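The two sysfs reads above can be wrapped in a small sketch for checking a controller/namespace pair programmatically. The function name is hypothetical; the sysfs layout (cntlid under the controller directory, nsid under the namespace directory) is taken from the cat commands in the verification output:

```python
from pathlib import Path

def read_nvme_ids(ctrl="nvme0", ns="nvme0n1",
                  base="/sys/devices/virtual/nvme-fabrics/ctl"):
    """Return (cntlid, nsid) for an NVMe-fabrics controller and
    namespace, reading the same sysfs files shown above.
    (Hypothetical helper; run on a host with fabrics devices.)"""
    root = Path(base) / ctrl
    cntlid = int((root / "cntlid").read_text())
    nsid = int((root / ns / "nsid").read_text())
    return cntlid, nsid
```

On the verified host this would return (50624, 1) for nvme0/nvme0n1, matching the "0:50624:1:1" path address in the multipath output.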
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2020:1066
*** Bug 1719563 has been marked as a duplicate of this bug. ***