Cause: Multipath's listing output for NVMe devices omitted some NVMe device information.
Consequence: The listing output was confusing and inconsistent with the information format used for SCSI devices.
Fix: The multipath listing output now shows the controller id where SCSI devices list the channel id, and the namespace id where SCSI devices list the LUN.
Result: The multipath listing output for NVMe devices now includes more information, in a format consistent with SCSI devices.
Version-Release number of selected component (if applicable):
RHEL7.6
QLE2742 FW:v8.08.204 DVR:v10.01.00.33.07.6-k
EMC PowerMax 2000 supporting NVMe over FC
Problem Description:
1. See the multipath output for an NVMe device below:
[root@e2e-4-10040 ~]# multipath -ll
mpathmm (eui.600009700bcbb77230e700da0000023d) dm-4 NVME,EMC PowerMax_2000
size=25G features='0' hwhandler='0' wp=rw
`-+- policy='service-time 0' prio=1 status=active
|- 0:0:1:0 nvme0n1 259:0 active ready running
`- 1:0:1:0 nvme1n1 259:6 active ready running
mpathb (INTEL_SSDSC2BX400G4R_BTHC638207XJ400VGN) dm-2 ATA ,INTEL SSDSC2BX40
size=373G features='0' hwhandler='0' wp=rw
`-+- policy='service-time 0' prio=1 status=active
`- 0:0:1:0 sdb 8:16 active ready running
mpathmm is an NVMe device, yet each of its paths still uses a SCSI-style address, "0:0:1:0" and "1:0:1:0". As you know, the address format is "host:channel:id:lun". In this command output, the namespace id value appears to be placed in "id", and "lun" is always zero. In my understanding, the "id" value should be related to the storage controller ports, and "lun" should be the NVMe namespace id, if we want the address fields to mean the same thing across SCSI and NVMe.
In short, I believe the nsid should go into the "lun" field of the path address, and "id" should be determined by the order of the connected target ports.
What do you think?
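As a rough illustration of where the two values the reporter cares about live on a Linux host: the kernel exposes the controller id and namespace id as the standard sysfs attributes `cntlid` and `nsid`. The sketch below reads them; the controller/namespace names and the sysfs root are parameters, and the exact sysfs layout can vary by kernel version, so treat this as a hedged example rather than how multipath itself gathers the data.

```python
import os

def nvme_path_info(ctrl_name, ns_name, sysfs="/sys"):
    """Return (controller id, namespace id) for an NVMe path.

    Reads the standard Linux sysfs attributes:
      /sys/class/nvme/<ctrl>/cntlid   - NVMe controller id
      /sys/block/<ns>/nsid            - NVMe namespace id
    Attribute locations are assumptions based on common kernel layouts.
    """
    with open(os.path.join(sysfs, "class", "nvme", ctrl_name, "cntlid")) as f:
        cntlid = int(f.read())
    with open(os.path.join(sysfs, "block", ns_name, "nsid")) as f:
        nsid = int(f.read())
    return cntlid, nsid
```

For example, `nvme_path_info("nvme0", "nvme0n1")` on a live system would yield the values the reporter wants reflected in the path address.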
Comment 2 - berthiaume_wayne@emc.com - 2019-06-05 00:50:20 UTC
I've modified the rhel7 output to match the rhel8 and upstream output for device-mapper controlled nvme devices. In HCIL format, the host and id values remain as they have been. However, the channel value is now equal to the controller id and the lun value is now equal to the namespace id.
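The field mapping described in this comment can be sketched as two small formatting functions: the pre-fix scheme, where the LUN slot was always zero, versus the post-fix scheme, where channel carries the controller id and LUN carries the namespace id. This is an illustration of the mapping only, not multipath's actual implementation; the function names and parameters are hypothetical.

```python
def hcil_before_fix(host, target_id):
    # Pre-fix: channel was 0 and the lun slot was always 0,
    # so e.g. the first path in this report printed as "0:0:1:0".
    return f"{host}:0:{target_id}:0"

def hcil_after_fix(host, cntlid, target_id, nsid):
    # Post-fix: channel = NVMe controller id, lun = NVMe namespace id;
    # host and id keep their previous values.
    return f"{host}:{cntlid}:{target_id}:{nsid}"
```

With controller id 1 and namespace id 1, a path that previously printed as "0:0:1:0" would now print as "0:1:1:1", carrying both NVMe identifiers in the address.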
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.
For information on the advisory, and where to find the updated
files, follow the link below.
If the solution does not work for you, open a new bug report.
https://access.redhat.com/errata/RHBA-2020:1066