Bug 1686708 - the single path address shown in "multipath -ll" for NVMe devices is not consistent with SCSI devices
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: device-mapper-multipath
Version: 7.6
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: low
Target Milestone: rc
Target Release: 7.8
Assignee: Ben Marzinski
QA Contact: Marco Patalano
URL:
Whiteboard:
Duplicates: 1719563
Depends On:
Blocks: 1689420 1729245
 
Reported: 2019-03-08 05:06 UTC by heyi
Modified: 2021-09-03 12:06 UTC
CC List: 15 users

Fixed In Version: device-mapper-multipath-0.4.9-128.el7
Doc Type: Bug Fix
Doc Text:
Cause: Multipath's listing output for NVMe devices didn't include some NVMe device information.
Consequence: The listing output was confusing, and inconsistent with the information format for SCSI devices.
Fix: The multipath listing output now includes the controller ID where SCSI devices listed the channel ID, and the namespace ID where SCSI devices listed the LUN.
Result: The multipath listing output for NVMe devices now includes more information, in a way consistent with the SCSI device format.
Clone Of:
Environment:
Last Closed: 2020-03-31 19:47:09 UTC
Target Upstream Version:
Embargoed:




Links:
Red Hat Product Errata RHBA-2020:1066 (last updated 2020-03-31 19:47:31 UTC)

Description heyi 2019-03-08 05:06:08 UTC
Version-Release number of selected component (if applicable):

RHEL7.6
QLE2742 FW:v8.08.204 DVR:v10.01.00.33.07.6-k
EMC PowerMax 2000 supporting NVMe over FC 

Problem Description:

1. See the multipath output for the NVMe device below:
[root@e2e-4-10040 ~]# multipath -ll
mpathmm (eui.600009700bcbb77230e700da0000023d) dm-4 NVME,EMC PowerMax_2000
size=25G features='0' hwhandler='0' wp=rw
`-+- policy='service-time 0' prio=1 status=active
  |- 0:0:1:0 nvme0n1 259:0  active ready running
  `- 1:0:1:0 nvme1n1 259:6  active ready running

mpathb (INTEL_SSDSC2BX400G4R_BTHC638207XJ400VGN) dm-2 ATA     ,INTEL SSDSC2BX40
size=373G features='0' hwhandler='0' wp=rw
`-+- policy='service-time 0' prio=1 status=active
  `- 0:0:1:0 sdb     8:16   active ready running


    mpathmm is an NVMe device, yet each of its paths still uses a SCSI-style address, "0:0:1:0" and "1:0:1:0", where the format is "host:channel:id:lun". In our command output, the namespace ID appears to be placed in the "id" field, and the "lun" is always zero. In my understanding, if the address is to have a consistent meaning across SCSI and NVMe, the "id" value should be related to the storage controller ports and the "lun" should be the NVMe namespace ID.

    In short, I believe the nsid should go into the "lun" field of the path address, and the "id" should be determined by the order of the connected target ports.

    What do you think?
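
For reference, both values come straight from sysfs. A minimal sketch, assuming a controller named nvme0 with namespace nvme0n1 (hypothetical names; adjust to your configuration):

# cat /sys/class/nvme/nvme0/cntlid
(prints the controller ID, cntlid)
# cat /sys/class/nvme/nvme0/nvme0n1/nsid
(prints the namespace ID, nsid)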

Comment 2 berthiaume_wayne@emc.com 2019-06-05 00:50:20 UTC
Any updates to this BZ?

Regards,
Wayne.

Comment 3 heyi 2019-06-11 05:01:14 UTC
Will RHEL fix this?

Comment 4 Ben Marzinski 2019-08-12 21:08:53 UTC
I've modified the RHEL 7 output to match the RHEL 8 and upstream output for device-mapper-controlled NVMe devices. In the HCIL format, the host and id values remain as they have been. However, the channel value is now equal to the controller ID and the lun value is now equal to the namespace ID.
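
Illustratively, a path line under the new scheme looks like the following (comment 7 below shows real verified output):

  |- 0:50624:1:1 nvme0n1 259:0 active ready running

where "0" is the host, "50624" is the controller's cntlid, "1" is the id, and the trailing "1" is the namespace's nsid.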

Comment 7 Marco Patalano 2020-01-20 18:38:10 UTC
Verified with device-mapper-multipath-0.4.9-128.el7:

# multipath -ll
3600a098038304267573f4d37784f6849 dm-2 NETAPP  ,LUN C-Mode      
size=15G features='4 queue_if_no_path pg_init_retries 50 retain_attached_hw_handle' hwhandler='1 alua' wp=rw
`-+- policy='service-time 0' prio=50 status=active
  `- 1:0:0:0     sdc     8:32  active ready running
mpatha (uuid.d1696189-324a-4f5c-a669-71cf60a32269) dm-4 NVME,NetApp ONTAP Controller                 
size=100G features='1 queue_if_no_path' hwhandler='0' wp=rw
|-+- policy='service-time 0' prio=50 status=active
| |- 0:50624:1:1 nvme0n1 259:0 active ready running
| `- 4:50688:1:1 nvme4n1 259:3 active ready running
`-+- policy='service-time 0' prio=1 status=enabled
  |- 2:35777:1:1 nvme2n1 259:1 active ready running
  `- 3:35841:1:1 nvme3n1 259:2 active ready running


# cat /sys/devices/virtual/nvme-fabrics/ctl/nvme0/cntlid 
50624

# cat /sys/devices/virtual/nvme-fabrics/ctl/nvme0/nvme0n1/nsid 
1
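
To cross-check every path at once, the fabrics sysfs tree can be walked with a short loop; a minimal sketch, assuming the same /sys/devices/virtual/nvme-fabrics layout as above:

# for c in /sys/devices/virtual/nvme-fabrics/ctl/nvme*; do
>     echo "$(basename "$c"): cntlid=$(cat "$c"/cntlid)"
>     for ns in "$c"/nvme*n*; do
>         [ -e "$ns"/nsid ] && echo "  $(basename "$ns"): nsid=$(cat "$ns"/nsid)"
>     done
> done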

Comment 9 errata-xmlrpc 2020-03-31 19:47:09 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:1066

Comment 10 Ben Marzinski 2020-04-08 16:11:17 UTC
*** Bug 1719563 has been marked as a duplicate of this bug. ***

