Bug 2192922 - NVMe controllers are not reconnecting for 9 minutes or more during an initiator outage test
Summary: NVMe controllers are not reconnecting for 9 minutes or more during an initiator outage test
Keywords:
Status: CLOSED MIGRATED
Alias: None
Product: Red Hat Enterprise Linux 8
Classification: Red Hat
Component: nvme-cli
Version: 8.6
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: high
Target Milestone: rc
Target Release: ---
Assignee: Maurizio Lombardi
QA Contact: Zhang Yi
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2023-05-03 14:15 UTC by pely
Modified: 2024-01-27 04:25 UTC
CC List: 8 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2023-09-23 13:02:41 UTC
Type: Bug
Target Upstream Version:
Embargoed:
pm-rhel: mirror+


Attachments
udev logs (2.97 KB, text/plain)
2023-06-16 15:23 UTC, pely


Links
System ID                               Status     Last Updated
Red Hat Issue Tracker RHEL-8063         Migrated   2023-09-23 13:02:37 UTC
Red Hat Issue Tracker RHELPLAN-156312   None       2023-05-03 14:16:04 UTC

Description pely 2023-05-03 14:15:42 UTC
Description of problem:
After disrupting the initiator link for two minutes, the NVMe controllers and namespaces are not recovered in a timely manner, even though the lpfc driver successfully recovers the FC logins and re-registers the remote ports. Recovery of the NVMe controllers and namespaces can take anywhere from 9 minutes to 61 minutes.

Version-Release number of selected component (if applicable):
Issue is seen on:

Linux dhcp-10-231-139-179 4.18.0-372.9.1.el8.x86_64 #1 SMP Fri Apr 15 22:12:19 EDT 2022 x86_64 x86_64 x86_64 GNU/Linux

[root@dhcp-10-231-139-179 ~]# cat /etc/os-release 
NAME="Red Hat Enterprise Linux"
VERSION="8.6 (Ootpa)"
ID="rhel"
ID_LIKE="fedora"
VERSION_ID="8.6"
PLATFORM_ID="platform:el8"
PRETTY_NAME="Red Hat Enterprise Linux 8.6 (Ootpa)"



How reproducible:
Always. Time to reproduce is about 10 minutes.


Steps to Reproduce:
1. Map a few SCSI and NVMe namespaces from a target to both HBA ports. Enable
multipath and NVMe ANA to detect the multipath devices. The IBM9500 target is in use for ECD, but any target capable of FCP and NVMe will do.

Zone config:

Zone1: HBA Port0 + SCSI Tgt Port0 + NVMe Tgt Port0
Zone2: HBA Port1 + SCSI Tgt Port0 + NVMe Tgt Port0

[root@dhcp-10-231-133-36 ~]# nvme list-subsys
nvme-subsys0 - NQN=nqn.1986-03.com.ibm:nvme:2145.00000204E0607C1E
\
 +- nvme0 fc traddr=nn-0x5005076813003e0f:pn-0x50050768131b3e0f host_traddr=nn-0x200000109bf67eba:pn-0x100000109bf67eba live
 +- nvme1 fc traddr=nn-0x5005076813003e0f:pn-0x50050768131b3e0f host_traddr=nn-0x200000109bf67ebb:pn-0x100000109bf67ebb live


2. Do a port shut on the Cisco 64G switch, then re-enable the port after a sleep of 120
seconds. Again, this is not a switch issue, so any switch vendor should be OK.

3. The SCSI LUNs are detected again, but the NVMe controllers are not, even after
waiting more than ~10 minutes (a timing sketch follows after these steps).
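
For reference, the timing sketch mentioned in step 3: a minimal loop that reports how long the paths take to come back after the 120-second outage. EXPECTED_LIVE and the 5-second poll interval are assumptions; adjust them to the zoning above.

#!/bin/bash
# Poll `nvme list-subsys` after the link outage and report how long the
# NVMe paths take to come back "live". Sketch only.
EXPECTED_LIVE=2            # two paths in the zoning above (assumption)
start=$(date +%s)
while [ "$(nvme list-subsys 2>/dev/null | grep -c ' live')" -lt "$EXPECTED_LIVE" ]; do
    sleep 5
done
echo "All $EXPECTED_LIVE NVMe paths live after $(( $(date +%s) - start )) seconds"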

Actual results:
NVME paths do not show up for long periods of time.

Expected results:
SCSI and NVME pathing should recover in a reasonable amount of time.

Additional info:

Comment 1 Ewan D. Milne 2023-05-15 18:48:21 UTC
Run udevadm monitor --property and see how long it takes for the udev event
to be generated for the rport discovery.  My guess is any delay is in the
systemd / udevd processing of the event and issuing of the nvme-cli command
but it would be worthwhile to check to make sure it isn't some delay in
nvme-cli itself.
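
A concrete way to run the capture suggested above; the --subsystem-match filters are only there to cut noise, and plain `udevadm monitor --property` works just as well:

# Print kernel uevents (KERNEL) and the corresponding post-rule-processing
# events (UDEV) with their properties and monotonic timestamps, limited to
# the fc and nvme subsystems involved here.
udevadm monitor --kernel --udev --property \
    --subsystem-match=fc --subsystem-match=nvme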

Comment 2 pely 2023-06-16 15:22:59 UTC
(In reply to Ewan D. Milne from comment #1)
> Run udevadm monitor --property and see how long it takes for the udev event
> to be generated for the rport discovery.  My guess is any delay is in the
> systemd / udevd processing of the event and issuing of the nvme-cli command
> but it would be worthwhile to check to make sure it isn't some delay in
> nvme-cli itself.

Here is the udev log data from the last capture.

Yes, UDEV just stopped responding.
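
A quick way to confirm a stalled queue while it is happening (a sketch; the 5-second timeout is arbitrary):

# Exits non-zero if the udev event queue does not drain within 5 seconds,
# i.e. udevd is still sitting on queued events.
udevadm settle --timeout=5 || echo "udev queue still busy"

# Turn up udevd logging and watch what its workers are doing.
udevadm control --log-priority=debug
journalctl -u systemd-udevd -f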

KERNEL event to remove the controller, at timestamp 1088:

KERNEL[1088.664920] remove   /devices/virtual/nvme-fabrics/ctl/nvme1 (nvme)
ACTION=remove
DEVNAME=/dev/nvme1
DEVPATH=/devices/virtual/nvme-fabrics/ctl/nvme1
MAJOR=241
MINOR=1
NVME_HOST_IFACE=none
NVME_HOST_TRADDR=nn-0x200000109b214b9a:pn-0x100000109b214b9a
NVME_TRADDR=nn-0x5005076813003e0f:pn-0x50050768131b3e0f
NVME_TRSVCID=none
NVME_TRTYPE=fc
SEQNUM=20680
SUBSYSTEM=nvme

KERNEL event to change fc_udev_device, at timestamp 1194 (1194 - 1088 = 106 seconds, about 1.8 minutes, after the remove):
KERNEL[1194.523462] change   /devices/virtual/fc/fc_udev_device (fc)
ACTION=change
DEVPATH=/devices/virtual/fc/fc_udev_device
FC_EVENT=nvmediscovery
NVMEFC_HOST_TRADDR=nn-0x200000109b214b9a:pn-0x100000109b214b9a
NVMEFC_TRADDR=nn-0x5005076813003e0f:pn-0x50050768131b3e0f
SEQNUM=29439
SUBSYSTEM=fc
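
For context on what is supposed to consume this event: nvme-cli ships a udev rule (70-nvmf-autoconnect.rules on RHEL 8) that matches FC_EVENT=nvmediscovery and kicks off nvme connect-all, normally handing the work off to systemd rather than running it in the udev worker. The rule below is a simplified paraphrase of its general shape, not the verbatim shipped text; check /usr/lib/udev/rules.d/ on the host for the exact rule.

# Simplified paraphrase of the FC autoconnect rule shipped with nvme-cli.
# The FC transport's "change" uevent on fc_udev_device carries the address
# pair, and the rule re-establishes the controller connections from it.
ACTION=="change", SUBSYSTEM=="fc", ENV{FC_EVENT}=="nvmediscovery", \
  RUN+="/usr/sbin/nvme connect-all --transport=fc --host-traddr=$env{NVMEFC_HOST_TRADDR} --traddr=$env{NVMEFC_TRADDR}"

So the 576-second gap measured below means the reconnect does not even start until almost ten minutes after the kernel announced the rediscovered rport, which lines up with the observed recovery times.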

Now user space UDEV finally responds to the KERNEL remove event.  1196 - 1088 =
108 seconds
UDEV  [1196.639621] remove   /devices/virtual/nvme-fabrics/ctl/nvme1 (nvme)
ACTION=remove
DEVNAME=/dev/nvme1
DEVPATH=/devices/virtual/nvme-fabrics/ctl/nvme1
MAJOR=241
MINOR=1
NVME_HOST_IFACE=none
NVME_HOST_TRADDR=nn-0x200000109b214b9a:pn-0x100000109b214b9a
NVME_TRADDR=nn-0x5005076813003e0f:pn-0x50050768131b3e0f
NVME_TRSVCID=none
NVME_TRTYPE=fc
SEQNUM=20680
SUBSYSTEM=nvme
USEC_INITIALIZED=1196616665

Now user space UDEV finally responds to the KERNEL change event.  1770 - 1194 =
576 seconds

UDEV  [1770.802920] change   /devices/virtual/fc/fc_udev_device (fc)
ACTION=change
DEVPATH=/devices/virtual/fc/fc_udev_device
FC_EVENT=nvmediscovery
NVMEFC_HOST_TRADDR=nn-0x200000109b214b9a:pn-0x100000109b214b9a
NVMEFC_TRADDR=nn-0x5005076813003e0f:pn-0x50050768131b3e0f
SEQNUM=29439
SUBSYSTEM=fc
USEC_INITIALIZED=1770779672
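
The 108-second and 576-second gaps above can be computed mechanically from a capture. A small awk sketch, assuming the `udevadm monitor --kernel --udev --property` output format shown above, saved to a hypothetical capture.txt:

# Pair KERNEL and UDEV records by SEQNUM and print the processing delay.
awk '
  /^(KERNEL|UDEV) *\[/ {
    src = substr($0, 1, index($0, "[") - 1); gsub(/ /, "", src)
    ts  = substr($0, index($0, "[") + 1);    sub(/\].*/, "", ts)
  }
  /^SEQNUM=/ {
    seq = substr($0, 8)
    if (src == "KERNEL") kts[seq] = ts
    else if (seq in kts)
      printf "SEQNUM %s: %.1f s from kernel event to udev event\n", seq, ts - kts[seq]
  }
' capture.txt

For SEQNUM 29439 above this prints roughly 576.3 s, matching the hand calculation.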

Comment 3 pely 2023-06-16 15:23:34 UTC
Created attachment 1971195 [details]
udev logs

Comment 4 pely 2023-08-31 13:04:11 UTC
Maurizio, Ewan,

Given comment 2 and the udev logs, is there any path forward? Does Red Hat want to investigate this UDEV delay? Is there a way forward in RHEL 8.6?

If not, I would rather close it as WONTFIX and also close the Broadcom bug. RHEL 8.9 is about to go GA, and the UDEV solution for NVMe events hasn't
exactly been robust.

Comment 5 RHEL Program Management 2023-09-23 13:01:56 UTC
Issue migration from Bugzilla to Jira is in process at this time. This will be the last message in Jira copied from the Bugzilla bug.

Comment 6 RHEL Program Management 2023-09-23 13:02:41 UTC
This BZ has been automatically migrated to the issues.redhat.com Red Hat Issue Tracker. All future work related to this report will be managed there.

Due to differences in account names between systems, some fields were not replicated.  Be sure to add yourself to Jira issue's "Watchers" field to continue receiving updates and add others to the "Need Info From" field to continue requesting information.

To find the migrated issue, look in the "Links" section for a direct link to the new issue location. The issue key will have an icon of 2 footprints next to it, and begin with "RHEL-" followed by an integer.  You can also find this issue by visiting https://issues.redhat.com/issues/?jql= and searching the "Bugzilla Bug" field for this BZ's number, e.g. a search like:

"Bugzilla Bug" = 1234567

In the event you have trouble locating or viewing this issue, you can file an issue by sending mail to rh-issues@redhat.com. You can also visit https://access.redhat.com/articles/7032570 for general account information.

Comment 7 Red Hat Bugzilla 2024-01-27 04:25:32 UTC
The needinfo request[s] on this closed bug have been removed as they have been unresolved for 120 days

