Note: This bug is displayed in read-only format because the product is no longer active in Red Hat Bugzilla.
RHEL Engineering is moving the tracking of its product development work on RHEL 6 through RHEL 9 to Red Hat Jira (issues.redhat.com). If you're a Red Hat customer, please continue to file support cases via the Red Hat customer portal. If you're not, please head to the "RHEL project" in Red Hat Jira and file new tickets there.

Individual Bugzilla bugs in the statuses "NEW", "ASSIGNED", and "POST" are being migrated throughout September 2023. Bugs of Red Hat partners with an assigned Engineering Partner Manager (EPM) are migrated in late September on pre-agreed dates. Bugs against the components "kernel", "kernel-rt", and "kpatch" are migrated only if still in "NEW" or "ASSIGNED".

If you cannot log in to RH Jira, please consult article #7032570. Failing that, please send an e-mail to the RH Jira admins at rh-issues@redhat.com to troubleshoot your issue as a user management inquiry; the e-mail creates a ServiceNow ticket with Red Hat.

Individual Bugzilla bugs that are migrated will be moved to status "CLOSED" with resolution "MIGRATED" and have "MigratedToJIRA" added to "Keywords". The link to the successor Jira issue will be found under "Links", have a little "two-footprint" icon next to it, and direct you to the "RHEL project" in Red Hat Jira (issue links are of the form "https://issues.redhat.com/browse/RHEL-XXXX", where each "X" is a digit). The same link will be available in a blue banner at the top of the page informing you that the bug has been migrated.

Bug 2118945

Summary: NVMe connect command hung and Call traces
Product: Red Hat Enterprise Linux 9 Reporter: Saurav Kashyap <saurav.kashyap>
Component: libnvme    Assignee: Maurizio Lombardi <mlombard>
Status: CLOSED MIGRATED QA Contact: Zhang Yi <yizhan>
Severity: urgent Docs Contact:
Priority: high    
Version: 9.0    CC: aeasi, jmeneghi
Target Milestone: rc    Keywords: MigratedToJIRA, Triaged
Target Release: ---    Flags: pm-rhel: mirror+
Hardware: x86_64   
OS: Linux   
Whiteboard: NVMe_P0
Fixed In Version:    Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of: Environment:
Last Closed: 2023-09-23 12:56:19 UTC Type: Bug
Regression: --- Mount Type: ---
Documentation: --- CRM:
Verified Versions: Category: ---
oVirt Team: --- RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: --- Target Upstream Version:
Embargoed:
Attachments:
Description Flags
Analysis none

Description Saurav Kashyap 2022-08-17 07:50:59 UTC
Description of problem: NVMe connect command hung and Call traces


Version-Release number of selected component (if applicable):
5.14.0-70.13.1.el9_0.x86_64


How reproducible:
Easily



Steps to Reproduce:
1) Configure the system for HPE Millbury testing.
2) RHEL 9 BFS system on SN1700Q adapter.
3) Present NVMe namespace
4) Reboot the server or Initiator Port toggle
5) Expected: NVMe namespaces are discovered without any call trace
6) Actual: NVMe namespaces are not discovered; the connect command hangs and results in a call trace (a command-line sketch of this check follows below)
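A minimal sketch of the check behind steps 4-6, assuming nvme-cli is installed on the FC boot-from-SAN host (the commands are illustrative, not from the original report):

# Before the toggle, every path should report "live":
nvme list-subsys
# Reboot the server, or toggle the initiator port, then re-check discovery:
nvme list-subsys                               # bug: one path stuck in "connecting"
nvme list                                      # bug: namespace missing from the list
dmesg | grep -A20 "blocked for more than"      # bug: hung-task trace for "nvme"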

Actual results:
Connect command hangs

Expected results:
Connect should not hang

--------------
[root@MillburyRegRH9 ~]# nvme list-subsys
nvme-subsys0 - NQN=nqn.2020-07.com.hpe:5553f61e-48a2-449b-a28c-759d4d938820
\
+- nvme0 fc traddr=nn-0x2ff70102ac0282dd:pn-0x22340102adf282dd host_traddr=nn-0x51402ec012c9cdef:pn-0x51402ec012c9cdee live
+- nvme1 fc traddr=nn-0x2ff70102ac0282dd:pn-0x22340102adf282dd host_traddr=nn-0x51402ec012c9ab5b:pn-0x51402ec012c9ab5a live
+- nvme2 fc traddr=nn-0x2ff70102ac0282dd:pn-0x23320102adf282dd host_traddr=nn-0x51402ec012c9ab5b:pn-0x51402ec012c9ab5a live
+- nvme3 fc traddr=nn-0x2ff70102ac0282dd:pn-0x23320102adf282dd host_traddr=nn-0x51402ec012c9cdef:pn-0x51402ec012c9cdee connecting
-----------------




Additional info:

Comment 1 Saurav Kashyap 2022-08-17 07:51:40 UTC
Call trace:

Aug 9 11:59:27 MillburyRegRH9 systemd[9932]: Finished Cleanup of User's Temporary Files and Directories.
Aug 9 11:59:41 MillburyRegRH9 kernel: INFO: task nvme:8970 blocked for more than 245 seconds.
Aug 9 11:59:41 MillburyRegRH9 kernel: Tainted: G OE --------- --- 5.14.0-70.13.1.el9_0.x86_64 #1
Aug 9 11:59:41 MillburyRegRH9 kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Aug 9 11:59:41 MillburyRegRH9 kernel: task:nvme state stack: 0 pid: 8970 ppid: 1 flags:0x00004000
Aug 9 11:59:41 MillburyRegRH9 kernel: Call Trace:
Aug 9 11:59:41 MillburyRegRH9 kernel: __schedule+0x203/0x560
Aug 9 11:59:41 MillburyRegRH9 kernel: schedule+0x43/0xb0
Aug 9 11:59:41 MillburyRegRH9 kernel: schedule_timeout+0x115/0x150
Aug 9 11:59:41 MillburyRegRH9 kernel: ? enqueue_task+0x48/0x140
Aug 9 11:59:41 MillburyRegRH9 kernel: ? __prepare_to_swait+0x4b/0x70
Aug 9 11:59:41 MillburyRegRH9 kernel: wait_for_completion+0x89/0xe0
Aug 9 11:59:41 MillburyRegRH9 kernel: __flush_work.isra.0+0x160/0x220
Aug 9 11:59:41 MillburyRegRH9 kernel: ? flush_workqueue_prep_pwqs+0x110/0x110
Aug 9 11:59:41 MillburyRegRH9 kernel: nvme_fc_init_ctrl+0x4f3/0x510 [nvme_fc]
Aug 9 11:59:41 MillburyRegRH9 kernel: nvme_fc_create_ctrl+0x1ac/0x250 [nvme_fc]
Aug 9 11:59:41 MillburyRegRH9 kernel: nvmf_create_ctrl+0x12c/0x220 [nvme_fabrics]
Aug 9 11:59:41 MillburyRegRH9 kernel: nvmf_dev_write+0x7d/0xd2 [nvme_fabrics]
Aug 9 11:59:41 MillburyRegRH9 kernel: vfs_write+0xb9/0x270
Aug 9 11:59:41 MillburyRegRH9 kernel: ksys_write+0x5f/0xe0
Aug 9 11:59:41 MillburyRegRH9 kernel: do_syscall_64+0x38/0x90
Aug 9 11:59:41 MillburyRegRH9 kernel: entry_SYSCALL_64_after_hwframe+0x44/0xae
Aug 9 11:59:41 MillburyRegRH9 kernel: RIP: 0033:0x7fe34c1ab127
Aug 9 11:59:41 MillburyRegRH9 kernel: RSP: 002b:00007ffed1eabc28 EFLAGS: 00000246 ORIG_RAX: 0000000000000001
Aug 9 11:59:41 MillburyRegRH9 kernel: RAX: ffffffffffffffda RBX: 0000000000000153 RCX: 00007fe34c1ab127
Aug 9 11:59:41 MillburyRegRH9 kernel: RDX: 0000000000000153 RSI: 00007ffed1eacd00 RDI: 0000000000000003
Aug 9 11:59:41 MillburyRegRH9 kernel: RBP: 000055649a4eea50 R08: 000000000000002b R09: 00007fe34c21d4e0
Aug 9 11:59:41 MillburyRegRH9 kernel: R10: 0000000000000000 R11: 0000000000000246 R12: 000055649855a160
Aug 9 11:59:41 MillburyRegRH9 kernel: R13: 000055649855a1bb R14: 0000000000000000 R15: 0000000000000001
Aug 9 11:59:41 MillburyRegRH9 kernel: INFO: task nvme:8979 blocked for more than 245 seconds.
Aug 9 11:59:41 MillburyRegRH9 kernel: Tainted: G OE --------- --- 5.14.0-70.13.1.el9_0.x86_64 #1
Aug 9 11:59:41 MillburyRegRH9 kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Aug 9 11:59:41 MillburyRegRH9 kernel: task:nvme state stack: 0 pid: 8979 ppid: 1 flags:0x00000000
Aug 9 11:59:41 MillburyRegRH9 kernel: Call Trace:
Aug 9 11:59:41 MillburyRegRH9 kernel: __schedule+0x203/0x560
Aug 9 11:59:41 MillburyRegRH9 kernel: schedule+0x43/0xb0
Aug 9 11:59:41 MillburyRegRH9 kernel: schedule_preempt_disabled+0x11/0x20
Aug 9 11:59:41 MillburyRegRH9 kernel: __mutex_lock.constprop.0+0x234/0x430
Aug 9 11:59:41 MillburyRegRH9 kernel: ? __fput+0xff/0x240
Aug 9 11:59:41 MillburyRegRH9 kernel: ? __check_object_size.part.0+0x11f/0x140
Aug 9 11:59:41 MillburyRegRH9 kernel: ? _copy_from_user+0x28/0x60
Aug 9 11:59:41 MillburyRegRH9 kernel: nvmf_dev_write+0x44/0xd2 [nvme_fabrics]
Aug 9 11:59:41 MillburyRegRH9 kernel: vfs_write+0xb9/0x270
Aug 9 11:59:41 MillburyRegRH9 kernel: ksys_write+0x5f/0xe0
Aug 9 11:59:41 MillburyRegRH9 kernel: do_syscall_64+0x38/0x90
Aug 9 11:59:41 MillburyRegRH9 kernel: entry_SYSCALL_64_after_hwframe+0x44/0xae
Aug 9 11:59:41 MillburyRegRH9 kernel: RIP: 0033:0x7f56e234f127
Aug 9 11:59:41 MillburyRegRH9 kernel: RSP: 002b:00007ffc79ecb9f8 EFLAGS: 00000246 ORIG_RAX: 0000000000000001
Aug 9 11:59:41 MillburyRegRH9 kernel: RAX: ffffffffffffffda RBX: 0000000000000153 RCX: 00007f56e234f127
Aug 9 11:59:41 MillburyRegRH9 kernel: RDX: 0000000000000153 RSI: 00007ffc79eccad0 RDI: 0000000000000003
Aug 9 11:59:41 MillburyRegRH9 kernel: RBP: 000056267f5afe90 R08: 000000000000002b R09: 00007f56e23c14e0
Aug 9 11:59:41 MillburyRegRH9 kernel: R10: 0000000000000000 R11: 0000000000000246 R12: 000056267e72b160
Aug 9 11:59:41 MillburyRegRH9 kernel: R13: 000056267e72b1bb R14: 0000000000000000 R15: 0000000000000001
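
Read together, the two traces suggest that the first nvme task (pid 8970) holds the lock taken in nvmf_dev_write() and waits in flush_work() inside nvme_fc_init_ctrl() for the FC connect work to complete, while the second nvme task (pid 8979) blocks acquiring that same lock at the top of nvmf_dev_write(). While the hang is in place, this state can be confirmed from userspace with standard tooling (a hedged sketch; the PIDs and controller name are the ones from the traces and list-subsys output above, run as root):

ps -eo pid,stat,wchan:30,cmd | awk 'NR==1 || $2 ~ /^D/'   # tasks in uninterruptible sleep
cat /proc/8970/stack                                      # kernel stack of the hung connect writer
cat /sys/class/nvme/nvme3/state                           # expected to remain "connecting"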

Comment 2 Saurav Kashyap 2022-08-17 08:20:52 UTC
I have shared the crash files at https://secureshare.marvell.com/w/f-48f7cf08-3045-4a28-ae84-844104798998
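
For anyone analyzing the shared vmcores, a typical crash-utility workflow (a sketch assuming the matching kernel-debuginfo for 5.14.0-70.13.1.el9_0.x86_64 is installed; file paths are illustrative):

crash /usr/lib/debug/lib/modules/5.14.0-70.13.1.el9_0.x86_64/vmlinux vmcore
crash> ps | grep UN                            # tasks in uninterruptible sleep
crash> bt 8970                                 # backtrace of the hung nvme connect task
crash> log | grep "blocked for more than"      # hung-task reports in the log buffer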

Comment 3 Saurav Kashyap 2022-08-17 08:21:14 UTC
Created attachment 1905901 [details]
Analysis

Comment 9 John Meneghini 2023-07-18 16:03:17 UTC
Saurav, is this problem reproducible with RHEL 9.2?

Comment 10 Saurav Kashyap 2023-08-01 04:43:24 UTC
Hi John,
It is not reproducible on RHEL 9.2. Even on 9.0, it is not easily reproducible. We can close this BZ for now and will reopen it if we figure out a consistent way to reproduce the issue.

Comment 11 RHEL Program Management 2023-09-23 12:54:45 UTC
Issue migration from Bugzilla to Jira is in progress at this time. This will be the last message in Jira copied from the Bugzilla bug.

Comment 12 RHEL Program Management 2023-09-23 12:56:19 UTC
This BZ has been automatically migrated to the issues.redhat.com Red Hat Issue Tracker. All future work related to this report will be managed there.

Due to differences in account names between systems, some fields were not replicated.  Be sure to add yourself to Jira issue's "Watchers" field to continue receiving updates and add others to the "Need Info From" field to continue requesting information.

To find the migrated issue, look in the "Links" section for a direct link to the new issue location. The issue key will have an icon of 2 footprints next to it, and begin with "RHEL-" followed by an integer.  You can also find this issue by visiting https://issues.redhat.com/issues/?jql= and searching the "Bugzilla Bug" field for this BZ's number, e.g. a search like:

"Bugzilla Bug" = 1234567

In the event you have trouble locating or viewing this issue, you can file an issue by sending mail to rh-issues@redhat.com. You can also visit https://access.redhat.com/articles/7032570 for general account information.