Bug 2118945
| Summary: | NVMe connect command hung and call traces | | |
|---|---|---|---|
| Product: | Red Hat Enterprise Linux 9 | Reporter: | Saurav Kashyap <saurav.kashyap> |
| Component: | libnvme | Assignee: | Maurizio Lombardi <mlombard> |
| Status: | CLOSED MIGRATED | QA Contact: | Zhang Yi <yizhan> |
| Severity: | urgent | Docs Contact: | |
| Priority: | high | | |
| Version: | 9.0 | CC: | aeasi, jmeneghi |
| Target Milestone: | rc | Keywords: | MigratedToJIRA, Triaged |
| Target Release: | --- | Flags: | pm-rhel: mirror+ |
| Hardware: | x86_64 | | |
| OS: | Linux | | |
| Whiteboard: | NVMe_P0 | | |
| Fixed In Version: | | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2023-09-23 12:56:19 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Attachments: | | | |
Description
Saurav Kashyap
2022-08-17 07:50:59 UTC
Call trace:

```
Aug  9 11:59:27 MillburyRegRH9 systemd[9932]: Finished Cleanup of User's Temporary Files and Directories.
Aug  9 11:59:41 MillburyRegRH9 kernel: INFO: task nvme:8970 blocked for more than 245 seconds.
Aug  9 11:59:41 MillburyRegRH9 kernel: Tainted: G OE --------- ---  5.14.0-70.13.1.el9_0.x86_64 #1
Aug  9 11:59:41 MillburyRegRH9 kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Aug  9 11:59:41 MillburyRegRH9 kernel: task:nvme state stack: 0 pid: 8970 ppid: 1 flags:0x00004000
Aug  9 11:59:41 MillburyRegRH9 kernel: Call Trace:
Aug  9 11:59:41 MillburyRegRH9 kernel:  __schedule+0x203/0x560
Aug  9 11:59:41 MillburyRegRH9 kernel:  schedule+0x43/0xb0
Aug  9 11:59:41 MillburyRegRH9 kernel:  schedule_timeout+0x115/0x150
Aug  9 11:59:41 MillburyRegRH9 kernel:  ? enqueue_task+0x48/0x140
Aug  9 11:59:41 MillburyRegRH9 kernel:  ? __prepare_to_swait+0x4b/0x70
Aug  9 11:59:41 MillburyRegRH9 kernel:  wait_for_completion+0x89/0xe0
Aug  9 11:59:41 MillburyRegRH9 kernel:  __flush_work.isra.0+0x160/0x220
Aug  9 11:59:41 MillburyRegRH9 kernel:  ? flush_workqueue_prep_pwqs+0x110/0x110
Aug  9 11:59:41 MillburyRegRH9 kernel:  nvme_fc_init_ctrl+0x4f3/0x510 [nvme_fc]
Aug  9 11:59:41 MillburyRegRH9 kernel:  nvme_fc_create_ctrl+0x1ac/0x250 [nvme_fc]
Aug  9 11:59:41 MillburyRegRH9 kernel:  nvmf_create_ctrl+0x12c/0x220 [nvme_fabrics]
Aug  9 11:59:41 MillburyRegRH9 kernel:  nvmf_dev_write+0x7d/0xd2 [nvme_fabrics]
Aug  9 11:59:41 MillburyRegRH9 kernel:  vfs_write+0xb9/0x270
Aug  9 11:59:41 MillburyRegRH9 kernel:  ksys_write+0x5f/0xe0
Aug  9 11:59:41 MillburyRegRH9 kernel:  do_syscall_64+0x38/0x90
Aug  9 11:59:41 MillburyRegRH9 kernel:  entry_SYSCALL_64_after_hwframe+0x44/0xae
Aug  9 11:59:41 MillburyRegRH9 kernel: RIP: 0033:0x7fe34c1ab127
Aug  9 11:59:41 MillburyRegRH9 kernel: RSP: 002b:00007ffed1eabc28 EFLAGS: 00000246 ORIG_RAX: 0000000000000001
Aug  9 11:59:41 MillburyRegRH9 kernel: RAX: ffffffffffffffda RBX: 0000000000000153 RCX: 00007fe34c1ab127
Aug  9 11:59:41 MillburyRegRH9 kernel: RDX: 0000000000000153 RSI: 00007ffed1eacd00 RDI: 0000000000000003
Aug  9 11:59:41 MillburyRegRH9 kernel: RBP: 000055649a4eea50 R08: 000000000000002b R09: 00007fe34c21d4e0
Aug  9 11:59:41 MillburyRegRH9 kernel: R10: 0000000000000000 R11: 0000000000000246 R12: 000055649855a160
Aug  9 11:59:41 MillburyRegRH9 kernel: R13: 000055649855a1bb R14: 0000000000000000 R15: 0000000000000001
Aug  9 11:59:41 MillburyRegRH9 kernel: INFO: task nvme:8979 blocked for more than 245 seconds.
Aug  9 11:59:41 MillburyRegRH9 kernel: Tainted: G OE --------- ---  5.14.0-70.13.1.el9_0.x86_64 #1
Aug  9 11:59:41 MillburyRegRH9 kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Aug  9 11:59:41 MillburyRegRH9 kernel: task:nvme state stack: 0 pid: 8979 ppid: 1 flags:0x00000000
Aug  9 11:59:41 MillburyRegRH9 kernel: Call Trace:
Aug  9 11:59:41 MillburyRegRH9 kernel:  __schedule+0x203/0x560
Aug  9 11:59:41 MillburyRegRH9 kernel:  schedule+0x43/0xb0
Aug  9 11:59:41 MillburyRegRH9 kernel:  schedule_preempt_disabled+0x11/0x20
Aug  9 11:59:41 MillburyRegRH9 kernel:  __mutex_lock.constprop.0+0x234/0x430
Aug  9 11:59:41 MillburyRegRH9 kernel:  ? __fput+0xff/0x240
Aug  9 11:59:41 MillburyRegRH9 kernel:  ? __check_object_size.part.0+0x11f/0x140
Aug  9 11:59:41 MillburyRegRH9 kernel:  ? _copy_from_user+0x28/0x60
Aug  9 11:59:41 MillburyRegRH9 kernel:  nvmf_dev_write+0x44/0xd2 [nvme_fabrics]
Aug  9 11:59:41 MillburyRegRH9 kernel:  vfs_write+0xb9/0x270
Aug  9 11:59:41 MillburyRegRH9 kernel:  ksys_write+0x5f/0xe0
Aug  9 11:59:41 MillburyRegRH9 kernel:  do_syscall_64+0x38/0x90
Aug  9 11:59:41 MillburyRegRH9 kernel:  entry_SYSCALL_64_after_hwframe+0x44/0xae
Aug  9 11:59:41 MillburyRegRH9 kernel: RIP: 0033:0x7f56e234f127
Aug  9 11:59:41 MillburyRegRH9 kernel: RSP: 002b:00007ffc79ecb9f8 EFLAGS: 00000246 ORIG_RAX: 0000000000000001
Aug  9 11:59:41 MillburyRegRH9 kernel: RAX: ffffffffffffffda RBX: 0000000000000153 RCX: 00007f56e234f127
Aug  9 11:59:41 MillburyRegRH9 kernel: RDX: 0000000000000153 RSI: 00007ffc79eccad0 RDI: 0000000000000003
Aug  9 11:59:41 MillburyRegRH9 kernel: RBP: 000056267f5afe90 R08: 000000000000002b R09: 00007f56e23c14e0
Aug  9 11:59:41 MillburyRegRH9 kernel: R10: 0000000000000000 R11: 0000000000000246 R12: 000056267e72b160
Aug  9 11:59:41 MillburyRegRH9 kernel: R13: 000056267e72b1bb R14: 0000000000000000 R15: 0000000000000001
```

I have shared the crash files at https://secureshare.marvell.com/w/f-48f7cf08-3045-4a28-ae84-844104798998

Created attachment 1905901 [details]
Analysis
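The two traces above fit a familiar lock-then-wait pattern: the first nvme task (pid 8970) enters nvmf_dev_write, which serializes writers with a mutex, and then sleeps in wait_for_completion() while __flush_work() waits for the FC connect work to finish; the second nvme task (pid 8979) blocks in __mutex_lock waiting for that same mutex. The sketch below is a hypothetical userspace model of that interaction, not kernel code; the names (nvmf_dev_mutex, connect_done) merely mirror the symbols in the traces. If the completion never fires, both threads hang exactly as the hung-task detector reports.

```python
import threading
import time

# Hypothetical model: a lock standing in for the mutex taken in
# nvmf_dev_write(), and an event standing in for the completion that
# __flush_work() waits on in nvme_fc_init_ctrl().
nvmf_dev_mutex = threading.Lock()
connect_done = threading.Event()
order = []

def first_nvme_write():
    # First writer: takes the mutex, then waits for the connect work.
    with nvmf_dev_mutex:
        order.append("task1: mutex held, flushing connect work")
        connect_done.wait()  # wait_for_completion() analogue
        order.append("task1: connect work finished")

def second_nvme_write():
    # Second writer: blocks on the mutex until task1 releases it.
    with nvmf_dev_mutex:
        order.append("task2: acquired mutex")

t1 = threading.Thread(target=first_nvme_write)
t2 = threading.Thread(target=second_nvme_write)
t1.start()
time.sleep(0.1)          # let task1 grab the mutex first
t2.start()
time.sleep(0.1)          # task2 is now queued behind the mutex
connect_done.set()       # in the reported bug, this never happens
t1.join()
t2.join()
print(order)
```

In the model both threads make progress only once connect_done fires; commenting out the `connect_done.set()` line reproduces the hang, with task1 asleep on the event while holding the lock and task2 asleep on the lock, matching the two stacks in the report.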
Saurav, is this problem reproducible with RHEL 9.2?

Hi John, it is not reproducible on RHEL 9.2. Even on 9.0 it is not easily reproducible. We can close this BZ for now and will reopen it if we find a consistent way to reproduce the issue.

Issue migration from Bugzilla to Jira is in process at this time. This will be the last message in Jira copied from the Bugzilla bug.

This BZ has been automatically migrated to the issues.redhat.com Red Hat Issue Tracker. All future work related to this report will be managed there. Due to differences in account names between systems, some fields were not replicated. Be sure to add yourself to the Jira issue's "Watchers" field to continue receiving updates, and add others to the "Need Info From" field to continue requesting information.

To find the migrated issue, look in the "Links" section for a direct link to the new issue location. The issue key will have an icon of 2 footprints next to it and begin with "RHEL-" followed by an integer. You can also find this issue by visiting https://issues.redhat.com/issues/?jql= and searching the "Bugzilla Bug" field for this BZ's number, e.g. a search like: "Bugzilla Bug" = 1234567

In the event you have trouble locating or viewing this issue, you can file an issue by sending mail to rh-issues. You can also visit https://access.redhat.com/articles/7032570 for general account information.