Bug 2126076 - After a cluster node restores paths from a previous disconnection, restarting pacemaker causes a host kernel panic
Summary: After a cluster node restores paths from a previous disconnection, restarting pacemaker causes a host kernel panic
Keywords:
Status: CLOSED DUPLICATE of bug 2102770
Alias: None
Product: Red Hat Enterprise Linux 9
Classification: Red Hat
Component: kernel
Version: 9.1
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: rc
Target Release: ---
Assignee: Robert Peterson
QA Contact: cluster-qe@redhat.com
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2022-09-12 11:05 UTC by wilson.hua
Modified: 2022-10-10 15:20 UTC
CC List: 9 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2022-10-10 15:20:18 UTC
Type: Bug
Target Upstream Version:
Embargoed:


Attachments
pcs report contain log when crash happens (537.59 KB, application/x-bzip), 2022-09-21 05:41 UTC, wilson.hua
crash dump happens at Sep 6th 04:08 (382.65 KB, text/plain), 2022-09-21 07:48 UTC, wilson.hua
Crash dump at Sep 6th 04:31 (382.65 KB, text/plain), 2022-09-21 07:48 UTC, wilson.hua


Links
Red Hat Issue Tracker RHELPLAN-133732 (last updated 2022-09-12 11:08:52 UTC)

Internal Links: 2143262

Description wilson.hua 2022-09-12 11:05:27 UTC
Description of problem:

We have a two-node cluster connected to shared storage. When one node loses all of its connections to the shared storage and the paths are later restored, restarting pacemaker causes that node to panic and reboot.

Version-Release number of selected component (if applicable):

[root@e2e-l4-094078 ~]# uname -a
Linux e2e-l4-094078 5.14.0-108.el9.x86_64 #1 SMP PREEMPT_DYNAMIC Thu Jun 9 10:35:31 EDT 2022 x86_64 x86_64 x86_64 GNU/Linux


How reproducible:


Steps to Reproduce:
1. Create a cluster of at least 2 nodes, all connected to the same shared storage, and create an active/active service on the nodes.
2. Select one node and disconnect it from the shared storage. The service goes down, but `pcs status` still shows all services up and online.
3. Restore the connection to the storage, then restart the pacemaker service (see the sketch below).
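
A minimal sketch of one way to drive steps 2-3, assuming multipath SCSI devices back the shared storage; the device names (sdb, sdc) are hypothetical examples:
```
# Step 2: simulate losing all paths to the shared storage by offlining
# every SCSI path device backing the multipath map
# (list the real path devices with `multipath -ll`).
for dev in sdb sdc; do
    echo offline > /sys/block/$dev/device/state
done

# Step 3: restore the paths...
for dev in sdb sdc; do
    echo running > /sys/block/$dev/device/state
done

# ...then restart pacemaker; this is the step that triggers the panic.
systemctl restart pacemaker
```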

Actual results:

The node panics and reboots.

Expected results:

After the pacemaker restart, the service should be available and ready to run I/O to the shared storage from that node.

Additional info:

[ 1686.577169] ------------[ cut here ]------------
[ 1686.577178] kernel BUG at fs/inode.c:1678!
[ 1686.577189] invalid opcode: 0000 [#1] PREEMPT SMP PTI
[ 1686.577194] CPU: 11 PID: 35820 Comm: umount Kdump: loaded Tainted: G          I      --------- ---  5.14.0-108.el9.x86_64 #1
[ 1686.577199] Hardware name: Dell Inc. PowerEdge R740/0923K0, BIOS 2.12.2 07/09/2021
[ 1686.577201] RIP: 0010:iput+0x1aa/0x1d0
[ 1686.577211] Code: 5b 5d 41 5c e9 a7 f8 ff ff 48 8b 95 98 00 00 00 8b 4d 48 48 89 d0 85 c9 74 a1 48 83 bd e0 00 00 00 00 0f 85 3d ff ff ff eb 91 <0f> 0b c3 0f 0b e9 11 ff ff ff 48 89 ef e8 e4 fd ff ff 4c 89 e7 5b
[ 1686.577215] RSP: 0018:ffffa1e1cd0afce8 EFLAGS: 00010202
[ 1686.577219] RAX: ffffa1e1cd0afd08 RBX: ffffa1e1cd0afd08 RCX: 000000000040003e
[ 1686.577222] RDX: ffff8b7307ab6780 RSI: ffff8b7307ab7500 RDI: ffff8b7188a99ae8
[ 1686.577225] RBP: ffff8b7188a99ae8 R08: ffffa1e1cd0afd08 R09: 0000000000000000
[ 1686.577227] R10: 0000000000000000 R11: 0000000000000000 R12: dead000000000122
[ 1686.577230] R13: dead000000000100 R14: ffff8b73025b3590 R15: 0000000000000000
[ 1686.577232] FS:  00007f95eb559800(0000) GS:ffff8b80bfd40000(0000) knlGS:0000000000000000
[ 1686.577235] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 1686.577238] CR2: 000056148b46b018 CR3: 00000012019a6001 CR4: 00000000007706e0
[ 1686.577241] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[ 1686.577243] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
[ 1686.577245] PKRU: 55555554
[ 1686.577247] Call Trace:
[ 1686.577253]  gfs2_jindex_free+0x10b/0x150 [gfs2]
[ 1686.577284]  gfs2_put_super+0x156/0x1e0 [gfs2]
[ 1686.577307]  generic_shutdown_super+0x74/0x110
[ 1686.577313]  kill_block_super+0x21/0x50
[ 1686.577318]  deactivate_locked_super+0x36/0xa0
[ 1686.577323]  cleanup_mnt+0x131/0x190
[ 1686.577328]  task_work_run+0x5c/0x90
[ 1686.577335]  exit_to_user_mode_loop+0x152/0x160
[ 1686.577341]  exit_to_user_mode_prepare+0xb2/0xf0
[ 1686.577344]  syscall_exit_to_user_mode+0x12/0x30
[ 1686.577350]  do_syscall_64+0x69/0x80
[ 1686.577354]  ? __do_sys_newfstatat+0x31/0x70
[ 1686.577359]  ? syscall_exit_work+0x112/0x150
[ 1686.577362]  ? syscall_exit_to_user_mode+0x12/0x30
[ 1686.577366]  ? syscall_exit_work+0x112/0x150
[ 1686.577369]  ? syscall_exit_to_user_mode+0x12/0x30
[ 1686.577372]  ? do_syscall_64+0x69/0x80
[ 1686.577374]  ? do_syscall_64+0x69/0x80
[ 1686.577377]  ? syscall_exit_to_user_mode+0x12/0x30
[ 1686.577380]  ? do_syscall_64+0x69/0x80
[ 1686.577382]  ? exc_page_fault+0x62/0x140
[ 1686.577385]  ? asm_exc_page_fault+0x8/0x30
[ 1686.577393]  entry_SYSCALL_64_after_hwframe+0x44/0xae
[ 1686.577398] RIP: 0033:0x7f95eb7ac6bb
[ 1686.577402] Code: 0f 1e fa 48 89 fe 31 ff e9 72 08 00 00 66 90 f3 0f 1e fa 31 f6 e9 05 00 00 00 0f 1f 44 00 00 f3 0f 1e fa b8 a6 00 00 00 0f 05 <48> 3d 00 f0 ff ff 77 05 c3 0f 1f 40 00 48 8b 15 31 97 0a 00 f7 d8
[ 1686.577405] RSP: 002b:00007ffdedadfac8 EFLAGS: 00000246 ORIG_RAX: 00000000000000a6
[ 1686.577408] RAX: 0000000000000000 RBX: 000056148b46b430 RCX: 00007f95eb7ac6bb
[ 1686.577411] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 000056148b46b460
[ 1686.577412] RBP: 000056148b467390 R08: 0000000000000000 R09: 00007ffdedade850
[ 1686.577415] R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
[ 1686.577417] R13: 000056148b46b460 R14: 000056148b4674a0 R15: 000056148b467390
[ 1686.577421] Modules linked in: gfs2 joydev dlm snd_seq_dummy snd_hrtimer snd_seq snd_timer snd_seq_device snd soundcore qrtr rfkill sunrpc dm_service_time dm_multipath intel_rapl_msr intel_rapl_common isst_if_common skx_edac nfit libnvdimm x86_pkg_temp_thermal intel_powerclamp coretemp kvm_intel kvm irqbypass rapl intel_cstate dcdbas dell_smbios intel_uncore wmi_bmof dell_wmi_descriptor pcspkr ipmi_ssif mlx5_ib ib_uverbs ib_core mei_me i2c_i801 i2c_smbus lpc_ich mei intel_pch_thermal acpi_ipmi ipmi_si acpi_power_meter xfs libcrc32c sd_mod sg lpfc mlx5_core qla2xxx mgag200 drm_kms_helper nvmet_fc nvmet syscopyarea sysfillrect sysimgblt fb_sys_fops cec nvme_fc crct10dif_pclmul crc32_pclmul crc32c_intel nvme_fabrics ahci mlxfw nvme_core i40e ghash_clmulni_intel drm wdat_wdt psample libahci t10_pi tls scsi_transport_fc igb libata megaraid_sas pci_hyperv_intf dca i2c_algo_bit wmi dm_mirror dm_region_hash dm_log dm_mod ipmi_devintf ipmi_msghandler fuse

Comment 1 Ken Gaillot 2022-09-12 21:09:38 UTC
Hi,

Can you attach the result of "pcs cluster report" covering the time before the crash? Most likely this will need to be reassigned to the kernel component, but I can check whether there were any hints of trouble beforehand.

Comment 2 wilson.hua 2022-09-21 05:41:57 UTC
Created attachment 1913251 [details]
pcs report contain log when crash happens

Comment 3 wilson.hua 2022-09-21 05:42:40 UTC
I have attached the pcs report:

[root@e2e-l4-236128 ~]# pcs cluster report --from "2022-09-06 04:00:00" --to "2022-09-06 05:00:00" /crash_report.log
e2e-l4-236128:  Calculated node list: e2e-l4-236128 e2e-l4-236129
e2e-l4-236128:  Collecting data from e2e-l4-236128 e2e-l4-236129  (09/06/22 04:00:00 to 09/06/22 05:00:00)
e2e-l4-236128:  Searching for where Pacemaker daemons live... this may take a while
e2e-l4-236128:  Pacemaker daemons found in: /usr/libexec/pacemaker
e2e-l4-236128:  Pacemaker config files found in: /var/lib/pacemaker/cib
e2e-l4-236128:  Pacemaker scheduler inputs found in: /var/lib/pacemaker/pengine
e2e-l4-236128:  WARN: Skipping bad logfile '/var/log/cluster/corosync.log': Could not determine log dates
e2e-l4-236128:  Including segment [144-232] from /var/log/cluster/corosync.log-20220907.gz
e2e-l4-236128:  Including segment [116807-129428] from /var/log/messages-20220911
e2e-l4-236128:  Including segment [116807-129428] from /var/log/messages-20220911
e2e-l4-236128:  Including segment [8841-9921] from /var/log/pacemaker/pacemaker.log-20220911.gz
e2e-l4-236129:  Searching for where Pacemaker daemons live... this may take a while
e2e-l4-236129:  Pacemaker daemons found in: /usr/libexec/pacemaker
e2e-l4-236129:  Pacemaker config files found in: /var/lib/pacemaker/cib
e2e-l4-236129:  Pacemaker scheduler inputs found in: /var/lib/pacemaker/pengine
e2e-l4-236129:  WARN: Skipping bad logfile '/var/log/cluster/corosync.log': Could not determine log dates
e2e-l4-236129:  Including segment [131-166] from /var/log/cluster/corosync.log-20220907.gz
e2e-l4-236129:  Including segment [76161-77673] from /var/log/messages-20220911
e2e-l4-236129:  Including segment [2597-3795] from /var/log/pacemaker/pacemaker.log-20220911.gz
e2e-l4-236128:
e2e-l4-236128:  Collected results are available in /crash_report.log.tar.bz2
e2e-l4-236128:
e2e-l4-236128:  Please create a bug entry at
e2e-l4-236128:      https://bugzilla.redhat.com/
e2e-l4-236128:  Include a description of your problem and attach this tarball
e2e-l4-236128:
e2e-l4-236128:  Thank you for taking time to create this report.
e2e-l4-236128:

Comment 4 Reid Wahl 2022-09-21 07:35:26 UTC
Can you confirm that the crash occurred on node e2e-l4-236128 around 04:31 on Sep 6 in the cluster report? Can you upload the vmcore-dmesg.txt from that crash?
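
(For reference, kdump on RHEL normally saves the crash data, including vmcore-dmesg.txt, in a per-crash directory under /var/crash/; the directory name below is only an example.)
```
# Each captured crash gets its own timestamped directory.
ls /var/crash/
# e.g. 127.0.0.1-2022-09-06-04:31:23/
# The file we need is the vmcore-dmesg.txt inside it:
less /var/crash/127.0.0.1-2022-09-06-04:31:23/vmcore-dmesg.txt
```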

You mentioned that the issue occurs when restarting pacemaker. Did node e2e-l4-236128 crash while stopping pacemaker or while starting it?


Node e2e-l4-236128's logs don't contain anything from a pacemaker restart. Those logs probably didn't get flushed to disk before the reboot.

Node e2e-l4-236128 shows a gfs2 withdrawal at 04:28:30, after the storage issues began:
```
Sep  6 04:28:30 e2e-l4-236128 kernel: I/O error, dev dm-16, sector 269168 op 0x0:(READ) flags 0x3000 phys_seg 2 prio class 0
Sep  6 04:28:30 e2e-l4-236128 kernel: I/O error, dev dm-16, sector 49024 op 0x1:(WRITE) flags 0x1800 phys_seg 1 prio class 0
Sep  6 04:28:30 e2e-l4-236128 kernel: gfs2: fsid=RHCS:gfs2.0: Error 10 writing to journal, jid=0 
Sep  6 04:28:30 e2e-l4-236128 kernel: I/O error, dev dm-16, sector 269120 op 0x0:(READ) flags 0x3000 phys_seg 1 prio class 0
Sep  6 04:28:30 e2e-l4-236128 kernel: gfs2: fsid=RHCS:gfs2.0: about to withdraw this file system
Sep  6 04:28:30 e2e-l4-236128 kernel: gfs2: fsid=RHCS:gfs2.0: dirty_inode: glock -5
Sep  6 04:28:30 e2e-l4-236128 kernel: gfs2: fsid=RHCS:gfs2.0: G:  s:EX n:2/826e f:lDpIon t:UN d:UN/36450000 a:0 v:0 r:6 m:60 p:0
Sep  6 04:28:30 e2e-l4-236128 kernel: gfs2: fsid=RHCS:gfs2.0:  I: n:1/33390 t:8 f:0x00 d:0x00000000 s:3391094784 p:0
Sep  6 04:28:30 e2e-l4-236128 kernel: SELinux: inode_doinit_use_xattr:  getxattr returned 5 for dev=dm-71 ino=33390
Sep  6 04:28:30 e2e-l4-236128 kernel: SELinux: inode_doinit_use_xattr:  getxattr returned 5 for dev=dm-71 ino=33390
Sep  6 04:28:30 e2e-l4-236128 kernel: SELinux: inode_doinit_use_xattr:  getxattr returned 5 for dev=dm-71 ino=33390
Sep  6 04:28:30 e2e-l4-236128 kernel: gfs2: fsid=RHCS:gfs2.0: The logd daemon is withdrawing.
...
Sep  6 04:28:35 e2e-l4-236128 kernel: gfs2: fsid=RHCS:gfs2.0: Requesting recovery of jid 0.
Sep  6 04:28:35 e2e-l4-236128 kernel: I/O error, dev dm-16, sector 2200 op 0x0:(READ) flags 0x3000 phys_seg 1 prio class 0
Sep  6 04:28:35 e2e-l4-236128 kernel: gfs2: fsid=RHCS:gfs2.0: Reprocessing of jid 0 failed with -5.
Sep  6 04:28:35 e2e-l4-236128 kernel: gfs2: fsid=RHCS:gfs2.0: Journal recovery skipped for jid 0 until next mount.
Sep  6 04:28:35 e2e-l4-236128 kernel: gfs2: fsid=RHCS:gfs2.0: Glock dequeues delayed: 0
Sep  6 04:28:35 e2e-l4-236128 kernel: gfs2: fsid=RHCS:gfs2.0: telling LM to unmount
Sep  6 04:28:35 e2e-l4-236128 kernel: dlm: gfs2: leaving the lockspace group...
Sep  6 04:28:35 e2e-l4-236128 kernel: gfs2: fsid=RHCS:gfs2.0: recover_prep ignored due to withdraw.
Sep  6 04:28:35 e2e-l4-236128 kernel: dlm: gfs2: group event done 0
Sep  6 04:28:35 e2e-l4-236128 kernel: dlm: gfs2: release_lockspace final free
Sep  6 04:28:35 e2e-l4-236128 kernel: gfs2: fsid=RHCS:gfs2.0: File system withdrawn
Sep  6 04:28:35 e2e-l4-236128 kernel: CPU: 7 PID: 6662 Comm: gfs2_logd Kdump: loaded Tainted: G          I      --------- ---  5.14.0-108.el9.x86_64 #1
Sep  6 04:28:35 e2e-l4-236128 kernel: Hardware name: Dell Inc. PowerEdge R740/0923K0, BIOS 2.12.2 07/09/2021 
Sep  6 04:28:35 e2e-l4-236128 kernel: Call Trace:
Sep  6 04:28:35 e2e-l4-236128 kernel: dump_stack_lvl+0x34/0x44
Sep  6 04:28:35 e2e-l4-236128 kernel: gfs2_withdraw.cold+0xb5/0xde [gfs2] 
Sep  6 04:28:35 e2e-l4-236128 kernel: gfs2_log_flush+0x156/0xb20 [gfs2]
Sep  6 04:28:35 e2e-l4-236128 kernel: ? gfs2_ail1_empty+0x253/0x310 [gfs2]
Sep  6 04:28:35 e2e-l4-236128 kernel: ? schedule_timeout+0x90/0x150
Sep  6 04:28:35 e2e-l4-236128 kernel: gfs2_logd+0xec/0x2c0 [gfs2]
Sep  6 04:28:35 e2e-l4-236128 kernel: ? dequeue_task_stop+0x70/0x70 
Sep  6 04:28:35 e2e-l4-236128 kernel: ? gfs2_log_commit+0x1a0/0x1a0 [gfs2]
Sep  6 04:28:35 e2e-l4-236128 kernel: kthread+0x149/0x170
Sep  6 04:28:35 e2e-l4-236128 kernel: ? set_kthread_struct+0x40/0x40 
Sep  6 04:28:35 e2e-l4-236128 kernel: ret_from_fork+0x22/0x30
```

There are some SELinux denials for dlm_controld shortly thereafter:
```
Sep  6 04:28:38 e2e-l4-236128 setroubleshoot[23135]: SELinux is preventing /usr/sbin/dlm_controld from using the signull access on a process. For complete SELinux messages run: sealert -l 13962ff8-d126-4e5e-bff2-623a111ca44e
```
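
If sealert isn't available, a hedged alternative is to pull the raw AVC records straight from the audit log (standard auditd tooling; the process name is taken from the message above):
```
# Show AVC denials recorded for dlm_controld
ausearch -m AVC -c dlm_controld
```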


Node e2e-l4-236129's logs suggest that someone tried to stop pacemaker on node e2e-l4-236128 around 04:31:17, but node e2e-l4-236128 failed (possibly while stopping the cluster_fs resource):
```
Sep 06 04:31:17.126 e2e-l4-236129 pacemaker-controld  [3741] (handle_shutdown_request)  info: Creating shutdown request for e2e-l4-236128 (state=S_IDLE)
...
Sep 06 04:31:17.149 e2e-l4-236129 pacemaker-controld  [3741] (te_rsc_command)   notice: Initiating stop operation cluster_fs_stop_0 on e2e-l4-236128 | action 29
...
Sep 06 04:31:23.761 e2e-l4-236129 pacemaker-controld  [3741] (node_left)    info: Group crmd event 6: e2e-l4-236128 (node 1 pid 4288) left via cluster exit
Sep 06 04:31:23.762 e2e-l4-236129 pacemaker-controld  [3741] (crm_update_peer_proc)     info: node_left: Node e2e-l4-236128[1] - corosync-cpg is now offline
Sep 06 04:31:23.762 e2e-l4-236129 pacemaker-controld  [3741] (peer_update_callback)     info: Node e2e-l4-236128 is no longer a peer | DC=true old=0x4000000 new=0000000
```

At this point, node e2e-l4-236129 fenced node e2e-l4-236128. But node e2e-l4-236128 was likely already panicking.

-----

In addition to the questions at the beginning of this comment, can you see if the issue is reproducible with SELinux in permissive mode? Please run `setenforce 0` before reproducing.
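
For example:
```
setenforce 0    # switch SELinux to permissive mode until the next reboot
getenforce     # should now print "Permissive"
```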

I'm going to assign this BZ to the gfs2 team. We can continue to look at the Pacemaker side as needed.

Comment 5 wilson.hua 2022-09-21 07:48:05 UTC
Created attachment 1913272 [details]
crash dump happens at Sep 6th 04:08

Comment 6 wilson.hua 2022-09-21 07:48:44 UTC
Created attachment 1913274 [details]
Crash dump at Sep 6th 04:31

Comment 7 wilson.hua 2022-09-21 07:49:53 UTC
The system panicked right after I ran `systemctl restart pacemaker`,
so I'm not sure whether the panic happened during the pacemaker stop step or the start step...

Comment 8 wilson.hua 2022-09-21 07:52:10 UTC
I have also uploaded the vmcore-dmesg.txt; please check.

Comment 9 Robert Peterson 2022-10-10 15:20:18 UTC
I already found and fixed this bug upstream. The upstream patch is:

04133b607a78f2fd3daadbe5519513942b0f3a05

For now, the commit can be viewed in the gfs2 for-next tree:
https://git.kernel.org/pub/scm/linux/kernel/git/gfs2/linux-gfs2.git/commit/fs/gfs2?h=for-next&id=04133b607a78f2fd3daadbe5519513942b0f3a05

Andreas submitted a pull request for it yesterday, so hopefully Linus will put it into mainline soon.

I've already submitted a merge request for CentOS Stream that includes this patch.

https://gitlab.com/redhat/centos-stream/src/kernel/centos-stream-9/-/merge_requests/1325

This was done for bug #2102770, so I'm closing this bug as a duplicate of that one.
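
(Illustrative only: once a fixed kernel build ships, one way to check whether an installed kernel includes the fix is to search the package changelog for that bug number, assuming the changelog entry cites it.)
```
# Search the installed kernel package changelog for the fix's bug number
rpm -q --changelog kernel | grep -i 2102770
```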

*** This bug has been marked as a duplicate of bug 2102770 ***

