+++ This bug was initially created as a clone of Bug #2031765 +++

Description of problem:
Fencing history shows an incorrect date & time for completed actions.

Example:
Fencing History:
  * reboot of virt-142 successful: delegate=virt-141, client=pacemaker-controld.54108, origin=virt-141, completed='1970-01-01 02:41:53 +01:00'

Version-Release number of selected component (if applicable):
pacemaker-2.1.2-1.el9

How reproducible:
Always

Steps to Reproduce:
1. Configure a cluster with a fence device.
2. Cause a failure on one of the nodes so that the node is fenced.
3. Run 'pcs status --full' and check the "Fencing History" section.

Actual results:
[root@virt-141 ~]# pcs status --full
Cluster name: STSRHTS31857
Cluster Summary:
  * Stack: corosync
  * Current DC: virt-141 (1) (version 2.1.2-1.el9-ada5c3b36e2) - partition with quorum
  * Last updated: Mon Dec 13 13:03:28 2021
  * Last change: Mon Dec 13 11:19:31 2021 by root via cibadmin on virt-141
  * 2 nodes configured
  * 2 resource instances configured

Node List:
  * Online: [ virt-141 (1) virt-142 (2) ]

Full List of Resources:
  * fence-virt-141 (stonith:fence_xvm): Started virt-141
  * fence-virt-142 (stonith:fence_xvm): Started virt-141

Migration Summary:

Fencing History:
  * reboot of virt-142 successful: delegate=virt-141, client=pacemaker-controld.54108, origin=virt-141, completed='1970-01-01 02:41:53 +01:00'
  * reboot of virt-142 successful: delegate=virt-141, client=stonith_admin.60043, origin=virt-141, completed='1970-01-01 01:13:08 +01:00'

Tickets:

PCSD Status:
  virt-141: Online
  virt-142: Online

Daemon Status:
  corosync: active/enabled
  pacemaker: active/enabled
  pcsd: active/enabled

Expected results:
A correct date & time for the completed fencing action.

Additional info:

--- Additional comment from Ken Gaillot on 2021-12-13 21:37:55 UTC ---

This was a regression introduced in the upstream 2.1.2 release by commit f52bc8e.
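The '1970-01-01' values above look like small second counts being formatted as absolute Unix-epoch timestamps. As a rough illustration only (the interpretation of where the small value comes from is an assumption, not taken from the pacemaker source), rendering 6113 seconds after the epoch in a +01:00 timezone reproduces the first history entry:

# Render a small seconds value as an absolute timestamp; the timezone is chosen
# to match the +01:00 offset in the report, purely for illustration.
[root@virt-141 ~]# TZ=Europe/Prague date -d @6113 '+%Y-%m-%d %H:%M:%S %z'
1970-01-01 02:41:53 +0100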
Fixed upstream as of commit 0339e89f
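For anyone checking a build by hand, the fencer's history can also be queried directly, without going through pcs. The sketch below uses stonith_admin options that should be present in this pacemaker version; treat it as an outline rather than an exact transcript (output omitted):

# Show the fencing history kept by the fencer; the 'completed' timestamps shown
# here are the same ones pcs displays under "Fencing History".
[root@virt-141 ~]# stonith_admin --history '*' --verbose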
[root@virt-540 ~]# rpm -q pacemaker
pacemaker-2.1.2-4.el8.x86_64

[root@virt-540 ~]# pcs status
Cluster name: STSRHTS13192
Cluster Summary:
  * Stack: corosync
  * Current DC: virt-541 (version 2.1.2-4.el8-ada5c3b36e2) - partition with quorum
  * Last updated: Mon Feb 7 18:07:21 2022
  * Last change: Mon Feb 7 18:04:08 2022 by root via cibadmin on virt-540
  * 2 nodes configured
  * 2 resource instances configured

Node List:
  * Online: [ virt-540 virt-541 ]

Full List of Resources:
  * fence-virt-540 (stonith:fence_xvm): Started virt-540
  * fence-virt-541 (stonith:fence_xvm): Started virt-541

Daemon Status:
  corosync: active/enabled
  pacemaker: active/enabled
  pcsd: active/enabled

[root@virt-540 ~]# pcs stonith fence virt-541
Node: virt-541 fenced

[root@virt-540 ~]# pcs status --full
Cluster name: STSRHTS13192
Cluster Summary:
  * Stack: corosync
  * Current DC: virt-540 (1) (version 2.1.2-4.el8-ada5c3b36e2) - partition with quorum
  * Last updated: Mon Feb 7 18:10:40 2022
  * Last change: Mon Feb 7 18:04:08 2022 by root via cibadmin on virt-540
  * 2 nodes configured
  * 2 resource instances configured

Node List:
  * Online: [ virt-540 (1) virt-541 (2) ]

Full List of Resources:
  * fence-virt-540 (stonith:fence_xvm): Started virt-540
  * fence-virt-541 (stonith:fence_xvm): Started virt-541

Migration Summary:

Fencing History:
  * reboot of virt-541 successful: delegate=virt-540, client=stonith_admin.55240, origin=virt-540, completed='2022-02-07 18:07:41 +01:00'

Tickets:

PCSD Status:
  virt-540: Online
  virt-541: Online

Daemon Status:
  corosync: active/enabled
  pacemaker: active/enabled
  pcsd: active/enabled
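If entries recorded before the update (with the bogus 1970 dates) are still listed, they can be cleared so that only freshly timestamped actions remain. This is a sketch; the pcs 'stonith history cleanup' subcommand and the stonith_admin '--cleanup' option are believed to be available here, but verify against your installed versions:

# Drop the recorded fencing history for all nodes via pcs:
[root@virt-540 ~]# pcs stonith history cleanup

# Equivalent low-level form using the fencer's own CLI:
[root@virt-540 ~]# stonith_admin --history '*' --cleanup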
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (pacemaker bug fix and enhancement update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2022:1885