Description of problem:

Fencing history shows an incorrect date & time for completed actions. Example:

Fencing History:
  * reboot of virt-142 successful: delegate=virt-141, client=pacemaker-controld.54108, origin=virt-141, completed='1970-01-01 02:41:53 +01:00'

Version-Release number of selected component (if applicable):
pacemaker-2.1.2-1.el9

How reproducible:
Always

Steps to Reproduce:
1. Configure a cluster with a fence device.
2. Cause a failure on one of the nodes so that the node is fenced.
3. Run 'pcs status --full' and check the "Fencing History" section.

Actual results:

[root@virt-141 ~]# pcs status --full
Cluster name: STSRHTS31857
Cluster Summary:
  * Stack: corosync
  * Current DC: virt-141 (1) (version 2.1.2-1.el9-ada5c3b36e2) - partition with quorum
  * Last updated: Mon Dec 13 13:03:28 2021
  * Last change:  Mon Dec 13 11:19:31 2021 by root via cibadmin on virt-141
  * 2 nodes configured
  * 2 resource instances configured

Node List:
  * Online: [ virt-141 (1) virt-142 (2) ]

Full List of Resources:
  * fence-virt-141	(stonith:fence_xvm):	Started virt-141
  * fence-virt-142	(stonith:fence_xvm):	Started virt-141

Migration Summary:

Fencing History:
  * reboot of virt-142 successful: delegate=virt-141, client=pacemaker-controld.54108, origin=virt-141, completed='1970-01-01 02:41:53 +01:00'
  * reboot of virt-142 successful: delegate=virt-141, client=stonith_admin.60043, origin=virt-141, completed='1970-01-01 01:13:08 +01:00'

Tickets:

PCSD Status:
  virt-141: Online
  virt-142: Online

Daemon Status:
  corosync: active/enabled
  pacemaker: active/enabled
  pcsd: active/enabled

Expected results:
Correct date & time for the completed fencing actions.

Additional info:
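The bogus 'completed' values all sit within a couple of hours of the Unix epoch. As a quick illustrative check (not part of the original report), converting the displayed time back to a raw timestamp shows it is only a few thousand seconds, which suggests the value recorded was something other than a real wall-clock timestamp:

```python
from datetime import datetime, timezone, timedelta

# Parse the reported 'completed' time back to a raw Unix timestamp.
tz = timezone(timedelta(hours=1))                   # the +01:00 offset shown in the output
shown = datetime(1970, 1, 1, 2, 41, 53, tzinfo=tz)  # completed='1970-01-01 02:41:53 +01:00'
raw = int(shown.timestamp())
print(raw)  # 6113 -- a count of seconds, not a plausible fencing date
```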
This was a regression introduced in the upstream 2.1.2 release by commit f52bc8e.
Fixed upstream as of commit 0339e89f.
[root@virt-027 ~]# rpm -q pacemaker
pacemaker-2.1.2-4.el9.x86_64

[root@virt-027 ~]# pcs status
Cluster name: STSRHTS23380
Cluster Summary:
  * Stack: corosync
  * Current DC: virt-026 (version 2.1.2-4.el9-ada5c3b36e2) - partition with quorum
  * Last updated: Mon Feb  7 18:00:19 2022
  * Last change:  Mon Feb  7 17:56:55 2022 by root via cibadmin on virt-026
  * 2 nodes configured
  * 2 resource instances configured

Node List:
  * Online: [ virt-026 virt-027 ]

Full List of Resources:
  * fence-virt-026	(stonith:fence_xvm):	Started virt-026
  * fence-virt-027	(stonith:fence_xvm):	Started virt-027

Daemon Status:
  corosync: active/enabled
  pacemaker: active/enabled
  pcsd: active/enabled

[root@virt-027 ~]# pcs stonith fence virt-026
Node: virt-026 fenced

[root@virt-027 ~]# pcs status --full
Cluster name: STSRHTS23380
Cluster Summary:
  * Stack: corosync
  * Current DC: virt-027 (2) (version 2.1.2-4.el9-ada5c3b36e2) - partition with quorum
  * Last updated: Mon Feb  7 18:01:51 2022
  * Last change:  Mon Feb  7 17:56:55 2022 by root via cibadmin on virt-026
  * 2 nodes configured
  * 2 resource instances configured

Node List:
  * Online: [ virt-026 (1) virt-027 (2) ]

Full List of Resources:
  * fence-virt-026	(stonith:fence_xvm):	Started virt-027
  * fence-virt-027	(stonith:fence_xvm):	Started virt-027

Migration Summary:

Fencing History:
  * reboot of virt-026 successful: delegate=virt-027, client=stonith_admin.59951, origin=virt-027, completed='2022-02-07 18:00:43 +01:00'

Tickets:

PCSD Status:
  virt-026: Online
  virt-027: Online

Daemon Status:
  corosync: active/enabled
  pacemaker: active/enabled
  pcsd: active/enabled
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (new packages: pacemaker), and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2022:2293