Bug 1555938

Summary: Synchronize fence history on restarted nodes
Product: Red Hat Enterprise Linux 7
Component: pacemaker
Version: 7.6
Hardware: Unspecified
OS: Unspecified
Status: CLOSED ERRATA
Severity: high
Priority: high
Reporter: Ken Gaillot <kgaillot>
Assignee: Klaus Wenninger <kwenning>
QA Contact: cluster-qe <cluster-qe>
CC: abeekhof, cfeist, cluster-maint, mmazoure, phagara
Target Milestone: rc
Target Release: 7.7
Fixed In Version: pacemaker-1.1.20-1.el7
Doc Type: No Doc Update
Doc Text: Mostly under-the-hood for end users
Type: Bug
Last Closed: 2019-08-06 12:53:38 UTC
Bug Blocks: 1461964, 1608369

Description Ken Gaillot 2018-03-14 22:16:30 UTC
Description of problem: Pacemaker's stonithd synchronizes fence history across all nodes, but only from the point when the local stonithd was started. If a node restarts, it loses any fence history recorded before its stonithd came up, so different nodes can show different history views (stonith_admin --history).

When a stonithd joins the cluster, any existing history should be synchronized to it. This would keep the view the same regardless of node, and preserve fence history across the active lifetime of the cluster.
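
For illustration, the mismatch looks roughly like this; the node names, PID, and timestamp below are hypothetical, not taken from an actual run:

## On a node that stayed up, the fence event is still recorded
[root@node1 ~]# stonith_admin --history '*'
node3 was able to reboot node node2 on behalf of stonith_admin.1234 from node1 at <timestamp>

## On node2, which was just fenced and restarted, the history is empty
[root@node2 ~]# stonith_admin --history '*'
[root@node2 ~]#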

Comment 1 Ken Gaillot 2018-06-26 22:03:58 UTC
Moving to RHEL 7 to target 7.7. If this doesn't make RHEL 8 via initial rebase, it will be cloned for RHEL 8.

Comment 3 Klaus Wenninger 2018-08-17 13:43:20 UTC
Available on the 1.1 branch with
https://github.com/ClusterLabs/pacemaker/commit/a9ecba18a3cb3d0ab9157f87e27f9d25ac2a34a4

Comment 4 Ken Gaillot 2019-02-02 00:40:36 UTC
QA: Start a cluster with multiple nodes. Fence one of them. When the fenced node reboots and rejoins the cluster, run "stonith_admin --history '*'" on it. Before the fix, it will show nothing; after the fix, it will show the actual fence history.
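
As a concrete sketch of those steps (node names are hypothetical):

## From a surviving node, fence one cluster member
[root@node1 ~]# pcs stonith fence node2

## Once node2 reboots and rejoins, query the fence history on it
[root@node2 ~]# stonith_admin --history '*'

## Before the fix this prints nothing; after the fix it lists the reboot
## event, matching the history shown on the other nodes.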

Comment 5 Patrik Hagara 2019-03-20 10:24:30 UTC
qa-ack+

reproducer in comment #4

Comment 7 Michal Mazourek 2019-06-12 14:25:44 UTC
BEFORE (pacemaker-1.1.19-8.el7)
======
## Fence one node
[root@virt-186 ~]# pcs stonith fence virt-178
Node: virt-178 fenced

## Wait for the fenced node to join the cluster again and check its stonith history
[root@virt-178 ~]# stonith_admin --history '*'
[root@virt-178 ~]#

## Other nodes of the cluster
[root@virt-186 ~]# stonith_admin --history '*'
virt-187 was able to reboot node virt-178 on behalf of stonith_admin.16457 from virt-186 at Wed Jun 12 15:17:37 2019

AFTER (pacemaker-1.1.20-5.el7)
=====
## Fence one node
[root@virt-022 ~]# pcs stonith fence virt-018
Node: virt-018 fenced

## Wait for the fenced node to join the cluster again and check its stonith history
[root@virt-018 ~]# stonith_admin --history '*'
virt-012 was able to reboot node virt-018 on behalf of stonith_admin.11812 from virt-022 at Wed Jun 12 15:56:56 2019

## Now it's synchronized with the stonith history on the other nodes
[root@virt-022 ~]# stonith_admin --history '*'
virt-012 was able to reboot node virt-018 on behalf of stonith_admin.11812 from virt-022 at Wed Jun 12 15:56:56 2019

RESULT
======
Before the fix, the fenced node lost all stonith history from before its restart. After the fix, the stonith history is synchronized with the other nodes, even after a reboot.

Verified for version pacemaker-1.1.20-5.el7

Comment 9 errata-xmlrpc 2019-08-06 12:53:38 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:2129