Bug 1082146 - Adding 'delay' parameter of fencing for single node in a two node cluster with a shared fence device in RHEL7 cluster with pacemaker.
Summary: Adding 'delay' parameter of fencing for single node in a two node cluster with a shared fence device in RHEL7 cluster with pacemaker.
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 8
Classification: Red Hat
Component: pacemaker
Version: 8.0
Hardware: All
OS: Linux
Priority: medium
Severity: low
Target Milestone: pre-dev-freeze
Target Release: 8.6
Assignee: Oyvind Albrigtsen
QA Contact: cluster-qe@redhat.com
Docs Contact: Steven J. Levine
URL:
Whiteboard:
Depends On: 1682116
Blocks:
 
Reported: 2014-03-28 19:41 UTC by Nitin Yewale
Modified: 2024-03-08 13:44 UTC
CC List: 12 users

Fixed In Version: pacemaker-2.1.2-1.el8
Doc Type: Enhancement
Doc Text:
.The `pcmk_delay_base` parameter may now take different values for different nodes
When configuring a fence device, you can now specify different values for different nodes with the `pcmk_delay_base` parameter. This allows a single fence device to be used in a two-node cluster, with a different delay for each node, which helps prevent a situation where each node attempts to fence the other at the same time. To specify different values for different nodes, map each host name to the delay value for that node, using a syntax similar to that of `pcmk_host_map`. For example, `node1:0;node2:10s` would use no delay when fencing node1 and a 10-second delay when fencing node2. (A pcs example of this syntax appears below, after the metadata fields.)
Clone Of:
Environment:
Last Closed: 2022-05-10 14:09:46 UTC
Type: Feature Request
Target Upstream Version: 2.1.2
Embargoed:
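
For illustration only (not part of the original bug metadata): a minimal pcs sketch of the per-node pcmk_delay_base syntax described in the Doc Text above, assuming a hypothetical shared fence_xvm device and placeholder host names; it mirrors the verification cases recorded later in this bug.

    # Hypothetical example: one shared fence device with per-node base delays,
    # so node1 is fenced immediately while fencing of node2 waits 10 seconds.
    pcs stonith create shared-fence fence_xvm \
        pcmk_host_map="node1:node1.example.com;node2:node2.example.com" \
        pcmk_delay_base="node1:0;node2:10s" \
        op monitor interval=60s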




Links
System ID Private Priority Status Summary Last Updated
Cluster Labs 5440 0 None None None 2020-10-02 21:08:47 UTC
Red Hat Issue Tracker CLUSTERQE-5161 0 None None None 2021-10-26 14:02:37 UTC
Red Hat Issue Tracker CLUSTERQE-5485 0 None None None 2022-03-14 17:01:13 UTC
Red Hat Knowledge Base (Solution) 54829 0 None None None 2018-09-06 13:30:56 UTC
Red Hat Knowledge Base (Solution) 3565071 0 None None None 2018-09-06 13:30:37 UTC
Red Hat Product Errata RHBA-2022:1885 0 None None None 2022-05-10 14:10:01 UTC

Description Nitin Yewale 2014-03-28 19:41:25 UTC
Description of problem:
Add a 'delay' fencing parameter for a single node in a two-node cluster with a shared fence device.

When we configure a shared fence device, we can configure fencing for both nodes in one command, by specifying a single device. Can we have a feature to add the 'delay' fencing parameter for only one node in that same command?

At present, to set a delay parameter for one node, we need to create a separate fence device per node, or create fencing levels referring to that device.
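
For illustration only (not part of the original report): a rough sketch of that workaround, assuming two hypothetical per-node fence_vmware_soap devices with placeholder addresses and credentials, where only the device that fences node2 carries a delay.

    # Hypothetical workaround: one fence device per node, with 'delay' set on only one.
    pcs stonith create fence-node1 fence_vmware_soap ipaddr=vcenter.example.com \
        login=admin passwd=secret ssl=1 port=node1-vm pcmk_host_list=node1
    pcs stonith create fence-node2 fence_vmware_soap ipaddr=vcenter.example.com \
        login=admin passwd=secret ssl=1 port=node2-vm pcmk_host_list=node2 delay=10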

Version-Release number of selected component (if applicable):
pcs - any release

How reproducible:
Every time

Steps to Reproduce:
1. Configure a shared fence device such as fence_vmware_soap or fence_rhevm. It does not allow setting the 'delay' fencing parameter for a single node; we need to create a separate fence device per node, or create fencing levels referring to that device.


Actual results:
The 'delay' fencing parameter applies to both nodes.

Expected results:
A single shared fence device should allow setting the 'delay' parameter (or, for that matter, any parameter that applies to a single node) for one node only.

Additional info:

Comment 1 Nitin Yewale 2014-03-28 19:53:43 UTC
Additional Info :

There's no customer request behind this (and hence no RFE template), but it seems potentially useful for customers in the future.

Comment 11 Ken Gaillot 2018-09-07 17:20:40 UTC
I could see implementing this as a pcmk_delay_map attribute with syntax similar to pcmk_host_map, e.g. pcmk_delay_map="node1:0;node2:1". The values would have the same format as any other time attribute in Pacemaker (e.g. "2", "2s", or "2000ms" would be accepted and equivalent).

For the record, here is existing functionality related to this design, most of which was added after this bug was filed:

* Pacemaker calculates the delay for any given device as the value of pcmk_delay_base (if specified) plus a random amount, keeping the total delay at or below pcmk_delay_max (if specified); see the sketches after this list.

* Two-node clusters may obtain quorum from qdevice, which involves a lightweight process running on a third host. The node that retains connectivity to the third host will keep quorum and thus be the sole node to fence. Qdevice may be extended in the future to allow other heuristics when deciding which host (or partition) keeps quorum.

* A simpler but similar approach to qdevice is to use the fence_heuristics_ping fence device in a topology level with the real device. The ping device will fail if it is unable to ping a configured address. This ensures that any node that loses connectivity to that address is unable to fence other nodes.
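
For illustration only (not from the original comment): minimal sketches of the existing options described in the bullets above, assuming a hypothetical shared fence_xvm device, placeholder node names, and a placeholder qnetd host.

    # Hypothetical example of the first bullet: a static base delay of 1s plus a
    # random component, with the total delay capped at 10s (no per-node values yet).
    pcs stonith create shared-fence fence_xvm \
        pcmk_host_map="node1:node1.example.com;node2:node2.example.com" \
        pcmk_delay_base=1s pcmk_delay_max=10s \
        op monitor interval=60s

    # Hypothetical example of the qdevice bullet: a third host (running qnetd)
    # arbitrates which node keeps quorum after a split.
    pcs quorum device add model net host=qnetd-host.example.com algorithm=ffsplit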

Comment 13 Ken Gaillot 2018-12-19 23:08:10 UTC
Moving to RHEL 8 only, as this will not make 7.7, which will be the last RHEL 7 feature release.

Comment 16 RHEL Program Management 2020-09-19 00:10:01 UTC
After evaluating this issue, we have no plans to address it further or fix it in an upcoming release. Therefore, it is being closed. If plans change such that this issue will be fixed in an upcoming release, the bug can be reopened.

Comment 21 Ken Gaillot 2020-10-02 21:08:48 UTC
Due to developer time constraints, and the availability of a workaround (configuring a separate fencing device for each host), this issue has been opened as an upstream bug report, and will be closed here. This bug will be reopened if developer time becomes available for it.

Comment 22 Ken Gaillot 2021-08-25 20:34:59 UTC
Fixed upstream by commit 7dd33e79

Comment 29 Markéta Smazová 2022-01-07 15:42:07 UTC
>   [root@virt-134 ~]# rpm -q pacemaker
>   pacemaker-2.1.2-2.el8.x86_64

Check pcmk_delay_base in metadata:

>   [root@virt-134 ~]# pcs stonith describe fence_xvm | grep delay -A1
>     delay: Fencing delay (in seconds; default=0)
>     domain: Virtual Machine (domain name) to fence (deprecated; use port)
>   --
>     pcmk_delay_max: Enable a delay of no more than the time specified before
>                     executing fencing actions. Pacemaker derives the overall delay
>                     by taking the value of pcmk_delay_base and adding a random
>                     delay value such that the sum is kept below this maximum. This
>                     prevents double fencing when using slow devices such as sbd.
>                     Use this to enable a random delay for fencing actions. The
>                     overall delay is derived from this random delay value adding a
>                     static delay so that the sum is kept below the maximum delay.
>     pcmk_delay_base: Enable a base delay for fencing actions and specify base
>                      delay value. This prevents double fencing when different
>                      delays are configured on the nodes. Use this to enable a
>                      static delay for fencing actions. The overall delay is
>                      derived from a random delay value adding this static delay so
>                      that the sum is kept below the maximum delay. Set to eg.
>                      node1:1s;node2:5 to set different value per node.


case 1
-------

Create a shared fence device with pcmk_delay_base set to 5s for virt-135:

>   [root@virt-134 ~]# pcs stonith create shared1 fence_xvm pcmk_host_map="virt-134:virt-134.cluster-qe.lab.eng.brq.redhat.com;virt-135:virt-135.cluster-qe.lab.eng.brq.redhat.com" pcmk_delay_base=virt-135:5s op monitor interval=60s

>   [root@virt-134 ~]# pcs stonith config
>    Resource: shared1 (class=stonith type=fence_xvm)
>     Attributes: pcmk_delay_base=virt-135:5s pcmk_host_map=virt-134:virt-134.cluster-qe.lab.eng.brq.redhat.com;virt-135:virt-135.cluster-qe.lab.eng.brq.redhat.com
>     Operations: monitor interval=60s (shared1-monitor-interval-60s)

>   [root@virt-134 ~]# pcs cluster enable --all
>   virt-134: Cluster Enabled
>   virt-135: Cluster Enabled

>   [root@virt-134 ~]# pcs status
>   Cluster name: STSRHTS31148
>   Cluster Summary:
>     * Stack: corosync
>     * Current DC: virt-135 (version 2.1.2-2.el8-ada5c3b36e2) - partition with quorum
>     * Last updated: Thu Jan  6 17:58:39 2022
>     * Last change:  Thu Jan  6 17:57:51 2022 by root via cibadmin on virt-134
>     * 2 nodes configured
>     * 1 resource instance configured

>   Node List:
>     * Online: [ virt-134 virt-135 ]

>   Full List of Resources:
>     * shared1	(stonith:fence_xvm):	 Started virt-134

>   Daemon Status:
>     corosync: active/enabled
>     pacemaker: active/enabled
>     pcsd: active/enabled


Block corosync to cause fencing:

>   [root@virt-135 ~]# ip6tables -A INPUT ! -i lo -p udp --dport 5404 -j DROP && ip6tables -A INPUT ! -i lo -p udp --dport 5405 -j DROP && ip6tables -A OUTPUT ! -o lo -p udp --sport 5404 -j DROP && ip6tables -A OUTPUT ! -o lo -p udp --sport 5405 -j DROP


Fencing on virt-135 is delayed for 5s, so virt-134 is fenced:

>   [root@virt-135 ~]# crm_mon -1m
>   Cluster Summary:
>     * Stack: corosync
>     * Current DC: virt-135 (version 2.1.2-2.el8-ada5c3b36e2) - partition with quorum
>     * Last updated: Thu Jan  6 18:16:04 2022
>     * Last change:  Thu Jan  6 17:57:51 2022 by root via cibadmin on virt-134
>     * 2 nodes configured
>     * 1 resource instance configured

>   Node List:
>     * Online: [ virt-135 ]
>     * OFFLINE: [ virt-134 ]

>   Active Resources:
>     * shared1	(stonith:fence_xvm):	 Started virt-135

>   Fencing History:
>     * reboot of virt-134 successful: delegate=virt-135, client=pacemaker-controld.6685, origin=virt-135, completed='2022-01-06 18:15:52 +01:00'


Pacemaker log excerpts:

>   [root@virt-134 ~]# tail -f /var/log/pacemaker/pacemaker.log | grep fenced
>   Jan 06 18:15:49 virt-134 pacemaker-fenced    [2942] (node_left) 	info: Group stonith-ng event 2: virt-135 (node 2 pid 6681) left via cluster exit
>   Jan 06 18:15:49 virt-134 pacemaker-fenced    [2942] (crm_update_peer_proc) 	info: node_left: Node virt-135[2] - corosync-cpg is now offline
>   Jan 06 18:15:49 virt-134 pacemaker-fenced    [2942] (update_peer_state_iter) 	notice: Node virt-135 state is now lost | nodeid=2 previous=member source=crm_update_peer_proc
>   Jan 06 18:15:49 virt-134 pacemaker-fenced    [2942] (crm_reap_dead_member) 	info: Removing node with name virt-135 and id 2 from membership cache
>   Jan 06 18:15:49 virt-134 pacemaker-fenced    [2942] (reap_crm_member) 	notice: Purged 1 peer with id=2 and/or uname=virt-135 from the membership cache
>   Jan 06 18:15:49 virt-134 pacemaker-fenced    [2942] (pcmk_cpg_membership) 	info: Group stonith-ng event 2: virt-134 (node 1 pid 2942) is member
>   Jan 06 18:15:49 virt-134 pacemaker-schedulerd[2945] (pe_fence_node) 	warning: Cluster node virt-135 will be fenced: peer is no longer part of the cluster
>   Jan 06 18:15:49 virt-134 pacemaker-schedulerd[2945] (order_stop_vs_fencing) 	info: shared1_stop_0 is implicit because virt-135 is fenced
>   Jan 06 18:15:49 virt-134 pacemaker-fenced    [2942] (handle_request) 	notice: Client pacemaker-controld.2946 wants to fence (reboot) virt-135 using any device
>   Jan 06 18:15:49 virt-134 pacemaker-fenced    [2942] (initiate_remote_stonith_op) 	notice: Requesting peer fencing (reboot) targeting virt-135 | id=05f53d83 state=querying base_timeout=60
>   Jan 06 18:15:49 virt-134 pacemaker-fenced    [2942] (can_fence_host_with_device) 	notice: shared1 is eligible to fence (reboot) virt-135 (aka. 'virt-135.cluster-qe.lab.eng.brq.redhat.com'): static-list
>   Jan 06 18:15:49 virt-134 pacemaker-fenced    [2942] (process_remote_stonith_query) 	info: Query result 1 of 1 from virt-134 for virt-135/reboot (1 device) 05f53d83-016d-48c9-bbb9-8a7c733656ca
>   Jan 06 18:15:49 virt-134 pacemaker-fenced    [2942] (process_remote_stonith_query) 	info: All query replies have arrived, continuing (1 expected/1 received) 
>   Jan 06 18:15:49 virt-134 pacemaker-fenced    [2942] (call_remote_stonith) 	info: Total timeout set to 60 for peer's fencing targeting virt-135 for pacemaker-controld.2946|id=05f53d83
>   Jan 06 18:15:49 virt-134 pacemaker-fenced    [2942] (call_remote_stonith) 	notice: Requesting that virt-134 perform 'reboot' action targeting virt-135 | for client pacemaker-controld.2946 (72s, 0s)
>   Jan 06 18:15:49 virt-134 pacemaker-fenced    [2942] (can_fence_host_with_device) 	notice: shared1 is eligible to fence (reboot) virt-135 (aka. 'virt-135.cluster-qe.lab.eng.brq.redhat.com'): static-list
>   Jan 06 18:15:49 virt-134 pacemaker-fenced    [2942] (stonith_fence_get_devices_cb) 	info: Found 1 matching device for target 'virt-135'
>   Jan 06 18:15:49 virt-134 pacemaker-fenced    [2942] (schedule_stonith_command) 	notice: Delaying 'reboot' action targeting virt-135 using shared1 for 5s | timeout=60s requested_delay=0s base=5s max=5s


>   [root@virt-135 ~]# tail -f /var/log/pacemaker/pacemaker.log | grep fenced
>   Jan 06 18:15:49 virt-135 pacemaker-fenced    [6681] (node_left) 	info: Group stonith-ng event 6: virt-134 (node 1 pid 2942) left via cluster exit
>   Jan 06 18:15:49 virt-135 pacemaker-fenced    [6681] (crm_update_peer_proc) 	info: node_left: Node virt-134[1] - corosync-cpg is now offline
>   Jan 06 18:15:49 virt-135 pacemaker-fenced    [6681] (update_peer_state_iter) 	notice: Node virt-134 state is now lost | nodeid=1 previous=member source=crm_update_peer_proc
>   Jan 06 18:15:49 virt-135 pacemaker-fenced    [6681] (crm_reap_dead_member) 	info: Removing node with name virt-134 and id 1 from membership cache
>   Jan 06 18:15:49 virt-135 pacemaker-fenced    [6681] (reap_crm_member) 	notice: Purged 1 peer with id=1 and/or uname=virt-134 from the membership cache
>   Jan 06 18:15:49 virt-135 pacemaker-fenced    [6681] (pcmk_cpg_membership) 	info: Group stonith-ng event 6: virt-135 (node 2 pid 6681) is member
>   Jan 06 18:15:50 virt-135 pacemaker-schedulerd[6684] (pe_fence_node) 	warning: Cluster node virt-134 will be fenced: peer is no longer part of the cluster
>   Jan 06 18:15:50 virt-135 pacemaker-fenced    [6681] (handle_request) 	notice: Client pacemaker-controld.6685 wants to fence (reboot) virt-134 using any device
>   Jan 06 18:15:50 virt-135 pacemaker-fenced    [6681] (initiate_remote_stonith_op) 	notice: Requesting peer fencing (reboot) targeting virt-134 | id=33360b9a state=querying base_timeout=60
>   Jan 06 18:15:50 virt-135 pacemaker-fenced    [6681] (can_fence_host_with_device) 	notice: shared1 is eligible to fence (reboot) virt-134 (aka. 'virt-134.cluster-qe.lab.eng.brq.redhat.com'): static-list
>   Jan 06 18:15:50 virt-135 pacemaker-fenced    [6681] (process_remote_stonith_query) 	info: Query result 1 of 1 from virt-135 for virt-134/reboot (1 device) 33360b9a-d04d-4928-9271-4551edeb8284
>   Jan 06 18:15:50 virt-135 pacemaker-fenced    [6681] (call_remote_stonith) 	info: Total timeout set to 60 for peer's fencing targeting virt-134 for pacemaker-controld.6685|id=33360b9a
>   Jan 06 18:15:50 virt-135 pacemaker-fenced    [6681] (call_remote_stonith) 	notice: Requesting that virt-135 perform 'reboot' action targeting virt-134 | for client pacemaker-controld.6685 (72s, 0s)
>   Jan 06 18:15:50 virt-135 pacemaker-fenced    [6681] (can_fence_host_with_device) 	notice: shared1 is eligible to fence (reboot) virt-134 (aka. 'virt-134.cluster-qe.lab.eng.brq.redhat.com'): static-list
>   Jan 06 18:15:50 virt-135 pacemaker-fenced    [6681] (stonith_fence_get_devices_cb) 	info: Found 1 matching device for target 'virt-134'
>   Jan 06 18:15:52 virt-135 pacemaker-fenced    [6681] (log_async_result) 	notice: Operation 'reboot' [8135] targeting virt-134 using shared1 returned 0 | call 13 from pacemaker-controld.6685
>   Jan 06 18:15:52 virt-135 pacemaker-fenced    [6681] (remote_op_done) 	notice: Operation 'reboot' targeting virt-134 by virt-135 for pacemaker-controld.6685@virt-135: OK | id=33360b9a


case 2
-------

Create a shared fence device with pcmk_delay_base set to 10 for virt-134:

>   [root@virt-134 ~]# pcs stonith create shared2 fence_xvm pcmk_host_map="virt-134:virt-134.cluster-qe.lab.eng.brq.redhat.com;virt-135:virt-135.cluster-qe.lab.eng.brq.redhat.com" pcmk_delay_base=virt-134:10 op monitor interval=60s

>   [root@virt-134 ~]# pcs stonith config
>    Resource: shared2 (class=stonith type=fence_xvm)
>     Attributes: pcmk_delay_base=virt-134:10 pcmk_host_map=virt-134:virt-134.cluster-qe.lab.eng.brq.redhat.com;virt-135:virt-135.cluster-qe.lab.eng.brq.redhat.com
>     Operations: monitor interval=60s (shared2-monitor-interval-60s)

>   [root@virt-134 ~]# pcs status
>   Cluster name: STSRHTS31148
>   Cluster Summary:
>     * Stack: corosync
>     * Current DC: virt-135 (version 2.1.2-2.el8-ada5c3b36e2) - partition with quorum
>     * Last updated: Fri Jan  7 10:48:55 2022
>     * Last change:  Fri Jan  7 10:48:28 2022 by root via cibadmin on virt-134
>     * 2 nodes configured
>     * 1 resource instance configured

>   Node List:
>     * Online: [ virt-134 virt-135 ]

>   Full List of Resources:
>     * shared2	(stonith:fence_xvm):	 Started virt-134

>   Daemon Status:
>     corosync: active/enabled
>     pacemaker: active/enabled
>     pcsd: active/enabled


Block corosync to cause fencing:

>   [root@virt-134 ~]# ip6tables -A INPUT ! -i lo -p udp --dport 5404 -j DROP && ip6tables -A INPUT ! -i lo -p udp --dport 5405 -j DROP && ip6tables -A OUTPUT ! -o lo -p udp --sport 5404 -j DROP && ip6tables -A OUTPUT ! -o lo -p udp --sport 5405 -j DROP


Fencing on virt-134 is delayed for 10s, so virt-135 is fenced:

>   [root@virt-134 ~]# crm_mon -1m
>   Cluster Summary:
>     * Stack: corosync
>     * Current DC: virt-134 (version 2.1.2-2.el8-ada5c3b36e2) - partition with quorum
>     * Last updated: Fri Jan  7 10:50:46 2022
>     * Last change:  Fri Jan  7 10:48:28 2022 by root via cibadmin on virt-134
>     * 2 nodes configured
>     * 1 resource instance configured

>   Node List:
>     * Online: [ virt-134 ]
>     * OFFLINE: [ virt-135 ]

>   Active Resources:
>     * shared2	(stonith:fence_xvm):	 Started virt-134

>   Fencing History:
>     * reboot of virt-135 successful: delegate=virt-134, client=pacemaker-controld.2990, origin=virt-134, completed='2022-01-07 10:50:24 +01:00'


Pacemaker log excerpts:

>   [root@virt-134 ~]# tail -f /var/log/pacemaker/pacemaker.log | grep fenced
>   Jan 07 10:50:21 virt-134 pacemaker-fenced    [2986] (node_left) 	info: Group stonith-ng event 2: virt-135 (node 2 pid 6681) left via cluster exit
>   Jan 07 10:50:21 virt-134 pacemaker-fenced    [2986] (crm_update_peer_proc) 	info: node_left: Node virt-135[2] - corosync-cpg is now offline
>   Jan 07 10:50:21 virt-134 pacemaker-fenced    [2986] (update_peer_state_iter) 	notice: Node virt-135 state is now lost | nodeid=2 previous=member source=crm_update_peer_proc
>   Jan 07 10:50:21 virt-134 pacemaker-fenced    [2986] (crm_reap_dead_member) 	info: Removing node with name virt-135 and id 2 from membership cache
>   Jan 07 10:50:21 virt-134 pacemaker-fenced    [2986] (reap_crm_member) 	notice: Purged 1 peer with id=2 and/or uname=virt-135 from the membership cache
>   Jan 07 10:50:21 virt-134 pacemaker-fenced    [2986] (pcmk_cpg_membership) 	info: Group stonith-ng event 2: virt-134 (node 1 pid 2986) is member
>   Jan 07 10:50:21 virt-134 pacemaker-schedulerd[2989] (pe_fence_node) 	warning: Cluster node virt-135 will be fenced: peer is no longer part of the cluster
>   Jan 07 10:50:21 virt-134 pacemaker-fenced    [2986] (handle_request) 	notice: Client pacemaker-controld.2990 wants to fence (reboot) virt-135 using any device
>   Jan 07 10:50:21 virt-134 pacemaker-fenced    [2986] (initiate_remote_stonith_op) 	notice: Requesting peer fencing (reboot) targeting virt-135 | id=13b22228 state=querying base_timeout=60
>   Jan 07 10:50:21 virt-134 pacemaker-fenced    [2986] (can_fence_host_with_device) 	notice: shared2 is eligible to fence (reboot) virt-135 (aka. 'virt-135.cluster-qe.lab.eng.brq.redhat.com'): static-list
>   Jan 07 10:50:21 virt-134 pacemaker-fenced    [2986] (process_remote_stonith_query) 	info: Query result 1 of 1 from virt-134 for virt-135/reboot (1 device) 13b22228-6ed8-482d-8b56-94ab928e3b6f
>   Jan 07 10:50:21 virt-134 pacemaker-fenced    [2986] (call_remote_stonith) 	info: Total timeout set to 60 for peer's fencing targeting virt-135 for pacemaker-controld.2990|id=13b22228
>   Jan 07 10:50:21 virt-134 pacemaker-fenced    [2986] (call_remote_stonith) 	notice: Requesting that virt-134 perform 'reboot' action targeting virt-135 | for client pacemaker-controld.2990 (72s, 0s)
>   Jan 07 10:50:21 virt-134 pacemaker-fenced    [2986] (can_fence_host_with_device) 	notice: shared2 is eligible to fence (reboot) virt-135 (aka. 'virt-135.cluster-qe.lab.eng.brq.redhat.com'): static-list
>   Jan 07 10:50:21 virt-134 pacemaker-fenced    [2986] (stonith_fence_get_devices_cb) 	info: Found 1 matching device for target 'virt-135'
>   Jan 07 10:50:24 virt-134 pacemaker-fenced    [2986] (log_async_result) 	notice: Operation 'reboot' [53366] targeting virt-135 using shared2 returned 0 | call 3 from pacemaker-controld.2990
>   Jan 07 10:50:24 virt-134 pacemaker-fenced    [2986] (remote_op_done) 	notice: Operation 'reboot' targeting virt-135 by virt-134 for pacemaker-controld.2990@virt-134: OK | id=13b22228


>   [root@virt-135 ~]# tail -f /var/log/pacemaker/pacemaker.log | grep fenced
>   Jan 07 10:50:21 virt-135 pacemaker-fenced    [6681] (node_left) 	info: Group stonith-ng event 8: virt-134 (node 1 pid 2986) left via cluster exit
>   Jan 07 10:50:21 virt-135 pacemaker-fenced    [6681] (crm_update_peer_proc) 	info: node_left: Node virt-134[1] - corosync-cpg is now offline
>   Jan 07 10:50:21 virt-135 pacemaker-fenced    [6681] (update_peer_state_iter) 	notice: Node virt-134 state is now lost | nodeid=1 previous=member source=crm_update_peer_proc
>   Jan 07 10:50:21 virt-135 pacemaker-fenced    [6681] (crm_reap_dead_member) 	info: Removing node with name virt-134 and id 1 from membership cache
>   Jan 07 10:50:21 virt-135 pacemaker-fenced    [6681] (reap_crm_member) 	notice: Purged 1 peer with id=1 and/or uname=virt-134 from the membership cache
>   Jan 07 10:50:21 virt-135 pacemaker-fenced    [6681] (pcmk_cpg_membership) 	info: Group stonith-ng event 8: virt-135 (node 2 pid 6681) is member
>   Jan 07 10:50:21 virt-135 pacemaker-schedulerd[6684] (pe_fence_node) 	warning: Cluster node virt-134 will be fenced: peer is no longer part of the cluster
>   Jan 07 10:50:21 virt-135 pacemaker-schedulerd[6684] (order_stop_vs_fencing) 	info: shared2_stop_0 is implicit because virt-134 is fenced
>   Jan 07 10:50:21 virt-135 pacemaker-fenced    [6681] (handle_request) 	notice: Client pacemaker-controld.6685 wants to fence (reboot) virt-134 using any device
>   Jan 07 10:50:21 virt-135 pacemaker-fenced    [6681] (initiate_remote_stonith_op) 	notice: Requesting peer fencing (reboot) targeting virt-134 | id=23ad5cad state=querying base_timeout=60
>   Jan 07 10:50:21 virt-135 pacemaker-fenced    [6681] (can_fence_host_with_device) 	notice: shared2 is eligible to fence (reboot) virt-134 (aka. 'virt-134.cluster-qe.lab.eng.brq.redhat.com'): static-list
>   Jan 07 10:50:21 virt-135 pacemaker-fenced    [6681] (process_remote_stonith_query) 	info: Query result 1 of 1 from virt-135 for virt-134/reboot (1 device) 23ad5cad-3c12-4c22-9fa1-033eeed6dd07
>   Jan 07 10:50:21 virt-135 pacemaker-fenced    [6681] (process_remote_stonith_query) 	info: All query replies have arrived, continuing (1 expected/1 received) 
>   Jan 07 10:50:21 virt-135 pacemaker-fenced    [6681] (call_remote_stonith) 	info: Total timeout set to 60 for peer's fencing targeting virt-134 for pacemaker-controld.6685|id=23ad5cad
>   Jan 07 10:50:21 virt-135 pacemaker-fenced    [6681] (call_remote_stonith) 	notice: Requesting that virt-135 perform 'reboot' action targeting virt-134 | for client pacemaker-controld.6685 (72s, 0s)
>   Jan 07 10:50:21 virt-135 pacemaker-fenced    [6681] (can_fence_host_with_device) 	notice: shared2 is eligible to fence (reboot) virt-134 (aka. 'virt-134.cluster-qe.lab.eng.brq.redhat.com'): static-list
>   Jan 07 10:50:21 virt-135 pacemaker-fenced    [6681] (stonith_fence_get_devices_cb) 	info: Found 1 matching device for target 'virt-134'
>   Jan 07 10:50:21 virt-135 pacemaker-fenced    [6681] (schedule_stonith_command) 	notice: Delaying 'reboot' action targeting virt-134 using shared2 for 10s | timeout=60s requested_delay=0s base=10s max=10s



case 3
-------

Create a shared fence device with pcmk_delay_base set to 8000ms for virt-134:

>   [root@virt-134 ~]# pcs stonith create shared3 fence_xvm pcmk_host_map="virt-134:virt-134.cluster-qe.lab.eng.brq.redhat.com;virt-135:virt-135.cluster-qe.lab.eng.brq.redhat.com" pcmk_delay_base=virt-134:8000ms op monitor interval=60s

>   [root@virt-134 ~]# pcs stonith config
>    Resource: shared3 (class=stonith type=fence_xvm)
>     Attributes: pcmk_delay_base=virt-134:8000ms pcmk_host_map=virt-134:virt-134.cluster-qe.lab.eng.brq.redhat.com;virt-135:virt-135.cluster-qe.lab.eng.brq.redhat.com
>     Operations: monitor interval=60s (shared3-monitor-interval-60s)

>   [root@virt-134 ~]# pcs status
>   Cluster name: STSRHTS31148
>   Cluster Summary:
>     * Stack: corosync
>     * Current DC: virt-134 (version 2.1.2-2.el8-ada5c3b36e2) - partition with quorum
>     * Last updated: Fri Jan  7 14:21:47 2022
>     * Last change:  Fri Jan  7 14:21:25 2022 by root via cibadmin on virt-134
>     * 2 nodes configured
>     * 1 resource instance configured

>   Node List:
>     * Online: [ virt-134 virt-135 ]

>   Full List of Resources:
>     * shared3	(stonith:fence_xvm):	 Started virt-134

>   Daemon Status:
>     corosync: active/enabled
>     pacemaker: active/enabled
>     pcsd: active/enabled


Block corosync to cause fencing:

>   [root@virt-135 ~]# ip6tables -A INPUT ! -i lo -p udp --dport 5404 -j DROP && ip6tables -A INPUT ! -i lo -p udp --dport 5405 -j DROP && ip6tables -A OUTPUT ! -o lo -p udp --sport 5404 -j DROP && ip6tables -A OUTPUT ! -o lo -p udp --sport 5405 -j DROP


Fencing on virt-134 is delayed for 8s, so virt-135 is fenced:

>   [root@virt-134 ~]# crm_mon -1m
>   Cluster Summary:
>     * Stack: corosync
>     * Current DC: virt-134 (version 2.1.2-2.el8-ada5c3b36e2) - partition with quorum
>     * Last updated: Fri Jan  7 14:57:25 2022
>     * Last change:  Fri Jan  7 14:21:25 2022 by root via cibadmin on virt-134
>     * 2 nodes configured
>     * 1 resource instance configured

>   Node List:
>     * Online: [ virt-134 ]
>     * OFFLINE: [ virt-135 ]

>   Active Resources:
>     * shared3	(stonith:fence_xvm):	 Started virt-134

>   Fencing History:
>     * reboot of virt-135 successful: delegate=virt-134, client=pacemaker-controld.2990, origin=virt-134, completed='2022-01-07 14:56:53 +01:00'


Pacemaker log excerpts:

>   [root@virt-134 ~]# tail -f /var/log/pacemaker/pacemaker.log | grep fenced
>   Jan 07 14:56:49 virt-134 pacemaker-fenced    [2986] (node_left) 	info: Group stonith-ng event 4: virt-135 (node 2 pid 2910) left via cluster exit
>   Jan 07 14:56:49 virt-134 pacemaker-fenced    [2986] (crm_update_peer_proc) 	info: node_left: Node virt-135[2] - corosync-cpg is now offline
>   Jan 07 14:56:49 virt-134 pacemaker-fenced    [2986] (update_peer_state_iter) 	notice: Node virt-135 state is now lost | nodeid=2 previous=member source=crm_update_peer_proc
>   Jan 07 14:56:49 virt-134 pacemaker-fenced    [2986] (crm_reap_dead_member) 	info: Removing node with name virt-135 and id 2 from membership cache
>   Jan 07 14:56:49 virt-134 pacemaker-fenced    [2986] (reap_crm_member) 	notice: Purged 1 peer with id=2 and/or uname=virt-135 from the membership cache
>   Jan 07 14:56:49 virt-134 pacemaker-fenced    [2986] (pcmk_cpg_membership) 	info: Group stonith-ng event 4: virt-134 (node 1 pid 2986) is member
>   Jan 07 14:56:50 virt-134 pacemaker-schedulerd[2989] (pe_fence_node) 	warning: Cluster node virt-135 will be fenced: peer is no longer part of the cluster
>   Jan 07 14:56:50 virt-134 pacemaker-fenced    [2986] (handle_request) 	notice: Client pacemaker-controld.2990 wants to fence (reboot) virt-135 using any device
>   Jan 07 14:56:50 virt-134 pacemaker-fenced    [2986] (initiate_remote_stonith_op) 	notice: Requesting peer fencing (reboot) targeting virt-135 | id=c8d6a5a4 state=querying base_timeout=60
>   Jan 07 14:56:50 virt-134 pacemaker-fenced    [2986] (can_fence_host_with_device) 	notice: shared3 is eligible to fence (reboot) virt-135 (aka. 'virt-135.cluster-qe.lab.eng.brq.redhat.com'): static-list
>   Jan 07 14:56:50 virt-134 pacemaker-fenced    [2986] (process_remote_stonith_query) 	info: Query result 1 of 1 from virt-134 for virt-135/reboot (1 device) c8d6a5a4-3ffb-4daa-8920-aacce2a280ba
>   Jan 07 14:56:50 virt-134 pacemaker-fenced    [2986] (call_remote_stonith) 	info: Total timeout set to 60 for peer's fencing targeting virt-135 for pacemaker-controld.2990|id=c8d6a5a4
>   Jan 07 14:56:50 virt-134 pacemaker-fenced    [2986] (call_remote_stonith) 	notice: Requesting that virt-134 perform 'reboot' action targeting virt-135 | for client pacemaker-controld.2990 (72s, 0s)
>   Jan 07 14:56:50 virt-134 pacemaker-fenced    [2986] (can_fence_host_with_device) 	notice: shared3 is eligible to fence (reboot) virt-135 (aka. 'virt-135.cluster-qe.lab.eng.brq.redhat.com'): static-list
>   Jan 07 14:56:50 virt-134 pacemaker-fenced    [2986] (stonith_fence_get_devices_cb) 	info: Found 1 matching device for target 'virt-135'
>   Jan 07 14:56:53 virt-134 pacemaker-fenced    [2986] (log_async_result) 	notice: Operation 'reboot' [66124] targeting virt-135 using shared3 returned 0 | call 5 from pacemaker-controld.2990
>   Jan 07 14:56:53 virt-134 pacemaker-fenced    [2986] (remote_op_done) 	notice: Operation 'reboot' targeting virt-135 by virt-134 for pacemaker-controld.2990@virt-134: OK | id=c8d6a5a4


>   [root@virt-135 ~]# tail -f /var/log/pacemaker/pacemaker.log | grep fenced
>   Jan 07 14:56:49 virt-135 pacemaker-fenced    [2910] (node_left) 	info: Group stonith-ng event 2: virt-134 (node 1 pid 2986) left via cluster exit
>   Jan 07 14:56:49 virt-135 pacemaker-fenced    [2910] (crm_update_peer_proc) 	info: node_left: Node virt-134[1] - corosync-cpg is now offline
>   Jan 07 14:56:49 virt-135 pacemaker-fenced    [2910] (update_peer_state_iter) 	notice: Node virt-134 state is now lost | nodeid=1 previous=member source=crm_update_peer_proc
>   Jan 07 14:56:49 virt-135 pacemaker-fenced    [2910] (crm_reap_dead_member) 	info: Removing node with name virt-134 and id 1 from membership cache
>   Jan 07 14:56:49 virt-135 pacemaker-fenced    [2910] (reap_crm_member) 	notice: Purged 1 peer with id=1 and/or uname=virt-134 from the membership cache
>   Jan 07 14:56:49 virt-135 pacemaker-fenced    [2910] (pcmk_cpg_membership) 	info: Group stonith-ng event 2: virt-135 (node 2 pid 2910) is member
>   Jan 07 14:56:49 virt-135 pacemaker-schedulerd[2913] (pe_fence_node) 	warning: Cluster node virt-134 will be fenced: peer is no longer part of the cluster
>   Jan 07 14:56:49 virt-135 pacemaker-schedulerd[2913] (order_stop_vs_fencing) 	info: shared3_stop_0 is implicit because virt-134 is fenced
>   Jan 07 14:56:49 virt-135 pacemaker-fenced    [2910] (handle_request) 	notice: Client pacemaker-controld.2914 wants to fence (reboot) virt-134 using any device
>   Jan 07 14:56:49 virt-135 pacemaker-fenced    [2910] (initiate_remote_stonith_op) 	notice: Requesting peer fencing (reboot) targeting virt-134 | id=cd71fb9e state=querying base_timeout=60
>   Jan 07 14:56:49 virt-135 pacemaker-fenced    [2910] (can_fence_host_with_device) 	notice: shared3 is eligible to fence (reboot) virt-134 (aka. 'virt-134.cluster-qe.lab.eng.brq.redhat.com'): static-list
>   Jan 07 14:56:49 virt-135 pacemaker-fenced    [2910] (process_remote_stonith_query) 	info: Query result 1 of 1 from virt-135 for virt-134/reboot (1 device) cd71fb9e-bec7-425a-8de3-f83642bdaf37
>   Jan 07 14:56:49 virt-135 pacemaker-fenced    [2910] (process_remote_stonith_query) 	info: All query replies have arrived, continuing (1 expected/1 received) 
>   Jan 07 14:56:49 virt-135 pacemaker-fenced    [2910] (call_remote_stonith) 	info: Total timeout set to 60 for peer's fencing targeting virt-134 for pacemaker-controld.2914|id=cd71fb9e
>   Jan 07 14:56:49 virt-135 pacemaker-fenced    [2910] (call_remote_stonith) 	notice: Requesting that virt-135 perform 'reboot' action targeting virt-134 | for client pacemaker-controld.2914 (72s, 0s)
>   Jan 07 14:56:49 virt-135 pacemaker-fenced    [2910] (can_fence_host_with_device) 	notice: shared3 is eligible to fence (reboot) virt-134 (aka. 'virt-134.cluster-qe.lab.eng.brq.redhat.com'): static-list
>   Jan 07 14:56:49 virt-135 pacemaker-fenced    [2910] (stonith_fence_get_devices_cb) 	info: Found 1 matching device for target 'virt-134'
>   Jan 07 14:56:49 virt-135 pacemaker-fenced    [2910] (schedule_stonith_command) 	notice: Delaying 'reboot' action targeting virt-134 using shared3 for 8s | timeout=60s requested_delay=0s base=8s max=8s


case 4
-------

Create a shared fence device with pcmk_delay_base set to 5s for virt-134 and 10s for virt-135:

>   [root@virt-134 ~]# pcs stonith create shared4 fence_xvm pcmk_host_map="virt-134:virt-134.cluster-qe.lab.eng.brq.redhat.com;virt-135:virt-135.cluster-qe.lab.eng.brq.redhat.com" pcmk_delay_base="virt-134:5;virt-135:10" op monitor interval=60s
>   [root@virt-134 ~]# pcs stonith config
>    Resource: shared4 (class=stonith type=fence_xvm)
>     Attributes: pcmk_delay_base=virt-134:5;virt-135:10 pcmk_host_map=virt-134:virt-134.cluster-qe.lab.eng.brq.redhat.com;virt-135:virt-135.cluster-qe.lab.eng.brq.redhat.com
>     Operations: monitor interval=60s (shared4-monitor-interval-60s)

>   [root@virt-134 ~]# pcs status
>   Cluster name: STSRHTS31148
>   Cluster Summary:
>     * Stack: corosync
>     * Current DC: virt-134 (version 2.1.2-2.el8-ada5c3b36e2) - partition with quorum
>     * Last updated: Fri Jan  7 16:27:03 2022
>     * Last change:  Fri Jan  7 16:26:21 2022 by root via cibadmin on virt-134
>     * 2 nodes configured
>     * 1 resource instance configured

>   Node List:
>     * Online: [ virt-134 virt-135 ]

>   Full List of Resources:
>     * shared4	(stonith:fence_xvm):	 Started virt-134

>   Daemon Status:
>     corosync: active/enabled
>     pacemaker: active/enabled
>     pcsd: active/enabled


Block corosync to cause fencing:

>   [root@virt-134 ~]# ip6tables -A INPUT ! -i lo -p udp --dport 5404 -j DROP && ip6tables -A INPUT ! -i lo -p udp --dport 5405 -j DROP && ip6tables -A OUTPUT ! -o lo -p udp --sport 5404 -j DROP && ip6tables -A OUTPUT ! -o lo -p udp --sport 5405 -j DROP


Delay on virt-135 is higher than on virt-134, so virt-134 is fenced:

>   [root@virt-135 ~]# crm_mon -1m
>   Cluster Summary:
>     * Stack: corosync
>     * Current DC: virt-135 (version 2.1.2-2.el8-ada5c3b36e2) - partition with quorum
>     * Last updated: Fri Jan  7 16:28:44 2022
>     * Last change:  Fri Jan  7 16:26:21 2022 by root via cibadmin on virt-134
>     * 2 nodes configured
>     * 1 resource instance configured

>   Node List:
>     * Online: [ virt-135 ]
>     * OFFLINE: [ virt-134 ]

>   Active Resources:
>     * shared4	(stonith:fence_xvm):	 Started virt-135

>   Fencing History:
>     * reboot of virt-134 successful: delegate=virt-135, client=pacemaker-controld.2949, origin=virt-135, completed='2022-01-07 16:28:29 +01:00'

Pacemaker log excerpts:

>   [root@virt-134 ~]# tail -f /var/log/pacemaker/pacemaker.log | grep fenced
>   Jan 07 16:28:22 virt-134 pacemaker-fenced    [2986] (node_left) 	info: Group stonith-ng event 6: virt-135 (node 2 pid 2945) left via cluster exit
>   Jan 07 16:28:22 virt-134 pacemaker-fenced    [2986] (crm_update_peer_proc) 	info: node_left: Node virt-135[2] - corosync-cpg is now offline
>   Jan 07 16:28:22 virt-134 pacemaker-fenced    [2986] (update_peer_state_iter) 	notice: Node virt-135 state is now lost | nodeid=2 previous=member source=crm_update_peer_proc
>   Jan 07 16:28:22 virt-134 pacemaker-fenced    [2986] (crm_reap_dead_member) 	info: Removing node with name virt-135 and id 2 from membership cache
>   Jan 07 16:28:22 virt-134 pacemaker-fenced    [2986] (reap_crm_member) 	notice: Purged 1 peer with id=2 and/or uname=virt-135 from the membership cache
>   Jan 07 16:28:22 virt-134 pacemaker-fenced    [2986] (pcmk_cpg_membership) 	info: Group stonith-ng event 6: virt-134 (node 1 pid 2986) is member
>   Jan 07 16:28:22 virt-134 pacemaker-schedulerd[2989] (pe_fence_node) 	warning: Cluster node virt-135 will be fenced: peer is no longer part of the cluster
>   Jan 07 16:28:22 virt-134 pacemaker-fenced    [2986] (handle_request) 	notice: Client pacemaker-controld.2990 wants to fence (reboot) virt-135 using any device
>   Jan 07 16:28:22 virt-134 pacemaker-fenced    [2986] (initiate_remote_stonith_op) 	notice: Requesting peer fencing (reboot) targeting virt-135 | id=fe2ef4d8 state=querying base_timeout=60
>   Jan 07 16:28:22 virt-134 pacemaker-fenced    [2986] (can_fence_host_with_device) 	notice: shared4 is eligible to fence (reboot) virt-135 (aka. 'virt-135.cluster-qe.lab.eng.brq.redhat.com'): static-list
>   Jan 07 16:28:22 virt-134 pacemaker-fenced    [2986] (process_remote_stonith_query) 	info: Query result 1 of 1 from virt-134 for virt-135/reboot (1 device) fe2ef4d8-ff9c-492c-91e6-70266f728766
>   Jan 07 16:28:22 virt-134 pacemaker-fenced    [2986] (call_remote_stonith) 	info: Total timeout set to 60 for peer's fencing targeting virt-135 for pacemaker-controld.2990|id=fe2ef4d8
>   Jan 07 16:28:22 virt-134 pacemaker-fenced    [2986] (call_remote_stonith) 	notice: Requesting that virt-134 perform 'reboot' action targeting virt-135 | for client pacemaker-controld.2990 (72s, 0s)
>   Jan 07 16:28:22 virt-134 pacemaker-fenced    [2986] (can_fence_host_with_device) 	notice: shared4 is eligible to fence (reboot) virt-135 (aka. 'virt-135.cluster-qe.lab.eng.brq.redhat.com'): static-list
>   Jan 07 16:28:22 virt-134 pacemaker-fenced    [2986] (stonith_fence_get_devices_cb) 	info: Found 1 matching device for target 'virt-135'
>   Jan 07 16:28:22 virt-134 pacemaker-fenced    [2986] (schedule_stonith_command) 	notice: Delaying 'reboot' action targeting virt-135 using shared4 for 10s | timeout=60s requested_delay=0s base=10s max=10s

>   [root@virt-135 ~]# tail -f /var/log/pacemaker/pacemaker.log | grep fenced
>   Jan 07 16:28:22 virt-135 pacemaker-fenced    [2945] (node_left) 	info: Group stonith-ng event 1: virt-134 (node 1 pid 2986) left via cluster exit
>   Jan 07 16:28:22 virt-135 pacemaker-fenced    [2945] (crm_update_peer_proc) 	info: node_left: Node virt-134[1] - corosync-cpg is now offline
>   Jan 07 16:28:22 virt-135 pacemaker-fenced    [2945] (update_peer_state_iter) 	notice: Node virt-134 state is now lost | nodeid=1 previous=member source=crm_update_peer_proc
>   Jan 07 16:28:22 virt-135 pacemaker-fenced    [2945] (crm_reap_dead_member) 	info: Removing node with name virt-134 and id 1 from membership cache
>   Jan 07 16:28:22 virt-135 pacemaker-fenced    [2945] (reap_crm_member) 	notice: Purged 1 peer with id=1 and/or uname=virt-134 from the membership cache
>   Jan 07 16:28:22 virt-135 pacemaker-fenced    [2945] (pcmk_cpg_membership) 	info: Group stonith-ng event 1: virt-135 (node 2 pid 2945) is member
>   Jan 07 16:28:22 virt-135 pacemaker-schedulerd[2948] (pe_fence_node) 	warning: Cluster node virt-134 will be fenced: peer is no longer part of the cluster
>   Jan 07 16:28:22 virt-135 pacemaker-schedulerd[2948] (order_stop_vs_fencing) 	info: shared4_stop_0 is implicit because virt-134 is fenced
>   Jan 07 16:28:22 virt-135 pacemaker-fenced    [2945] (handle_request) 	notice: Client pacemaker-controld.2949 wants to fence (reboot) virt-134 using any device
>   Jan 07 16:28:22 virt-135 pacemaker-fenced    [2945] (initiate_remote_stonith_op) 	notice: Requesting peer fencing (reboot) targeting virt-134 | id=2372736e state=querying base_timeout=60
>   Jan 07 16:28:22 virt-135 pacemaker-fenced    [2945] (can_fence_host_with_device) 	notice: shared4 is eligible to fence (reboot) virt-134 (aka. 'virt-134.cluster-qe.lab.eng.brq.redhat.com'): static-list
>   Jan 07 16:28:22 virt-135 pacemaker-fenced    [2945] (process_remote_stonith_query) 	info: Query result 1 of 1 from virt-135 for virt-134/reboot (1 device) 2372736e-20db-4428-990e-9e8cbe2ab93d
>   Jan 07 16:28:22 virt-135 pacemaker-fenced    [2945] (process_remote_stonith_query) 	info: All query replies have arrived, continuing (1 expected/1 received) 
>   Jan 07 16:28:22 virt-135 pacemaker-fenced    [2945] (call_remote_stonith) 	info: Total timeout set to 60 for peer's fencing targeting virt-134 for pacemaker-controld.2949|id=2372736e
>   Jan 07 16:28:22 virt-135 pacemaker-fenced    [2945] (call_remote_stonith) 	notice: Requesting that virt-135 perform 'reboot' action targeting virt-134 | for client pacemaker-controld.2949 (72s, 0s)
>   Jan 07 16:28:22 virt-135 pacemaker-fenced    [2945] (can_fence_host_with_device) 	notice: shared4 is eligible to fence (reboot) virt-134 (aka. 'virt-134.cluster-qe.lab.eng.brq.redhat.com'): static-list
>   Jan 07 16:28:22 virt-135 pacemaker-fenced    [2945] (stonith_fence_get_devices_cb) 	info: Found 1 matching device for target 'virt-134'
>   Jan 07 16:28:22 virt-135 pacemaker-fenced    [2945] (schedule_stonith_command) 	notice: Delaying 'reboot' action targeting virt-134 using shared4 for 5s | timeout=60s requested_delay=0s base=5s max=5s
>   Jan 07 16:28:29 virt-135 pacemaker-fenced    [2945] (log_async_result) 	notice: Operation 'reboot' [8406] targeting virt-134 using shared4 returned 0 | call 3 from pacemaker-controld.2949
>   Jan 07 16:28:29 virt-135 pacemaker-fenced    [2945] (remote_op_done) 	notice: Operation 'reboot' targeting virt-134 by virt-135 for pacemaker-controld.2949@virt-135: OK | id=2372736e


Marking verified in pacemaker-2.1.2-2.el8.

Comment 33 errata-xmlrpc 2022-05-10 14:09:46 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (pacemaker bug fix and enhancement update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2022:1885

