Bug 1454933 - Fencing occurs from a node even if fencing resource is banned from that node
Summary: Fencing occurs from a node even if fencing resource is banned from that node
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: pacemaker
Version: 7.4
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: medium
Target Milestone: rc
Target Release: 7.5
Assignee: Klaus Wenninger
QA Contact: cluster-qe@redhat.com
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2017-05-23 19:40 UTC by Klaus Wenninger
Modified: 2020-12-14 08:44 UTC
CC List: 6 users

Fixed In Version: pacemaker-1.1.18-1.el7
Doc Type: Bug Fix
Doc Text:
Cause: Previously, Pacemaker's stonithd did not watch the cluster configuration for rules that would affect stonith resources. Consequence: If a stonith device were banned from a node using a rule, Pacemaker might still execute the device from that node. Fix: Pacemaker's stonithd now watches the cluster configuration for rules that might affect stonith resources. Result: Banning a stonith device from a node using a rule prevents the cluster from executing that device from that node.
Clone Of:
Environment:
Last Closed: 2018-04-10 15:28:37 UTC
Target Upstream Version:
Embargoed:




Links
Red Hat Product Errata RHEA-2018:0860, last updated 2018-04-10 15:29:53 UTC

Description Klaus Wenninger 2017-05-23 19:40:46 UTC
Description of problem:

Fencing resources can be used from a node even if they are not started on that node.
There are, however, two ways to prevent them from being used from a specific node:

- disable the fencing-resource all the way
- ban it from that node (pcs resource ban {fencing-resource} {node})

This was already addressed by bz1240330 and verified to work with pacemaker 1.1.15, although it is not 100% clear whether verification included dynamic creation/deletion of ban rules.
Either way, creating a ban rule does not trigger an update of the stonith devices in stonithd (according to the logs), so the rule only becomes effective after another CIB change that does trigger stonithd to update, or after a restart of pacemaker/stonithd on the node.
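
For illustration, a minimal sketch of the symptom (device and node names are placeholders, not taken from a real cluster): the ban is visible in the CIB right away, yet stonithd on the banned node may keep the device registered until some unrelated CIB change arrives.

$ pcs resource ban fence-device node1   # typically writes a cli-ban-* rsc_location constraint
$ cibadmin -Q -o constraints            # the new constraint is already present in the CIB
$ stonith_admin --list-registered       # run on node1: the device may still show up here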

Version-Release number of selected component (if applicable):
pacemaker 1.1.17rc1


How reproducible:
100%


Steps to Reproduce:
1. Set up a 3-node cluster with a fencing resource primitive that is able to fence all 3 nodes
2. Use 'pcs resource ban {fencing-resource} {node}' to ban the fencing resource from node1 and node2
3. Pull the power cord of node3 (virtually is just fine ;-) ) to trigger the other 2 nodes into fencing it
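
A sketch of these steps as commands, assuming a fence_xvm device named 'fence' and node names node1/node2/node3 (all placeholders):

$ pcs stonith create fence fence_xvm pcmk_host_list="node1 node2 node3"   # step 1: one device able to fence all nodes
$ pcs resource ban fence node1                                            # step 2: ban the device from node1
$ pcs resource ban fence node2                                            #         ... and from node2
$ virsh destroy node3   # step 3: run on the hypervisor to "pull the power cord" of node3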

Actual results:
The fencing resource is used to fence node3 even though it is banned from both remaining nodes

Expected results:
node3 stays unclean, as there is no fencing device available to fence node3


Additional info:
Use cases such as forcing fencing resources to be executed from an explicit node (see bz1449155, for instance) require rules like this.
Note that the problem only occurs on dynamic creation of the location constraints.
It is nasty nonetheless, as an unchanged CIB produces different behaviour after a restart.
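
As an illustration only (not taken from this report), such a rule could be expressed with pcs roughly as follows, keeping a device named 'fence' off every node except node3:

$ pcs constraint location fence rule score=-INFINITY '#uname' ne node3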

Comment 3 Ken Gaillot 2017-06-09 21:23:27 UTC
Missed 7.4 deadline, will get into 7.5

Comment 4 michal novacek 2017-08-04 10:19:44 UTC
qa-ack+: clear reproducer in initial comment

Comment 6 Patrik Hagara 2018-02-15 14:06:07 UTC
setup:
* 3-node cluster
* a single fence_xvm stonith resource capable of fencing all nodes
* fence resource banned on nodes 1 and 2, so that only node 3 can use it
* drop incoming cluster traffic on node 3 via an iptables rule
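
The exact iptables rule is not quoted above; a sketch with the same effect, assuming the default corosync UDP ports, would be:

$ iptables -A INPUT -p udp --dport 5404:5406 -j DROP   # run on node 3: drop incoming cluster traffic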


before the fix (1.1.16-12.el7-94ff4df):
* node 3 declared lost shortly after blocking cluster traffic
* quorum retained with the remaining 2 nodes
* failed node (virt-258 in the log excerpt below) is briefly in "UNCLEAN (offline)" state
* one of the remaining cluster nodes performs the fencing operation even though the location constraints should prevent that
* failed node gets fenced, reboots and rejoins the cluster

> stonith-ng:     info: stonith_fence_get_devices_cb:   Found 1 matching devices for 'virt-258'
> stonith-ng:   notice: log_operation:  Operation 'reboot' [5443] (call 3 from crmd.1626) for host 'virt-258' with device 'fence' returned: 0 (OK)
> stonith-ng:   notice: remote_op_done: Operation reboot of virt-258 by virt-244 for crmd.1626: OK
> crmd:   notice: tengine_stonith_notify: Peer virt-258 was terminated (reboot) by virt-244 for virt-245: OK (ref=5d4ae1b9-0e01-4ff9-a1f7-7d0c3f70b638) by client crmd.1626


after the fix (1.1.18-11.el7-2b07d5c5a9):
* node 3 declared lost shortly after blocking cluster traffic
* quorum retained with the remaining 2 nodes
* neither of the remaining nodes can fence the failed node due to the -INFINITY location constraints
* failed node (virt-283 in the log excerpt below) remains in "UNCLEAN (offline)" state

> stonith-ng:   notice: remote_op_done:	Operation reboot of virt-283 by <no-one> for crmd.26189: No such device
> crmd:   notice: tengine_stonith_notify:	Peer virt-283 was not terminated (reboot) by <anyone> on behalf of crmd.26189: No such device | initiator=virt-282 ref=d64f1489-7f92-4e97-b4e4-1ecb5ac7f837

Comment 9 errata-xmlrpc 2018-04-10 15:28:37 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2018:0860

