Bug 2104705

Summary: A Pacemaker resource fails its monitor/stop operations, returning "rc=189"
Product: Red Hat Enterprise Linux 7
Version: 7.9
Component: pacemaker
Hardware: x86_64
OS: Linux
Reporter: yatanaka
Assignee: Ken Gaillot <kgaillot>
QA Contact: cluster-qe <cluster-qe>
CC: cluster-maint, nwahl
Status: CLOSED WONTFIX
Severity: medium
Priority: unspecified
Target Milestone: rc
Flags: kgaillot: needinfo? (nwahl)
Doc Type: If docs needed, set a value
Type: Bug
Last Closed: 2023-01-12 20:11:12 UTC

Description yatanaka 2022-07-07 00:08:23 UTC
Description of problem:

This is a Red Hat OpenStack Platform 13 environment.
In this environment, there are 3 cluster nodes (Controller nodes) and 6 remote nodes (Compute nodes).
This environment has the below STONITH levels for a remote node.
~~~
  Target: Hostname6
    Level 1 - stonith-fence_kdump-0000000000aa,stonith-fence_compute-fence-nova
    Level 2 - stonith-fence_ipmi-diag-0000000000aa,stonith-fence_kdump-0000000000aa,stonith-fence_compute-fence-nova
    Level 3 - stonith-fence_ipmilan-0000000000aa,stonith-fence_compute-fence-nova
~~~

My customer tested fencing by "pcs stonith fence Hostname6", and fencing succeeded as below.
~~~
Fencing History:
* unfencing of Hostname6 successful: delegate=controller0, client=crmd.761778, origin=controller0,
    completed='Mon Jul  4 16:15:18 2022'
* reboot of Hostname6 successful: delegate=controller1, client=stonith_admin.615233, origin=controller1,
    completed='Mon Jul  4 16:13:13 2022'
~~~

However, after fencing, the `compute-unfence-trigger` resource status becomes FAILED (blocked) and the remote node reboots repeatedly.
The `compute-unfence-trigger_stop_0` operation fails with an "'unknown' (189)" error.
~~~
 Clone Set: compute-unfence-trigger-clone [compute-unfence-trigger]
     compute-unfence-trigger    (ocf::pacemaker:Dummy): Started Hostname1
     compute-unfence-trigger    (ocf::pacemaker:Dummy): Started Hostname2
     compute-unfence-trigger    (ocf::pacemaker:Dummy): Started Hostname3
     compute-unfence-trigger    (ocf::pacemaker:Dummy): Started Hostname4
     compute-unfence-trigger    (ocf::pacemaker:Dummy): Started Hostname5
     compute-unfence-trigger    (ocf::pacemaker:Dummy): FAILED Hostname6 (blocked)

Failed Resource Actions:
* compute-unfence-trigger_stop_0 on Hostname6 'unknown' (189): call=16, status=Error, exitreason='',
    last-rc-change='Mon Jul  4 16:15:19 2022', queued=0ms, exec=0ms
~~~

Version-Release number of selected component (if applicable):

I found a similar Bugzilla report: https://bugzilla.redhat.com/show_bug.cgi?id=1704870.
Apparently, the same issue occurs even though the fixed version (pacemaker-1.1.23-1.el7_9.1.x86_64) is in use.


How reproducible:

My customer hits this issue with the configuration below.

Steps to Reproduce:

1. Configure STONITH with reference to https://access.redhat.com/solutions/1480813
~~~
  Target: Hostname6
    Level 1 - stonith-fence_kdump-0000000000aa,stonith-fence_compute-fence-nova
    Level 2 - stonith-fence_ipmi-diag-0000000000aa,stonith-fence_kdump-0000000000aa,stonith-fence_compute-fence-nova
    Level 3 - stonith-fence_ipmilan-0000000000aa,stonith-fence_compute-fence-nova
~~~
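The fencing topology above can be recreated with `pcs stonith level` commands. A minimal sketch, assuming the stonith devices themselves already exist and that the device IDs and target hostname match the output above (the exact `pcs` syntax can vary between pcs versions):

```shell
# Sketch only: register the three fencing levels for the remote node.
# Device IDs and hostname are taken from the topology shown above.
pcs stonith level add 1 Hostname6 \
    stonith-fence_kdump-0000000000aa,stonith-fence_compute-fence-nova
pcs stonith level add 2 Hostname6 \
    stonith-fence_ipmi-diag-0000000000aa,stonith-fence_kdump-0000000000aa,stonith-fence_compute-fence-nova
pcs stonith level add 3 Hostname6 \
    stonith-fence_ipmilan-0000000000aa,stonith-fence_compute-fence-nova

# Verify the resulting topology.
pcs stonith level
```

Lower-numbered levels are tried first; a level succeeds only if every device in it succeeds, otherwise fencing falls through to the next level.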

2. Test fencing with the following command:

  # pcs stonith fence Hostname6


Actual results:
The `compute-unfence-trigger` resource status becomes "FAILED (blocked)".
The remote node reboots repeatedly.

Expected results:
The `compute-unfence-trigger` resource status becomes "Started".
The remote node reboots only once, as a result of the fencing.

Comment 19 Ken Gaillot 2023-01-12 20:11:12 UTC
This issue is believed to affect only RHEL 7 clusters, and only when fencing is manually initiated (such as via "pcs stonith fence"). Given the status of the life cycle, this will not be fixed in RHEL 7. Upgrading to RHEL 8 or later should avoid the issue.

If there is sufficient reason to address this in RHEL 7, a backport of commit efc639cc83 would be worth testing.