Bug 1506372

| Summary: | resource probe return codes other than OCF_NOT_RUNNING all reported as error | | |
| --- | --- | --- | --- |
| Product: | Red Hat Enterprise Linux 8 | Reporter: | John Ruemker <jruemker> |
| Component: | pacemaker | Assignee: | Chris Lumens <clumens> |
| Status: | CLOSED ERRATA | QA Contact: | cluster-qe <cluster-qe> |
| Severity: | medium | Priority: | low |
| Version: | 8.0 | CC: | cluster-maint, kgaillot, msmazova, sbradley |
| Target Milestone: | pre-dev-freeze | Keywords: | Triaged |
| Target Release: | 8.6 | Type: | Enhancement |
| Hardware: | All | OS: | Linux |
| Fixed In Version: | pacemaker-2.1.2-3.el8 | Doc Type: | Enhancement |
| Last Closed: | 2022-05-10 14:09:46 UTC | Bug Blocks: | 2039982 |

Doc Text:
Feature: If a resource agent returns 2 (parameter invalid locally) or 5 (not installed) for a probe action (non-recurring monitor), Pacemaker will treat that as if the agent returned 7 (not running).
Reason: Users sometimes intentionally leave software uninstalled or unconfigured on a node that will never run the associated cluster resource (due to location constraints, etc.). Previously, unless the individual resource agent implemented a workaround, this would result in probe failures showing up in status displays, which would make it more difficult to notice real problems.
Result: If a probe returns not installed or not configured, that will no longer be displayed as a failed action, but rather as a reason when showing the resource as stopped.
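
In effect, a probe result is now reinterpreted as described above. As a schematic only (this is not Pacemaker's actual implementation, which lives in the C scheduler/controller code), the mapping looks like:

    # Schematic sketch of the mapping described in the Doc Text above.
    # It applies only to probes (non-recurring monitors), not to regular monitors.
    map_probe_rc() {
        case "$1" in
            2|5) echo 7 ;;   # OCF_ERR_CONFIGURED / OCF_ERR_INSTALLED -> OCF_NOT_RUNNING
            *)   echo "$1" ;;
        esac
    }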
Description
John Ruemker 2017-10-25 19:38:14 UTC
(In reply to John Ruemker from comment #0)
> Proposal summary: We should consider whether pacemaker's handling and
> reporting of probe return-codes is ideal, or could be improved - such as by
> reporting OCF_ERR_INSTALLED and OCF_ERR_CONFIGURED less concerningly, or by
> having that trigger a ban on nodes that gets reported clearly not as an
> error but as a resource-state condition.

What I was picturing with that suggestion was something like this in crm_mon / pcs status, reflecting that we've banned a resource on a node due to its probe result:

    Clone Set: jrummy1-clone [jrummy1]
        Started: [ rhel7-node1.example.com rhel7-node2.example.com rhel7-node3.example.com ]
        Stopped - Not Configured: [ rhel7-node4.example.com ]

There could be better terminology to use here, but my point is that we could report this just as a condition of how the resource runs - being banned or stopped on the affected nodes - rather than claiming it as an error. Technically it's not really an error - the resource wasn't expected to be running and wasn't running - so I feel it shouldn't rise to the level of other errors that get reported in crm_mon output as Failed Actions. The above example makes clear what was detected without turning it into something that has to be acted on.

To raise one more point that might be useful context here: the reason I was thinking about this at all is that we were considering whether the LVM agent returns the correct code when probed on a node without access to the relevant VG. That probe result creates a Failed Action report because the agent does not return OCF_NOT_RUNNING. It is clear there are some agents that avoid this (as is hinted at in the Pacemaker Explained section I referenced earlier), so I wanted to see if LVM should or could be doing a better job with its return code. What I found was that the agents that avoid these errors are the ones that mask these "not able to run here" conditions in their probe results. ocf:heartbeat:named is an example: if it is executing a probe, it translates OCF_ERR_INSTALLED and OCF_ERR_CONFIGURED to OCF_NOT_RUNNING. While that does successfully avoid the Failed Action report, it does it by masking the real condition that was detected, which isn't exactly ideal.

When considering the best strategy for all of our agents, it feels preferable to have them report exactly what they detect, and to have Pacemaker use that information intelligently, only taking action or raising alerts when it matters. That is my thinking behind this request.

> While that does successfully avoid the Failed Action report, it does it by masking
> the real condition that was detected, which isn't exactly ideal.
The problem is that we have no idea if the admin knows the software is not installed there. That may be why they are looking at pcs status.
If the agent does mask the error, eventually the admin will find out when the start operation fails (unless the admin explicitly tells the cluster not to start it on those nodes). Granted, it's not good to find out during an emergency that your one and only backup server didn't have the required bits.
If the agent doesn't mask the error (but the cluster does), then we'll never start the resource there and won't report an error. So the admin may be left wondering why the service isn't running.
If neither the agent nor the cluster masks the error, we'll get bugs like this one from admins annoyed about the noise from persistent expected "failures".
A bit of a no-win situation.
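
For reference, the agent-side workaround discussed above looks roughly like the sketch below. This is illustrative only, not the actual ocf:heartbeat:named code; the binary path, config path, and function name are placeholders, while have_binary and ocf_is_probe come from the standard OCF shell function library shipped with resource-agents.

    #!/bin/sh
    # Minimal sketch of an agent masking "not installed / not configured" during a probe.
    : ${OCF_ROOT=/usr/lib/ocf}
    : ${OCF_FUNCTIONS_DIR=${OCF_ROOT}/lib/heartbeat}
    . ${OCF_FUNCTIONS_DIR}/ocf-shellfuncs

    example_monitor() {
        if ! have_binary /usr/sbin/named; then
            rc=$OCF_ERR_INSTALLED            # software missing on this node
        elif [ ! -f /etc/named.conf ]; then
            rc=$OCF_ERR_CONFIGURED           # configuration missing on this node
        elif pgrep -x named >/dev/null; then
            rc=$OCF_SUCCESS                  # daemon is running
        else
            rc=$OCF_NOT_RUNNING              # installed and configured, but stopped
        fi

        # The workaround: a probe is a non-recurring monitor (ocf_is_probe checks
        # OCF_RESKEY_CRM_meta_interval), and reporting "not running" there avoids
        # a Failed Action entry, at the cost of masking the condition detected.
        if ocf_is_probe; then
            case $rc in
                $OCF_ERR_INSTALLED|$OCF_ERR_CONFIGURED) rc=$OCF_NOT_RUNNING ;;
            esac
        fi
        return $rc
    }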
(In reply to John Ruemker from comment #1)
> (In reply to John Ruemker from comment #0)
> > Proposal summary: We should consider whether pacemaker's handling and
> > reporting of probe return-codes is ideal, or could be improved - such as by
> > reporting OCF_ERR_INSTALLED and OCF_ERR_CONFIGURED less concerningly, or by
> > having that trigger a ban on nodes that gets reported clearly not as an
> > error but as a resource-state condition.
>
> What I was picturing with that suggestion was something like this in crm_mon
> / pcs status reflecting that we've banned a resource on a node due to its
> probe result:
>
> Clone Set: jrummy1-clone [jrummy1]
>     Started: [ rhel7-node1.example.com rhel7-node2.example.com rhel7-node3.example.com ]
>     Stopped - Not Configured: [ rhel7-node4.example.com ]

Oh, I didn't see this before my previous reply, but I like it!

Implemented upstream by commits f2e5189~1..ccff6eb

QA: To test, create an ocf:heartbeat:named resource without installing named on at least one node. Before the change, pcs status will show a failed resource action for the probe on any node without named. After, there should be no failed action, but the cluster should not attempt to start named on that node, even if a location constraint prefers it. (Any resource agent without a workaround for probes would work; named just happens to be one.)

before fix
----------

> [root@virt-320 ~]# rpm -q pacemaker
> pacemaker-2.1.0-8.el8.x86_64

> [root@virt-320 ~]# pcs status
> Cluster name: STSRHTS24683
> Cluster Summary:
>   * Stack: corosync
>   * Current DC: virt-321 (version 2.1.0-8.el8-7c3f660707) - partition with quorum
>   * Last updated: Tue Feb 22 16:41:39 2022
>   * Last change: Tue Feb 22 16:05:41 2022 by root via cibadmin on virt-320
>   * 2 nodes configured
>   * 2 resource instances configured
>
> Node List:
>   * Online: [ virt-320 virt-321 ]
>
> Full List of Resources:
>   * fence-virt-320 (stonith:fence_xvm): Started virt-320
>   * fence-virt-321 (stonith:fence_xvm): Started virt-321
>
> Daemon Status:
>   corosync: active/disabled
>   pacemaker: active/disabled
>   pcsd: active/enabled

Install package "bind", which provides "named", on only one of the cluster nodes.
> [root@virt-321 ~]# rpm -q bind
> bind-9.11.26-6.el8.x86_64

> [root@virt-320 ~]# rpm -q bind
> package bind is not installed

Start "named" on the node where "bind" is installed:

> [root@virt-321 ~]# systemctl start named && systemctl is-active named
> active

Create an ocf:heartbeat:named resource on the node where "named" is not installed:

> [root@virt-320 ~]# pcs resource create named1 ocf:heartbeat:named

> [root@virt-320 ~]# pcs status
> Cluster name: STSRHTS24683
> Cluster Summary:
>   * Stack: corosync
>   * Current DC: virt-321 (version 2.1.0-8.el8-7c3f660707) - partition with quorum
>   * Last updated: Tue Feb 22 16:47:03 2022
>   * Last change: Tue Feb 22 16:46:59 2022 by root via cibadmin on virt-320
>   * 2 nodes configured
>   * 3 resource instances configured
>
> Node List:
>   * Online: [ virt-320 virt-321 ]
>
> Full List of Resources:
>   * fence-virt-320 (stonith:fence_xvm): Started virt-320
>   * fence-virt-321 (stonith:fence_xvm): Started virt-321
>   * named1 (ocf::heartbeat:named): Stopped
>
> Failed Resource Actions:
>   * named1_monitor_0 on virt-320 'not installed' (5): call=15, status='complete', exitreason='Setup problem: couldn't find command: /usr/sbin/named', last-rc-change='2022-02-22 16:46:59 +01:00', queued=0ms, exec=35ms
>
> Daemon Status:
>   corosync: active/disabled
>   pacemaker: active/disabled
>   pcsd: active/enabled

pcs status shows a failed resource action for the probe on the node without "named" installed.

after fix
---------

> [root@virt-036 ~]# rpm -q pacemaker
> pacemaker-2.1.2-4.el8.x86_64

> [root@virt-036 ~]# pcs status
> Cluster name: STSRHTS14728
> Cluster Summary:
>   * Stack: corosync
>   * Current DC: virt-040 (version 2.1.2-4.el8-ada5c3b36e2) - partition with quorum
>   * Last updated: Tue Feb 22 15:37:04 2022
>   * Last change: Tue Feb 22 11:52:38 2022 by root via cibadmin on virt-036
>   * 2 nodes configured
>   * 2 resource instances configured
>
> Node List:
>   * Online: [ virt-036 virt-040 ]
>
> Full List of Resources:
>   * fence-virt-036 (stonith:fence_xvm): Started virt-036
>   * fence-virt-040 (stonith:fence_xvm): Started virt-040
>
> Daemon Status:
>   corosync: active/disabled
>   pacemaker: active/disabled
>   pcsd: active/enabled

Install package "bind", which provides "named", on only one of the cluster nodes.

> [root@virt-040 ~]# rpm -q bind
> bind-9.11.36-2.el8.x86_64
>
> [root@virt-036 ~]# rpm -q bind
> package bind is not installed

Start "named" on the node where "bind" is installed:

> [root@virt-040 ~]# systemctl start named && systemctl is-active named
> active

Create an ocf:heartbeat:named resource on the node where "named" is not installed:

> [root@virt-036 ~]# pcs resource create named2 ocf:heartbeat:named

> [root@virt-036 ~]# pcs resource
>   * named2 (ocf::heartbeat:named): Stopped (not installed)

> [root@virt-036 ~]# pcs status
> Cluster name: STSRHTS14728
> Cluster Summary:
>   * Stack: corosync
>   * Current DC: virt-040 (version 2.1.2-4.el8-ada5c3b36e2) - partition with quorum
>   * Last updated: Tue Feb 22 15:39:58 2022
>   * Last change: Tue Feb 22 15:39:50 2022 by root via cibadmin on virt-036
>   * 2 nodes configured
>   * 3 resource instances configured
>
> Node List:
>   * Online: [ virt-036 virt-040 ]
>
> Full List of Resources:
>   * fence-virt-036 (stonith:fence_xvm): Started virt-036
>   * fence-virt-040 (stonith:fence_xvm): Started virt-040
>   * named2 (ocf::heartbeat:named): Stopped (not installed)
>
> Daemon Status:
>   corosync: active/disabled
>   pacemaker: active/disabled
>   pcsd: active/enabled

pcs status does not show a failed resource action for the probe; resource named2 is Stopped (not installed).
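
The QA note above also says the resource should stay stopped even if a location constraint prefers the node without named; that case is not shown in the captured output. A possible additional check (standard pcs/crm_resource usage; the resource and node names follow the "after fix" setup, and the constraint score is arbitrary) would be:

    # Prefer the node where "named" is NOT installed (virt-036 in the setup above).
    pcs constraint location named2 prefers virt-036=100

    # The resource should still be reported as Stopped (not installed), with no
    # Failed Resource Actions entry, and should not be started on that node.
    pcs resource
    crm_resource --resource named2 --locate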
marking verified in pacemaker-2.1.2-4.el8

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (pacemaker bug fix and enhancement update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2022:1885