Bug 1577085

Summary: `crm_mon -s`: Improve printed outputs and return codes
Product: Red Hat Enterprise Linux 8
Reporter: Reid Wahl <nwahl>
Component: pacemaker
Assignee: Ken Gaillot <kgaillot>
Status: CLOSED WONTFIX
QA Contact: cluster-qe <cluster-qe>
Severity: low
Priority: low
Version: 8.0
CC: alexander.kohr, cluster-maint, nwahl
Target Milestone: pre-dev-freeze
Keywords: FutureFeature
Target Release: ---
Hardware: Unspecified
OS: Unspecified
Last Closed: 2021-07-26 15:15:58 UTC
Type: Feature Request

Description Reid Wahl 2018-05-11 07:51:03 UTC
Description of problem:

Customer has raised the concern that the output and return codes of `crm_mon -s` do not always accurately reflect the status of the cluster.

`crm_mon -s` is meant to provide a one-line output suitable for Nagios. Per the guidelines here (https://www.monitoring-plugins.org/doc/guidelines.html#AEN78), Nagios plugins can return one of four statuses: OK, Warning, Critical, or Unknown.
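
For reference, the guidelines map those four states onto fixed process exit codes. A minimal Python sketch of that convention (the "CLUSTER" prefix simply mirrors the existing `crm_mon -s` output style; nothing below is pacemaker code):

import sys

# Standard monitoring-plugin (Nagios) exit codes, per the guidelines linked above.
OK, WARNING, CRITICAL, UNKNOWN = 0, 1, 2, 3
LABELS = {OK: "OK", WARNING: "WARNING", CRITICAL: "CRITICAL", UNKNOWN: "UNKNOWN"}

def plugin_exit(status, message=""):
    """Print the single status line a Nagios-style check expects, then exit with the matching code."""
    line = f"CLUSTER {LABELS[status]}"
    if message:
        line += f": {message}"
    print(line)
    sys.exit(status)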


Excerpts from the customer:

    Bugzilla 1576103... looks like it says that both unclean (online)
    and unclean (offline) would be warning states. Currently offline
    is critical; should the bugzilla be reworded such that we are
    recommending unclean (online) be a warning and unclean (offline)
    be a critical error? ...a single system being down in a two-node
    cluster would result in a lack of redundancy, with imminent
    failure just around the corner, and would probably step up to the
    level of critical. Though maybe some kind of mathematical
    equation: if greater than or equal to fifty percent of the nodes
    are offline, it would go from warning to critical, since having
    one offline node in a two-node cluster leaves you in a position
    of imminent disaster, whereas in a three-or-more-node cluster
    having one unclean offline node would still only be a warning
    level, as another node could fail before it becomes critical.
    This math may be oversimplistic, though, because in a 10-node
    cluster having 4 nodes offline might cause an unacceptable
    slowdown.

    New potential consideration for the simple output.
    ...I would think that ["no DC"] would be a candidate to be an
    "Unknown", as it might be a configuration issue, like maybe
    you're running the report on a box that doesn't yet have a
    cluster configured. ...Regardless of whether I am right or wrong
    on how the prior works, I would think that the ideal simple
    output would have at least one unknown result for when the script
    is somehow added to a box that does not have the cluster software
    installed on it, returning an unknown result along the lines of
    "Can't find cluster software, some specific application, config
    file, …, or a cluster configuration".


--------------------

Version-Release number of selected component (if applicable):

pacemaker-1.1.18-11
master


--------------------

How reproducible:

Always


--------------------

Steps to Reproduce:

Run `crm_mon -s` with varying degrees of resource cleanliness and numbers of offline nodes. Exact details TBD.
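
A small helper for this step, which just surfaces what a monitoring system would see (it only assumes `crm_mon` is on the PATH):

import subprocess

# Run the one-line Nagios-style check and show both its output and its exit code.
result = subprocess.run(["crm_mon", "-s"], capture_output=True, text=True)
print("output:   ", result.stdout.strip() or result.stderr.strip())
print("exit code:", result.returncode)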


--------------------

Actual results:

The output and return codes may be too coarse to reflect the cluster's state accurately. For example, "OK" and "Warning" are the only states that are ever reported.


--------------------

Expected results:

Exact details TBD. The output and return codes accurately reflect the status of the cluster, in alignment with the Nagios plugin guidelines. For example, certain conditions may qualify as a critical state.


--------------------

Additional info:

Related to Bug 1576103 - `crm_mon -s` prints "CLUSTER OK" when there are unclean (online) nodes

Comment 4 Ken Gaillot 2019-03-27 20:58:27 UTC
Due to this not making the 7.7 time frame, I'm moving it to RHEL 8 only, as RHEL 7 will only be getting bug fixes from this point.

The approach I'm leaning toward here is creating a new tool (maybe called crm_check) that would solely be a monitoring plugin, since it doesn't really overlap with crm_mon much. We would likely handle Bug 1576103 at the same time.

Unfortunately due to developer constraints, I cannot commit to a release time frame.

Comment 7 Ken Gaillot 2021-07-26 15:15:58 UTC
Upstream has deprecated the crm_mon -s option and will eventually drop it. Users are advised to use a community-supplied or custom plugin instead.

If demand warrants, we could write a new plugin (separate from crm_mon) and submit it upstream. However, at this time, especially given the diversity of monitoring systems in use (Nagios, Prometheus, Zabbix, etc.), it seems unlikely to be a priority.
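
For anyone who lands here looking for the "custom plugin" route, a minimal, hypothetical sketch of such a wrapper. It shells out to `crm_mon -1` (one-shot text output) and maps the result onto the standard plugin exit codes; the substrings matched below are illustrative and version-dependent, so a real plugin should parse crm_mon's XML output instead:

#!/usr/bin/env python3
# Hypothetical custom monitoring plugin wrapping crm_mon; a sketch, not a supported tool.
import subprocess
import sys

OK, WARNING, CRITICAL, UNKNOWN = 0, 1, 2, 3

def main():
    try:
        result = subprocess.run(["crm_mon", "-1"], capture_output=True, text=True, timeout=30)
    except FileNotFoundError:
        print("CLUSTER UNKNOWN: crm_mon not found (is pacemaker installed?)")
        return UNKNOWN
    except subprocess.TimeoutExpired:
        print("CLUSTER UNKNOWN: crm_mon timed out")
        return UNKNOWN

    if result.returncode != 0:
        print("CLUSTER UNKNOWN: crm_mon could not query the cluster")
        return UNKNOWN

    out = result.stdout
    if "UNCLEAN (offline)" in out:
        print("CLUSTER CRITICAL: unclean (offline) node(s) detected")
        return CRITICAL
    if "UNCLEAN" in out or "OFFLINE" in out:
        print("CLUSTER WARNING: unclean or offline node(s) detected")
        return WARNING
    print("CLUSTER OK")
    return OK

if __name__ == "__main__":
    sys.exit(main())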