Bug 1427273 - Support planned changes in pacemaker failure handling
Summary: Support planned changes in pacemaker failure handling
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: pcs
Version: 7.3
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: rc
Target Release: ---
Assignee: Tomas Jelinek
QA Contact: cluster-qe@redhat.com
Docs Contact: Steven J. Levine
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2017-02-27 18:11 UTC by Ken Gaillot
Modified: 2018-10-30 08:06 UTC
CC List: 9 users

Fixed In Version: pcs-0.9.165-1.el7
Doc Type: Release Note
Doc Text:
The "pcs" command now supports filtering resource failures by an operation and its interval Pacemaker now tracks resource failures per a resource operation on top of a resource name, and a node. The "pcs resource failcount show" command now allows filtering failures by a resource, node, operation, and interval. It provides an option to display failures aggregated per a resource and node or detailed per a resource, node, operation, and its interval. Additionally, the "pcs resource failcount reset" command now allows filtering failures by a resource, node, operation, and interval.
Clone Of:
: 1591308 (view as bug list)
Environment:
Last Closed: 2018-10-30 08:05:31 UTC
Target Upstream Version:
Embargoed:


Attachments
proposed fix + test (54.10 KB, patch)
2018-06-14 14:17 UTC, Tomas Jelinek
no flags


Links
System ID Private Priority Status Summary Last Updated
Red Hat Bugzilla 1549576 1 medium CLOSED double increment of failcount upon single failure of start operation 2023-09-15 00:06:40 UTC
Red Hat Bugzilla 1588667 1 None None None 2022-03-13 15:05:09 UTC
Red Hat Knowledge Base (Solution) 2894001 0 None None None 2018-08-07 13:31:45 UTC
Red Hat Knowledge Base (Solution) 3543701 0 None None None 2018-08-07 13:32:15 UTC
Red Hat Product Errata RHBA-2018:3066 0 None None None 2018-10-30 08:06:10 UTC

Internal Links: 1549576 1588667

Description Ken Gaillot 2017-02-27 18:11:12 UTC
The fixes for Bug 1328448 and Bug 1371576 will involve a major overhaul of Pacemaker's failure handling options. This should be mostly transparent to pcs, since the fixes will simply involve new operation options.

One change that will affect pcs is that Pacemaker will now track failures per operation, not just per resource. This functionality is still in active development, so details may change, but the current plan is that crm_resource --cleanup will get a new option, --operation <op> <interval>. If the option is not supplied, the behavior is the same as before (all failures for the resource will be cleaned). If the option is specified, only failures for that particular operation+resource will be cleaned. So, pcs resource cleanup should get a similar option.
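
For illustration only, a cleanup scoped to one operation under the plan described above might have looked like the following (a hypothetical sketch of the planned syntax; the option shape changed before it was implemented -- see comment 2 below for the flags that were actually added):

crm_resource --cleanup --resource myrsc --operation monitor 10s   --> planned form: clean only failures of the 10-second monitor of myrsc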

The goal is to have the new options available for 7.4, but that is not certain at this point.

Comment 2 Ken Gaillot 2017-03-14 20:41:20 UTC
The per-operation failure tracking will almost certainly make 7.4, though the new failure handling options likely won't.

This will affect pcs resource cleanup and pcs resource failcount.

crm_resource --cleanup and crm_failcount will both get new options -n/--operation <op> and -I/--interval <interval>. (Our option parsing code doesn't allow two values for one option.)

Operation requires a resource to be specified (cleanup allows omitting the resource, in which case all resources are cleaned up -- and now all operations for all resources). Interval requires an operation to be specified.

If operation is not specified, the behavior is the same as before (in effect, it defaults to all operations on the specified resource). If operation is specified but not interval, interval defaults to 0 (NOT all intervals).

Examples:

crm_resource -C    --> still cleans up all failures cluster-wide
crm_resource -C -r myrsc    --> still cleans up all failures of myrsc
crm_resource -C -r myrsc -n start   --> cleans up only start failures of myrsc
crm_resource -C -r myrsc -n monitor -I 10s  --> cleans up failures of 10-second monitor of myrsc
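
For reference, the pcs counterparts that eventually shipped express the same filters as positional arguments on the failcount commands. A minimal sketch based on the verification transcript in comment 13 ('myrsc' and 'node1' are placeholder names; the interval is given as a plain number, as in that transcript):

pcs resource failcount show myrsc node1 monitor 10    --> show only failures of myrsc's monitor with interval 10 on node1
pcs resource failcount reset myrsc node1 monitor 10   --> clean up only those failures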

Comment 9 Tomas Jelinek 2018-06-14 14:17:35 UTC
Created attachment 1451421 [details]
proposed fix + test

Comment 13 Ivan Devat 2018-06-26 06:55:23 UTC
After Fix:

[ant ~] $ rpm -q pcs pcs-snmp
pcs-0.9.165-1.el7.x86_64
pcs-snmp-0.9.165-1.el7.x86_64

[ant ~] $ pcs resource
 R1     (ocf::heartbeat:Dummy): Started bee
 R2     (ocf::heartbeat:Dummy): Started ant

> generate some fails

[ant ~] $ crm_resource -F -r R1 -N bee
Waiting for 1 replies from the CRMd. OK
[ant ~] $ crm_resource -F -r R1 -N bee
Waiting for 1 replies from the CRMd. OK
[ant ~] $ crm_resource -F -r R1 -N ant
Waiting for 1 replies from the CRMd. OK
[ant ~] $ crm_resource -F -r R2 -N ant
Waiting for 1 replies from the CRMd. OK
[ant ~] $ crm_resource -F -r R2 -N bee
Waiting for 1 replies from the CRMd. OK

> show all failcounts

[ant ~] $ pcs resource failcount show
Failcounts for resource 'R1'
  ant: 2
  bee: 3
Failcounts for resource 'R2'
  ant: 1
  bee: 1

[ant ~] $ pcs resource failcount show --full
Failcounts for resource 'R1'
  ant:
    asyncmon 0ms: 2
  bee:
    asyncmon 0ms: 3
Failcounts for resource 'R2'
  ant:
    asyncmon 0ms: 1
  bee:
    asyncmon 0ms: 1

> show specific failcounts

[ant ~] $ pcs resource failcount show R1
Failcounts for resource 'R1'
  ant: 2
  bee: 3

[ant ~] $ pcs resource failcount show R2
Failcounts for resource 'R2'
  ant: 1
  bee: 1

[ant ~] $ pcs resource failcount show R1 ant
Failcounts for resource 'R1' on node 'ant'
  ant: 2

[ant ~] $ pcs resource failcount show R1 ant start
No failcounts for operation 'start' of resource 'R1' on node 'ant'

[ant ~] $ pcs resource failcount show R1 ant asyncmon
Failcounts for operation 'asyncmon' of resource 'R1' on node 'ant'
  ant: 2

[ant ~] $ pcs resource failcount show R1 ant asyncmon 1
No failcounts for operation 'asyncmon' with interval '1' of resource 'R1' on node 'ant'

[ant ~] $ pcs resource failcount show R1 ant asyncmon 0
Failcounts for operation 'asyncmon' with interval '0' of resource 'R1' on node 'ant'
  ant: 2

> reset failcounts

> interval

[ant ~] $ pcs resource failcount reset R1 ant asyncmon 1
Cleaned up R1 on ant
[ant ~] $ pcs resource failcount show R1 ant asyncmon 0
Failcounts for operation 'asyncmon' with interval '0' of resource 'R1' on node 'ant'
  ant: 2
[ant ~] $ pcs resource failcount reset R1 ant asyncmon 0

Cleaned up R1 on ant
Waiting for 1 replies from the CRMd. OK
[ant ~] $ pcs resource failcount show R1 ant asyncmon 0
No failcounts for operation 'asyncmon' with interval '0' of resource 'R1' on node 'ant'

> operation

[ant ~] $ pcs resource failcount reset R2 ant start
Cleaned up R2 on ant
[ant ~] $ pcs resource failcount show R2 ant asyncmon
Failcounts for operation 'asyncmon' of resource 'R2' on node 'ant'
  ant: 1

[ant ~] $ pcs resource failcount reset R2 ant asyncmon
Cleaned up R2 on ant
Waiting for 1 replies from the CRMd. OK
[ant ~] $ pcs resource failcount show R2 ant asyncmon
No failcounts for operation 'asyncmon' of resource 'R2' on node 'ant'

> node

[ant ~] $ pcs resource failcount reset R1 ant
Cleaned up R1 on ant
[ant ~] $ pcs resource failcount show R1 bee 
Failcounts for resource 'R1' on node 'bee'
  bee: 3

[ant ~] $ pcs resource failcount reset R1 bee
Cleaned up R1 on bee
Waiting for 1 replies from the CRMd. OK
[ant ~] $ pcs resource failcount show R1 bee 
No failcounts for resource 'R1' on node 'bee'

> resource

[ant ~] $ pcs resource failcount reset R1
Cleaned up R1 on bee
Cleaned up R1 on ant
[ant ~] $ pcs resource failcount show R2 
Failcounts for resource 'R2'
  bee: 1

[ant ~] $ pcs resource failcount reset R2
Cleaned up R2 on bee
Cleaned up R2 on ant
Waiting for 1 replies from the CRMd. OK
[ant ~] $ pcs resource failcount show R2 
No failcounts for resource 'R2'
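
To summarize the usage pattern exercised above (inferred from this transcript rather than from the pcs documentation, so treat it as a sketch): both failcount subcommands take optional positional filters, and each filter requires the one before it:

pcs resource failcount show [<resource> [<node> [<operation> [<interval>]]]] [--full]
pcs resource failcount reset <resource> [<node> [<operation> [<interval>]]]

Omitting a filter widens the scope (for example, "show" with no arguments lists all failures cluster-wide), and --full breaks the counts down per operation and interval, matching the per-operation tracking described in comment 2.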

Comment 19 errata-xmlrpc 2018-10-30 08:05:31 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2018:3066

