Bug 1427273

Summary: Support planned changes in pacemaker failure handling
Product: Red Hat Enterprise Linux 7
Component: pcs
Version: 7.3
Status: CLOSED ERRATA
Severity: medium
Priority: medium
Target Milestone: rc
Keywords: FutureFeature
Hardware: Unspecified
OS: Unspecified
Reporter: Ken Gaillot <kgaillot>
Assignee: Tomas Jelinek <tojeline>
QA Contact: cluster-qe <cluster-qe>
Docs Contact: Steven J. Levine <slevine>
CC: cfeist, cluster-maint, idevat, kgaillot, mlisik, omular, rsteiger, sbradley, tojeline
Fixed In Version: pcs-0.9.165-1.el7
Doc Type: Release Note
Doc Text:
The "pcs" command now supports filtering resource failures by operation and interval. Pacemaker now tracks resource failures per resource operation, in addition to resource name and node. The "pcs resource failcount show" command now allows filtering failures by resource, node, operation, and interval, and it provides an option to display failures either aggregated per resource and node or detailed per resource, node, operation, and interval. Additionally, the "pcs resource failcount reset" command now allows filtering failures by resource, node, operation, and interval.
Clones: 1591308
Last Closed: 2018-10-30 08:05:31 UTC
Type: Bug
Attachments:
  proposed fix + test

Description Ken Gaillot 2017-02-27 18:11:12 UTC
The fixes for Bug 1328448 and Bug 1371576 will involve a major overhaul of Pacemaker's failure handling options. This should be mostly transparent to pcs, since the changes will simply involve new operation options.

One change that will affect pcs is that Pacemaker will now track failures per operation, not just per resource. This functionality is still in active development, so details may change, but the current plan is that crm_resource --cleanup will get a new option, --operation <op> <interval>. If the option is not supplied, the behavior is the same as before (all failures for the resource will be cleaned). If the option is specified, only failures for that particular operation+resource will be cleaned. So, pcs resource cleanup should get a similar option.
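
For illustration only, a sketch of the planned invocation as described above (the exact option syntax may still change):

crm_resource --cleanup -r myrsc                           --> cleans all failures of myrsc, as before
crm_resource --cleanup -r myrsc --operation monitor 10s   --> cleans only failures of myrsc's 10-second monitor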

The goal is to have the new options available for 7.4, but that is not certain at this point.

Comment 2 Ken Gaillot 2017-03-14 20:41:20 UTC
The per-operation failure tracking will almost certainly make 7.4, though the new failure handling options likely won't.

This will affect pcs resource cleanup and pcs resource failcount.

crm_resource --cleanup and crm_failcount will both get new options -n/--operation <op> and -I/--interval <interval>. (Our option parsing code doesn't allow two values for one option.)

Operation requires a resource to be specified (cleanup allows omitting the resource, in which case all resources are cleaned up -- and now all operations for all resources). Interval requires an operation to be specified.

If operation is not specified, the behavior is the same as before (in effect, it defaults to all operations on the specified resource). If operation is specified but not interval, interval defaults to 0 (NOT all intervals).

Examples:

crm_resource -C    --> still cleans up all failures cluster-wide
crm_resource -C -r myrsc    --> still cleans up all failures of myrsc
crm_resource -C -r myrsc -n start   --> cleans up only start failures of myrsc
crm_resource -C -r myrsc -n monitor -I 10s  --> cleans up failures of 10-second monitor of myrsc
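
crm_failcount gets the same two options. For illustration, a sketch of how they would combine with its existing query/delete and resource selectors (assumed here, not spelled out in this comment):

crm_failcount --query -r myrsc                      --> queries failures of myrsc (all operations)
crm_failcount --query -r myrsc -n start             --> queries only start failures of myrsc
crm_failcount --delete -r myrsc -n monitor -I 10s   --> clears only failures of the 10-second monitor of myrsc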

Comment 9 Tomas Jelinek 2018-06-14 14:17:35 UTC
Created attachment 1451421 [details]
proposed fix + test

Comment 13 Ivan Devat 2018-06-26 06:55:23 UTC
After Fix:

[ant ~] $ rpm -q pcs pcs-snmp
pcs-0.9.165-1.el7.x86_64
pcs-snmp-0.9.165-1.el7.x86_64

[ant ~] $ pcs resource
 R1     (ocf::heartbeat:Dummy): Started bee
 R2     (ocf::heartbeat:Dummy): Started ant

> generate some failures

[ant ~] $ crm_resource -F -r R1 -N bee
Waiting for 1 replies from the CRMd. OK
[ant ~] $ crm_resource -F -r R1 -N bee
Waiting for 1 replies from the CRMd. OK
[ant ~] $ crm_resource -F -r R1 -N ant
Waiting for 1 replies from the CRMd. OK
[ant ~] $ crm_resource -F -r R2 -N ant
Waiting for 1 replies from the CRMd. OK
[ant ~] $ crm_resource -F -r R2 -N bee
Waiting for 1 replies from the CRMd. OK

> show all failcounts

[ant ~] $ pcs resource failcount show
Failcounts for resource 'R1'
  ant: 2
  bee: 3
Failcounts for resource 'R2'
  ant: 1
  bee: 1

[ant ~] $ pcs resource failcount show --full
Failcounts for resource 'R1'
  ant:
    asyncmon 0ms: 2
  bee:
    asyncmon 0ms: 3
Failcounts for resource 'R2'
  ant:
    asyncmon 0ms: 1
  bee:
    asyncmon 0ms: 1

> show specific failcounts

[ant ~] $ pcs resource failcount show R1
Failcounts for resource 'R1'
  ant: 2
  bee: 3

[ant ~] $ pcs resource failcount show R2
Failcounts for resource 'R2'
  ant: 1
  bee: 1

[ant ~] $ pcs resource failcount show R1 ant
Failcounts for resource 'R1' on node 'ant'
  ant: 2

[ant ~] $ pcs resource failcount show R1 ant start
No failcounts for operation 'start' of resource 'R1' on node 'ant'

[ant ~] $ pcs resource failcount show R1 ant asyncmon
Failcounts for operation 'asyncmon' of resource 'R1' on node 'ant'
  ant: 2

[ant ~] $ pcs resource failcount show R1 ant asyncmon 1
No failcounts for operation 'asyncmon' with interval '1' of resource 'R1' on node 'ant'

[ant ~] $ pcs resource failcount show R1 ant asyncmon 0
Failcounts for operation 'asyncmon' with interval '0' of resource 'R1' on node 'ant'
  ant: 2

> reset failcounts

> interval

[ant ~] $ pcs resource failcount reset R1 ant asyncmon 1
Cleaned up R1 on ant
[ant ~] $ pcs resource failcount show R1 ant asyncmon 0
Failcounts for operation 'asyncmon' with interval '0' of resource 'R1' on node 'ant'
  ant: 2
[ant ~] $ pcs resource failcount reset R1 ant asyncmon 0

Cleaned up R1 on ant
Waiting for 1 replies from the CRMd. OK
[ant ~] $ pcs resource failcount show R1 ant asyncmon 0
No failcounts for operation 'asyncmon' with interval '0' of resource 'R1' on node 'ant'

> operation

[ant ~] $ pcs resource failcount reset R2 ant start
Cleaned up R2 on ant
[ant ~] $ pcs resource failcount show R2 ant asyncmon
Failcounts for operation 'asyncmon' of resource 'R2' on node 'ant'
  ant: 1

[ant ~] $ pcs resource failcount reset R2 ant asyncmon
Cleaned up R2 on ant
Waiting for 1 replies from the CRMd. OK
[ant ~] $ pcs resource failcount show R2 ant asyncmon
No failcounts for operation 'asyncmon' of resource 'R2' on node 'ant'

> node

[ant ~] $ pcs resource failcount reset R1 ant
Cleaned up R1 on ant
[ant ~] $ pcs resource failcount show R1 bee 
Failcounts for resource 'R1' on node 'bee'
  bee: 3

[ant ~] $ pcs resource failcount reset R1 bee
Cleaned up R1 on bee
Waiting for 1 replies from the CRMd. OK
[ant ~] $ pcs resource failcount show R1 bee 
No failcounts for resource 'R1' on node 'bee'

> resource

[ant ~] $ pcs resource failcount reset R1
Cleaned up R1 on bee
Cleaned up R1 on ant
[ant ~] $ pcs resource failcount show R2 
Failcounts for resource 'R2'
  bee: 1

[ant ~] $ pcs resource failcount reset R2
Cleaned up R2 on bee
Cleaned up R2 on ant
Waiting for 1 replies from the CRMd. OK
[ant ~] $ pcs resource failcount show R2 
No failcounts for resource 'R2'
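
> syntax exercised above, inferred from this transcript (the exact usage text in pcs help may differ)

pcs resource failcount show [<resource> [<node> [<operation> [<interval>]]]] [--full]
pcs resource failcount reset <resource> [<node> [<operation> [<interval>]]]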

Comment 19 errata-xmlrpc 2018-10-30 08:05:31 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2018:3066