Red Hat Bugzilla – Bug 1427273
Support planned changes in pacemaker failure handling
Last modified: 2018-10-30 04:06:11 EDT
The fixes for Bug 1328448 and Bug 1371576 will involve a major overhaul of Pacemaker's failure handling options. This should be mostly transparent to pcs, since the changes will mostly surface as new operation options. One change that will affect pcs is that Pacemaker will now track failures per operation, not just per resource. This functionality is still in active development, so details may change, but the current plan is that crm_resource --cleanup will get a new option, --operation <op> <interval>. If the option is not supplied, the behavior is the same as before (all failures for the resource will be cleaned). If the option is specified, only failures of that particular operation of that resource will be cleaned. So, pcs resource cleanup should get a similar option. The goal is to have the new options available for 7.4, but that is not certain at this point.
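To make the planned interface concrete, cleanup under that plan might look something like this (a sketch of the design described above, not final syntax -- the option layout changed later, see the following comment):

crm_resource --cleanup --resource myrsc --> cleans all failures of myrsc (same as today)
crm_resource --cleanup --resource myrsc --operation monitor 10s --> would clean only failures of myrsc's 10-second monitor

pcs resource cleanup would presumably grow an equivalent option, but its exact syntax is not settled at this point.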
The per-operation failure tracking will almost certainly make 7.4, though the new failure handling options likely won't. This will affect pcs resource cleanup and pcs resource failcount.

crm_resource --cleanup and crm_failcount will both get new options -n/--operation <op> and -I/--interval <interval>. (Our option parsing code doesn't allow two values for one option.) Operation requires a resource to be specified (cleanup allows omitting the resource, in which case all resources are cleaned up -- and now all operations for all resources). Interval requires an operation to be specified. If operation is not specified, the behavior is the same as before (in effect, it defaults to all operations on the specified resource). If operation is specified but not interval, interval defaults to 0 (NOT all intervals).

Examples:
crm_resource -C --> still cleans up all failures cluster-wide
crm_resource -C -r myrsc --> still cleans up all failures of myrsc
crm_resource -C -r myrsc -n start --> cleans up only start failures of myrsc
crm_resource -C -r myrsc -n monitor -I 10s --> cleans up failures of the 10-second monitor of myrsc
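For the failcount side, crm_failcount usage should be analogous (a sketch, assuming its existing --query/--delete/--resource options; only -n/--operation and -I/--interval are the new parts):

crm_failcount --query --resource myrsc --> still shows all failcounts of myrsc
crm_failcount --query --resource myrsc -n monitor -I 10s --> shows only the failcount of the 10-second monitor of myrsc
crm_failcount --delete --resource myrsc -n start --> clears only start failures of myrsc (interval defaults to 0)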
Created attachment 1451421: proposed fix + test
After Fix:

[ant ~] $ rpm -q pcs pcs-snmp
pcs-0.9.165-1.el7.x86_64
pcs-snmp-0.9.165-1.el7.x86_64

[ant ~] $ pcs resource
 R1	(ocf::heartbeat:Dummy):	Started bee
 R2	(ocf::heartbeat:Dummy):	Started ant

> generate some fails
[ant ~] $ crm_resource -F -r R1 -N bee
Waiting for 1 replies from the CRMd. OK
[ant ~] $ crm_resource -F -r R1 -N bee
Waiting for 1 replies from the CRMd. OK
[ant ~] $ crm_resource -F -r R1 -N ant
Waiting for 1 replies from the CRMd. OK
[ant ~] $ crm_resource -F -r R2 -N ant
Waiting for 1 replies from the CRMd. OK
[ant ~] $ crm_resource -F -r R2 -N bee
Waiting for 1 replies from the CRMd. OK

> show all failcounts
[ant ~] $ pcs resource failcount show
Failcounts for resource 'R1'
  ant: 2
  bee: 3
Failcounts for resource 'R2'
  ant: 1
  bee: 1
[ant ~] $ pcs resource failcount show --full
Failcounts for resource 'R1'
  ant: asyncmon 0ms: 2
  bee: asyncmon 0ms: 3
Failcounts for resource 'R2'
  ant: asyncmon 0ms: 1
  bee: asyncmon 0ms: 1

> show specific failcounts
[ant ~] $ pcs resource failcount show R1
Failcounts for resource 'R1'
  ant: 2
  bee: 3
[ant ~] $ pcs resource failcount show R2
Failcounts for resource 'R2'
  ant: 1
  bee: 1
[ant ~] $ pcs resource failcount show R1 ant
Failcounts for resource 'R1' on node 'ant'
  ant: 2
[ant ~] $ pcs resource failcount show R1 ant start
No failcounts for operation 'start' of resource 'R1' on node 'ant'
[ant ~] $ pcs resource failcount show R1 ant asyncmon
Failcounts for operation 'asyncmon' of resource 'R1' on node 'ant'
  ant: 2
[ant ~] $ pcs resource failcount show R1 ant asyncmon 1
No failcounts for operation 'asyncmon' with interval '1' of resource 'R1' on node 'ant'
[ant ~] $ pcs resource failcount show R1 ant asyncmon 0
Failcounts for operation 'asyncmon' with interval '0' of resource 'R1' on node 'ant'
  ant: 2

> reset failcounts
> interval
[ant ~] $ pcs resource failcount reset R1 ant asyncmon 1
Cleaned up R1 on ant
[ant ~] $ pcs resource failcount show R1 ant asyncmon 0
Failcounts for operation 'asyncmon' with interval '0' of resource 'R1' on node 'ant'
  ant: 2
[ant ~] $ pcs resource failcount reset R1 ant asyncmon 0
Cleaned up R1 on ant
Waiting for 1 replies from the CRMd. OK
[ant ~] $ pcs resource failcount show R1 ant asyncmon 0
No failcounts for operation 'asyncmon' with interval '0' of resource 'R1' on node 'ant'

> operation
[ant ~] $ pcs resource failcount reset R2 ant start
Cleaned up R2 on ant
[ant ~] $ pcs resource failcount show R2 ant asyncmon
Failcounts for operation 'asyncmon' of resource 'R2' on node 'ant'
  ant: 1
[ant ~] $ pcs resource failcount reset R2 ant asyncmon
Cleaned up R2 on ant
Waiting for 1 replies from the CRMd. OK
[ant ~] $ pcs resource failcount show R2 ant asyncmon
No failcounts for operation 'asyncmon' of resource 'R2' on node 'ant'

> node
[ant ~] $ pcs resource failcount reset R1 ant
Cleaned up R1 on ant
[ant ~] $ pcs resource failcount show R1 bee
Failcounts for resource 'R1' on node 'bee'
  bee: 3
[ant ~] $ pcs resource failcount reset R1 bee
Cleaned up R1 on bee
Waiting for 1 replies from the CRMd. OK
[ant ~] $ pcs resource failcount show R1 bee
No failcounts for resource 'R1' on node 'bee'

> resource
[ant ~] $ pcs resource failcount reset R1
Cleaned up R1 on bee
Cleaned up R1 on ant
[ant ~] $ pcs resource failcount show R2
Failcounts for resource 'R2'
  bee: 1
[ant ~] $ pcs resource failcount reset R2
Cleaned up R2 on bee
Cleaned up R2 on ant
Waiting for 1 replies from the CRMd. OK
[ant ~] $ pcs resource failcount show R2
No failcounts for resource 'R2'
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2018:3066