Bug 1508350

Summary: pcs resource cleanup is overkill in most scenarios
Product: Red Hat Enterprise Linux 7
Component: pacemaker
Version: 7.4
Reporter: Andrew Beekhof <abeekhof>
Assignee: Ken Gaillot <kgaillot>
QA Contact: cluster-qe <cluster-qe>
CC: abeekhof, cluster-maint, lmiksik, mkrcmari, mnovacek, rsteiger
Status: CLOSED ERRATA
Severity: medium
Priority: high
Target Milestone: rc
Target Release: 7.5
Hardware: Unspecified
OS: Unspecified
Fixed In Version: pacemaker-1.1.18-10.el7
Doc Type: No Doc Update
Doc Text: The corresponding pcs bz should be documented instead.
Cloned As: 1508351 (view as bug list)
Bug Blocks: 1508351, 1541161
Type: Bug
Last Closed: 2018-04-10 15:32:51 UTC

Description Andrew Beekhof 2017-11-01 09:42:22 UTC
Description of problem:

The command creates a tonne of work for the cluster (it reprobes all resources on all nodes) when, most of the time, the admin just wants to get rid of the "failed actions" section.

In large clusters, a cleanup can often cause more load-induced operation failures than it removes.

The proposal is to move the existing functionality to crm_resource --refresh (a long-standing alias for --cleanup) and create a new version of --cleanup that operates only on the failed_actions list.

Additionally, --refresh should only act on nodes where a resource's state is known, and should require --force for "do it _everywhere_ dammit".  This matters when the configuration says some resources do NOT even need to be probed on some nodes.
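
To illustrate the proposed split, a rough sketch of the intended (not current) behaviour; <rsc-name> is a placeholder:

  # Proposed --cleanup: clear only the recorded failures for the resource
  crm_resource --cleanup --resource <rsc-name>

  # Proposed --refresh: full reprobe, limited to nodes where the resource's state is known
  crm_resource --refresh --resource <rsc-name>

  # Proposed --refresh --force: reprobe everywhere, even nodes where discovery is normally skipped
  crm_resource --refresh --resource <rsc-name> --force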

Comment 2 Andrew Beekhof 2017-11-01 10:46:22 UTC
http://github.com/beekhof/pacemaker

+ e3b825a: Fix: crm_resource: Ensure we wait for all messages before exiting
+ 047a661: Fix: crm_resource: Have cleanup operate only on failures

Comment 3 Ken Gaillot 2017-11-02 14:38:43 UTC
QA: To test:

1. Configure a cluster of at least two nodes and one resource.

2. Cause the resource to fail on one node, then run "crm_resource --cleanup -r <rsc-name>". Before the fix, you will see "Waiting for ..." messages for both nodes. After the fix, you will see the message for only one node. (One way to inject a failure is sketched after step 6.)

3. Cause the resource to fail on one node again, then run "crm_resource --refresh -r <rsc-name>". Before and after the fix, you will see "Waiting for ..." messages for both nodes.

4. Configure a -INFINITY location constraint with resource-discovery=never for the resource on one node, then restart the cluster (to ensure it starts with a clean history). (An example constraint command is sketched after step 6.)

5. Run "crm_resource --cleanup -r <rsc-name>". Before the fix, you will see "Waiting for ..." messages for both nodes. After the fix, there will be no "Waiting" messages.

6. Run "crm_resource --refresh -r <rsc-name>". Before the fix, you will see "Waiting for ..." messages for both nodes. After the fix, you will see the message for only one node (if you repeat the test adding --force, you will see the message for both nodes).

Comment 6 michal novacek 2017-11-06 08:26:26 UTC
qa-ack+: comment #4 has a clear reproducer

Comment 7 Ken Gaillot 2017-12-12 17:17:34 UTC
QA: Due to the extent of the implementation changes, it would also be worthwhile to test a variety of typical cleanup scenarios, e.g. with and without --resource specified; with and without --node specified; with and without --operation and --interval specified; when there are multiple failures in the cluster, on the same node and/or on different nodes; and so on.
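
For illustration, a few of the combinations this suggests (angle brackets are placeholders; the options are crm_resource's long forms):

  crm_resource --cleanup
  crm_resource --cleanup --resource <rsc-name>
  crm_resource --cleanup --resource <rsc-name> --node <node-name>
  crm_resource --cleanup --resource <rsc-name> --operation monitor --interval 10s   # must match the failed operation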

Comment 10 Ken Gaillot 2018-01-26 16:44:37 UTC
With the latest build, crm_resource --refresh now ignores --operation and --interval. Additionally, another issue was fixed where crm_resource --cleanup specified with a clone resource name would not match any failures.
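
For example (a sketch only; <rsc-name> and <clone-name> are placeholders, the latter being a clone resource id):

  # --operation and --interval are now ignored when combined with --refresh
  crm_resource --refresh --resource <rsc-name>

  # --cleanup given a clone's name now matches failures recorded against its instances
  crm_resource --cleanup --resource <clone-name>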

Comment 16 errata-xmlrpc 2018-04-10 15:32:51 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2018:0860