Bug 1508350 - pcs resource cleanup is overkill in most scenarios
Summary: pcs resource cleanup is overkill in most scenarios
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: pacemaker
Version: 7.4
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: medium
Target Milestone: rc
Target Release: 7.5
Assignee: Ken Gaillot
QA Contact: cluster-qe@redhat.com
URL:
Whiteboard:
Depends On:
Blocks: 1508351 1541161
 
Reported: 2017-11-01 09:42 UTC by Andrew Beekhof
Modified: 2018-08-07 07:58 UTC
CC List: 6 users

Fixed In Version: pacemaker-1.1.18-10.el7
Doc Type: No Doc Update
Doc Text:
The corresponding pcs bz should be documented instead.
Clone Of:
Cloned to: 1508351
Environment:
Last Closed: 2018-04-10 15:32:51 UTC
Target Upstream Version:
Embargoed:


Attachments: none


Links
System ID                               Private  Priority  Status  Summary  Last Updated
Red Hat Bugzilla 1612869                1        None      None    None     2021-01-20 06:05:38 UTC
Red Hat Product Errata RHEA-2018:0860   0        None      None    None     2018-04-10 15:34:11 UTC

Internal Links: 1612869

Description Andrew Beekhof 2017-11-01 09:42:22 UTC
Description of problem:

The command creates a huge amount of work for the cluster (a reprobe of all resources on all nodes) when, most of the time, the admin just wants to get rid of the "failed actions" section.

In large clusters, a cleanup can often cause more load-induced operation failures than it removes.

The proposal is to move the existing functionality to crm_resource --refresh (a long-standing alias for --cleanup) and create a new version of --cleanup that operates only on the failed_actions list.


Additionally, --refresh should act only on nodes where the resource's state is known, and should require --force to mean "do it everywhere". This matters when the configuration says some resources do not even need to be probed on some nodes.
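
To make the proposal concrete, the split would roughly look like this (the resource name "my-rsc" is a placeholder; this sketches the proposed behaviour, not the final implementation):

    # proposed --cleanup: clear only the recorded failures for the resource
    crm_resource --cleanup -r my-rsc

    # proposed --refresh: full reprobe (today's --cleanup behaviour), limited to
    # nodes where the resource's state is known
    crm_resource --refresh -r my-rsc

    # proposed --refresh --force: reprobe on every node, even where the
    # configuration says the resource need not be probed
    crm_resource --refresh -r my-rsc --force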

Comment 2 Andrew Beekhof 2017-11-01 10:46:22 UTC
http://github.com/beekhof/pacemaker

+ e3b825a: Fix: crm_resource: Ensure we wait for all messages before exiting
+ 047a661: Fix: crm_resource: Have cleanup operate only on failures

Comment 3 Ken Gaillot 2017-11-02 14:38:43 UTC
QA: To test (a condensed command sketch follows these steps):

1. Configure a cluster of at least two nodes and one resource.

2. Cause the resource to fail on one node, then run "crm_resource --cleanup -r <rsc-name>". Before the fix, you will see "Waiting for ..." messages for both nodes. After the fix, you will see the message for only one node.

3. Cause the resource to fail on one node again, then run "crm_resource --refresh -r <rsc-name>". Before and after the fix, you will see "Waiting for ..." messages for both nodes.

4. Configure a -INFINITY constraint with resource-discovery=never for the resource on one node, then restart the cluster (to ensure it starts with a clean history).

5. Run "crm_resource --cleanup -r <rsc-name>". Before the fix, you will "Waiting for ..." messages for both nodes. After the fix, there will be no "Waiting" messages.

6. Run "crm_resource --refresh -r <rsc-name>". Before the fix, you will "Waiting for ..." messages for both nodes. After the fix, you will see the message for only one node (if you repeat the test adding --force, you will see the message for both nodes).

Comment 6 michal novacek 2017-11-06 08:26:26 UTC
qa-ack+: comment #4 has a clear reproducer

Comment 7 Ken Gaillot 2017-12-12 17:17:34 UTC
QA: Due to the extent of the implementation changes, it would also be worthwhile to test a variety of typical cleanup scenarios. E.g. with and without --resource specified, with and without --node specified, with and without --operation and --interval specified; when there are multiple failures in the cluster, on the same node and/or on different nodes; etc.
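
For example, the matrix of scenarios could include invocations like these (resource, node, and interval values are placeholders):

    crm_resource --cleanup
    crm_resource --cleanup --resource my-rsc
    crm_resource --cleanup --resource my-rsc --node node1
    crm_resource --cleanup --resource my-rsc --operation monitor --interval 10s

each run when a single failure exists, when several failures exist on one node, and when failures exist on several nodes.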

Comment 10 Ken Gaillot 2018-01-26 16:44:37 UTC
With the latest build, crm_resource --refresh now ignores --operation and --interval. Additionally, another issue was fixed where crm_resource --cleanup specified with a clone resource name would not match any failures.
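
In other words (resource and clone names are placeholders), something like:

    crm_resource --cleanup -r my-clone        # now also matches failures of the clone's instances
    crm_resource --refresh -r my-rsc --operation monitor --interval 10s
                                              # --operation/--interval are now ignored by --refresh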

Comment 16 errata-xmlrpc 2018-04-10 15:32:51 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2018:0860

