Bug 1508350 - pcs resource cleanup is overkill in most scenarios

| Field | Value | Field | Value |
|---|---|---|---|
| Product: | Red Hat Enterprise Linux 7 | Reporter: | Andrew Beekhof <abeekhof> |
| Component: | pacemaker | Assignee: | Ken Gaillot <kgaillot> |
| Status: | CLOSED ERRATA | QA Contact: | cluster-qe <cluster-qe> |
| Severity: | medium | Docs Contact: | |
| Priority: | high | | |
| Version: | 7.4 | CC: | abeekhof, cluster-maint, lmiksik, mkrcmari, mnovacek, rsteiger |
| Target Milestone: | rc | | |
| Target Release: | 7.5 | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | pacemaker-1.1.18-10.el7 | Doc Type: | No Doc Update |
| Doc Text: | The corresponding pcs bz should be documented instead. | | |
| Story Points: | --- | | |
| Clone Of: | | | |
| | 1508351 (view as bug list) | Environment: | |
| Last Closed: | 2018-04-10 15:32:51 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | | | |
| Bug Blocks: | 1508351, 1541161 | | |
Description
Andrew Beekhof
2017-11-01 09:42:22 UTC
http://github.com/beekhof/pacemaker
+ e3b825a: Fix: crm_resource: Ensure we wait for all messages before exiting
+ 047a661: Fix: crm_resource: Have cleanup operate only on failures

QA: To test:

1. Configure a cluster of at least two nodes and one resource.
2. Cause the resource to fail on one node, then run "crm_resource --cleanup -r <rsc-name>". Before the fix, you will see "Waiting for ..." messages for both nodes. After the fix, you will see the message for only one node.
3. Cause the resource to fail on one node again, then run "crm_resource --refresh -r <rsc-name>". Before and after the fix, you will see "Waiting for ..." messages for both nodes.
4. Configure a -INFINITY constraint with resource-discovery=never for the resource on one node, then restart the cluster (to ensure it starts with a clean history).
5. Run "crm_resource --cleanup -r <rsc-name>". Before the fix, you will see "Waiting for ..." messages for both nodes. After the fix, there will be no "Waiting" messages.
6. Run "crm_resource --refresh -r <rsc-name>". Before the fix, you will see "Waiting for ..." messages for both nodes. After the fix, you will see the message for only one node (if you repeat the test adding --force, you will see the message for both nodes).

(A command-level sketch of these steps is appended at the end of this report.)

qa-ack+: comment #4 has a clear reproducer.

QA: Due to the extent of the implementation changes, it would also be worthwhile to test a variety of typical cleanup scenarios, e.g. with and without --resource specified, with and without --node specified, with and without --operation and --interval specified; when there are multiple failures in the cluster, on the same node and/or on different nodes; etc. (Illustrative invocations are sketched at the end of this report.)

With the latest build, crm_resource --refresh now ignores --operation and --interval. Additionally, another issue was fixed where crm_resource --cleanup specified with a clone resource name would not match any failures.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2018:0860
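
For convenience, a minimal command-level sketch of the reproducer from comment #4, assuming a two-node cluster with hypothetical node names node1/node2 and a hypothetical ocf:pacemaker:Dummy resource named dummy. Exact pcs syntax may vary by pcs version, and crm_resource --fail is only one way to inject a failure.

```
# 1. Configure a cluster of at least two nodes and one resource
pcs resource create dummy ocf:pacemaker:Dummy

# 2. Fail the resource on one node, then clean it up
crm_resource --fail -r dummy -N node1     # or kill the resource's process on node1
crm_resource --cleanup -r dummy           # fixed: "Waiting for ..." for node1 only

# 3. Fail the resource again, then refresh (probes all nodes before and after the fix)
crm_resource --fail -r dummy -N node1
crm_resource --refresh -r dummy           # "Waiting for ..." for both nodes

# 4. Ban the resource from node2 without probing it there, then restart the cluster
pcs constraint location add ban-dummy-node2 dummy node2 -INFINITY resource-discovery=never
pcs cluster stop --all && pcs cluster start --all

# 5. Cleanup with a clean history
crm_resource --cleanup -r dummy           # fixed: no "Waiting" messages

# 6. Refresh honors resource-discovery=never unless forced
crm_resource --refresh -r dummy           # fixed: message for node1 only
crm_resource --refresh -r dummy --force   # messages for both nodes
```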
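A non-exhaustive set of illustrative invocations for the additional cleanup scenarios mentioned above, using the same hypothetical names (dummy-clone and the 10s monitor interval are likewise assumptions, not taken from this bug):

```
crm_resource --cleanup                                            # no --resource: every failed resource
crm_resource --cleanup -r dummy                                   # one resource, any node with failures
crm_resource --cleanup -r dummy --node node1                      # one resource, one node
crm_resource --cleanup -r dummy --operation monitor --interval 10s  # only failures of that operation
crm_resource --cleanup -r dummy-clone                             # clone name now matches instance failures
crm_resource --refresh -r dummy                                   # refresh ignores --operation/--interval
```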