Bug 1220512
| Summary: | pcs resource cleanup improvements | | |
|---|---|---|---|
| Product: | Red Hat Enterprise Linux 7 | Reporter: | David Vossel <dvossel> |
| Component: | pcs | Assignee: | Tomas Jelinek <tojeline> |
| Status: | CLOSED ERRATA | QA Contact: | cluster-qe <cluster-qe> |
| Severity: | unspecified | Docs Contact: | |
| Priority: | medium | | |
| Version: | 7.2 | CC: | abeekhof, cchen, cluster-maint, fdinitto, idevat, mlisik, rsteiger, tojeline |
| Target Milestone: | rc | | |
| Target Release: | --- | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | pcs-0.9.151-1.el7 | Doc Type: | Bug Fix |
| Story Points: | --- | | |
| Clone Of: | | Environment: | |
| Last Closed: | 2016-11-03 20:54:04 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Attachments: | | | |

Doc Text:
Cause: The user runs the 'pcs resource cleanup' command in a cluster with a high number of resources and/or nodes.
Consequence: The cluster may become less responsive for a while.
Fix: Display a warning describing the negative impact of the command where appropriate, and add options to the command for specifying the resource and/or node to clean up.
Result: The user is informed about the negative impact and has options to reduce it while still being able to perform the desired operation.
Description

David Vossel 2015-05-11 17:47:17 UTC

Created attachment 1130861 [details]
proposed fix
Test:
Add nodes and/or resources to the cluster so that the number of resources times the number of nodes exceeds 100.
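That threshold amounts to a simple product check. A minimal sketch in Python, using illustrative names (`OPERATION_THRESHOLD`, `cleanup_needs_warning`) rather than pcs's actual internals:

```python
# Minimal sketch of the threshold guard; names are illustrative,
# not pcs internals.
OPERATION_THRESHOLD = 100  # max resource x node operations without a warning

def cleanup_needs_warning(resource_count, node_count,
                          resource=None, node=None, force=False):
    """Return True when an unscoped cleanup should be refused.

    A full cleanup costs roughly one operation per resource per node,
    so the guard fires only when no resource or node is specified,
    --force is not given, and the product exceeds the threshold.
    """
    if force or resource is not None or node is not None:
        return False
    return resource_count * node_count > OPERATION_THRESHOLD
```

With 53 resources on 2 nodes, as in the session below, the product is 106 and the guard triggers; with 3 resources it is only 6 and cleanup proceeds without a warning.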
[root@rh72-node1:~]# pcs status | grep configured
2 nodes and 53 resources configured
[root@rh72-node1:~]# pcs resource cleanup
Error: Cleaning up all resources on all nodes will execute more than 100 operations in the cluster, which may negatively impact the responsiveness of the cluster. Consider specifying resource and/or node, use --force to override
[root@rh72-node1:~]# echo $?
1
[root@rh72-node1:~]# pcs resource cleanup dummy
Waiting for 2 replies from the CRMd.. OK
Cleaning up dummy on rh72-node1, removing fail-count-dummy
Cleaning up dummy on rh72-node2, removing fail-count-dummy
[root@rh72-node1:~]# echo $?
0
[root@rh72-node1:~]# pcs resource cleanup --node rh72-node1
Waiting for 1 replies from the CRMd. OK
[root@rh72-node1:~]# echo $?
0
[root@rh72-node1:~]# pcs resource cleanup --node rh72-node1 dummy
Waiting for 1 replies from the CRMd. OK
Cleaning up dummy on rh72-node1, removing fail-count-dummy
[root@rh72-node1:~]# echo $?
0
[root@rh72-node1:~]# pcs resource cleanup --force
Waiting for 1 replies from the CRMd. OK
[root@rh72-node1:~]# echo $?
0
[root@rh72-node1:~]# pcs status | grep configured
2 nodes and 3 resources configured
[root@rh72-node1:~]# pcs resource cleanup
Waiting for 1 replies from the CRMd. OK
This bug was accidentally moved from POST to MODIFIED via an error in automation, please see mmccune with any questions.

*** Bug 1323901 has been marked as a duplicate of this bug. ***

Setup:
[vm-rhel72-1 ~] $ for i in {a..b}; do for j in {a..z}; do pcs resource create ${i}${j} Dummy; done ;done
[vm-rhel72-1 ~] $ pcs status | grep configured
2 nodes and 52 resources configured
Before fix:
[vm-rhel72-1 ~] $ rpm -q pcs
pcs-0.9.143-15.el7.x86_64
[vm-rhel72-1 ~] $ pcs resource cleanup
Waiting for 1 replies from the CRMd. OK
[vm-rhel72-1 ~] $ pcs resource cleanup --node vm-rhel72-1 aa
Waiting for 2 replies from the CRMd.. OK
Cleaning up aa on vm-rhel72-1, removing fail-count-aa
Cleaning up aa on vm-rhel72-3, removing fail-count-aa
After fix:
[vm-rhel72-1 ~] $ rpm -q pcs
pcs-0.9.151-1.el7.x86_64
[vm-rhel72-1 ~] $ pcs resource cleanup
Error: Cleaning up all resources on all nodes will execute more than 100 operations in the cluster, which may negatively impact the responsiveness of the cluster. Consider specifying resource and/or node, use --force to override
[vm-rhel72-1 ~] $ echo $?
1
[vm-rhel72-1 ~] $ pcs resource cleanup --node vm-rhel72-1 aa
Waiting for 1 replies from the CRMd. OK
Cleaning up aa on vm-rhel72-1, removing fail-count-aa
[vm-rhel72-1 ~] $ for i in {a..b}; do for j in {a..z}; do pcs resource delete ${i}${j} Dummy; done ;done
[vm-rhel72-1 ~] $ for i in {a..z}; do pcs resource create ${i} Dummy; done
[vm-rhel72-1 ~] $ pcs status | grep configured
2 nodes and 52 resources configured
[vm-rhel72-1 ~] $ pcs resource cleanup
Waiting for 1 replies from the CRMd. OK
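The scoped forms verified above map onto Pacemaker's crm_resource cleanup options. A rough sketch of that mapping, assuming a hypothetical wrapper `run_scoped_cleanup` rather than the actual pcs code path:

```python
import subprocess

def run_scoped_cleanup(resource=None, node=None):
    # crm_resource accepts --cleanup together with optional
    # --resource/--node arguments that narrow the scope; the pcs
    # options demonstrated above pass that scoping through.
    cmd = ["crm_resource", "--cleanup"]
    if resource is not None:
        cmd += ["--resource", resource]
    if node is not None:
        cmd += ["--node", node]
    return subprocess.run(cmd, capture_output=True, text=True)

# Example: clean up only resource 'aa' on node vm-rhel72-1
# run_scoped_cleanup(resource="aa", node="vm-rhel72-1")
```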
*** Bug 1366514 has been marked as a duplicate of this bug. ***

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHSA-2016-2596.html