Bug 1758969
| Summary: | 'pcs resource description' could lead users to misunderstand 'cleanup' and 'refresh' | |||
|---|---|---|---|---|
| Product: | Red Hat Enterprise Linux 7 | Reporter: | Seunghwan Jung <jseunghw> | |
| Component: | pacemaker | Assignee: | Ken Gaillot <kgaillot> | |
| Status: | CLOSED ERRATA | QA Contact: | cluster-qe <cluster-qe> | |
| Severity: | medium | Docs Contact: | ||
| Priority: | unspecified | |||
| Version: | 7.6 | CC: | cluster-maint, msmazova, nwahl, ondrej-redhat-developer, phagara, sbradley | |
| Target Milestone: | rc | |||
| Target Release: | 7.9 | |||
| Hardware: | All | |||
| OS: | Linux | |||
| Whiteboard: | ||||
| Fixed In Version: | pacemaker-1.1.23-1.el7 | Doc Type: | No Doc Update | |
| Doc Text: | The change will be self-documenting | Story Points: | --- | |
| Clone Of: | ||||
| : | 1759269 | Environment: | ||
| Last Closed: | 2020-09-29 20:03:57 UTC | Type: | Bug | |
| Regression: | --- | Mount Type: | --- | |
| Documentation: | --- | CRM: | ||
| Verified Versions: | Category: | --- | ||
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | ||
| Cloudforms Team: | --- | Target Upstream Version: | ||
| Embargoed: | ||||
| Bug Depends On: | ||||
| Bug Blocks: | 1759269, 1805082 | |||
Hi,

Thank you Hwanii for creating this bug report for us. I would like to add some information here.

In short: This looks to me like either a small documentation deficiency (which can easily be fixed so people don't wonder about what is happening), or something (--force for `crm_resource`) might be missing from `pcs` that should be there when a resource_id is specified.

In long: The discrepancy here is between the behaviour of `crm_resource` (which is part of the `pacemaker` component, I guess) and `pcs` (which seems to have the `pcs` component here in BZ). `crm_resource --help` states the following for cleanup/refresh:

...
-C, --cleanup    If resource has any past failures, clear its history and fail count.
                 Optionally filtered by --resource, --node, --operation, and --interval (otherwise all).
                 --operation and --interval apply to fail counts, but entire history is always cleared,
                 to allow current state to be rechecked.
-R, --refresh    Delete resource's history (including failures) so its current state is rechecked.
                 Optionally filtered by --resource and --node (otherwise all).
     ******-> Unless --force is specified, resource's group or clone (if any) will also be refreshed. <-****
...

**** - this part (about `--force`) I believe applies also to `--cleanup`, based on testing it out, so ideally it should be mentioned in `--cleanup` too, but neither `crm_resource --help` nor `man crm_resource` mentions this.

==

As `pcs resource cleanup/refresh resource_id` calls `crm_resource --cleanup/--refresh resource_id`, it should ideally mention the same information about refresh of all resources in the group, unless the intention of `pcs resource cleanup/refresh resource_id` was to use `--force`, which would not cause refresh/cleanup of all resources in the resource group.

-- Ondrej
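To make the difference concrete, a minimal sketch of the behaviour described above (the resource 'rsc1' and group 'grp' are hypothetical names, and the commands are illustrative rather than re-tested here):

> # 'rsc1' is assumed to be a member of resource group 'grp'
> crm_resource --cleanup --resource rsc1           # clean-up applies to all members of 'grp'
> crm_resource --cleanup --resource rsc1 --force   # clean-up applies to 'rsc1' only
> pcs resource cleanup rsc1                        # per the comment above, pcs calls crm_resource without --force, so all of 'grp' is cleaned up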
Hi all,

I do think this is a man page documentation issue, for both crm_resource (pacemaker) and pcs. We can use this BZ for pacemaker, and I'll clone it for pcs.

The text I am planning to go with is: "If the named resource is part of a group, or one numbered instance of a clone or bundled resource, the clean-up applies to the whole collective resource unless --force is given."

If you feel that's not ideal, let me know -- there's still time to change it.

Fixed upstream as of commit cceb7841 in the master branch (which will be in RHEL 8.2 via rebase), backported as commit d71d4d9 in the 1.1 branch (for RHEL 7).

Thank you for the changed text, Ken! (In other words: looks good to me.)

qa_ack+, help text clarification -- see description and comment#5

The latest build uses the word "refresh" instead of "clean-up" in the refresh help.

before fix
------------

> [root@virt-133 ~]# rpm -q pacemaker
> pacemaker-1.1.21-4.el7.x86_64

The current man/help text for 'crm_resource --cleanup, --refresh' states the following:

> [root@virt-133 ~]# man crm_resource
> [...]
> Commands:
> [...]
> -C, --cleanup
> If resource has any past failures, clear its history and fail count. Optionally filtered by
> --resource, --node, --operation, and --interval (otherwise all). --operation and --interval apply to
> fail counts, but entire history is always cleared, to allow current state to be rechecked.
> -R, --refresh
> Delete resource's history (including failures) so its current state is rechecked. Optionally fil‐
> tered by --resource and --node (otherwise all). Unless --force is specified, resource's group or
> clone (if any) will also be refreshed.
>
> [root@virt-133 ~]# crm_resource --help
> crm_resource - Perform tasks related to cluster resources.
> Allows resources to be queried (definition and location), modified, and moved around the cluster.
> Usage: crm_resource (query|command) [options]
> [...]
> Commands:
> [...]
> -C, --cleanup If resource has any past failures, clear its history and fail count.
> Optionally filtered by --resource, --node, --operation, and --interval (otherwise all).
> --operation and --interval apply to fail counts, but entire history is always cleared,
> to allow current state to be rechecked.
> -R, --refresh Delete resource's history (including failures) so its current state is rechecked.
> Optionally filtered by --resource and --node (otherwise all).
> Unless --force is specified, resource's group or clone (if any) will also be refreshed.

When a named resource that is cloned or is part of a resource group is cleaned up or refreshed, the whole clone or resource group is also cleaned up / refreshed unless --force is specified. This behavior is stated in the 'refresh' part of the 'crm_resource' man/help text, but it is not specified in the 'cleanup' part, which might confuse users.

after fix
------------

> [root@virt-039 ~]# rpm -q pacemaker
> pacemaker-1.1.23-1.el7.x86_64

The man/help texts have been updated, as mentioned in comment#5 and comment#13.

> [root@virt-039 ~]# man crm_resource
> [...]
> Commands:
> [...]
> -C, --cleanup
> If resource has any past failures, clear its history and fail count. Optionally filtered by
> --resource, --node, --operation, and --interval (otherwise all). --operation and --interval apply to
> fail counts, but entire history is always cleared, to allow current state to be rechecked. If the
> named resource is part of a group, or one numbered instance of a clone or bundled resource, the
> clean-up applies to the whole collective resource unless --force is given.
> -R, --refresh
> Delete resource's history (including failures) so its current state is rechecked. Optionally fil‐
> tered by --resource and --node (otherwise all). If the named resource is part of a group, or one num‐
> bered instance of a clone or bundled resource, the refresh applies to the whole collective resource
> unless --force is given.
>
> [root@virt-039 ~]# crm_resource --help
> crm_resource - Perform tasks related to cluster resources.
> Allows resources to be queried (definition and location), modified, and moved around the cluster.
> Usage: crm_resource (query|command) [options]
> [...]
> Commands:
> [...]
> -C, --cleanup If resource has any past failures, clear its history and fail count.
> Optionally filtered by --resource, --node, --operation, and --interval (otherwise all).
> --operation and --interval apply to fail counts, but entire history is always cleared,
> to allow current state to be rechecked. If the named resource is part of a group, or
> one numbered instance of a clone or bundled resource, the clean-up applies to the
> whole collective resource unless --force is given.
> -R, --refresh Delete resource's history (including failures) so its current state is rechecked.
> Optionally filtered by --resource and --node (otherwise all). If the named resource is
> part of a group, or one numbered instance of a clone or bundled resource, the refresh
> applies to the whole collective resource unless --force is given.

Marking verified in pacemaker-1.1.23-1.el7.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.
For information on the advisory (pacemaker bug fix and enhancement update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2020:3951
Description of problem:

When a pacemaker resource is cleaned up or refreshed, resources in the same resource group can also be cleaned up, depending on constraints set by resource grouping. This could lead customers to confusion, so the description should be clear about it. Can we add something like this to the Usage?

"When a resource is cleaned up, resources in the same resource group can also be cleaned up depending on constraints set by resource grouping."

Version-Release number of selected component (if applicable):

Confirmed with the following versions:
- pcs-0.9.165-6.el7.x86_64
- pacemaker-cluster-libs-1.1.19-8.el7.x86_64
- pacemaker-cli-1.1.19-8.el7.x86_64
- pacemaker-1.1.19-8.el7.x86_64
- pacemaker-libs-1.1.19-8.el7.x86_64

How reproducible:

Run 'pcs resource description' and see 'cleanup' and 'refresh'.

Steps to Reproduce:
1. Run 'pcs resource description'.
2. See 'cleanup' and 'refresh'.

Actual results:

'pcs resource description' shows:

cleanup [<resource id>] [--node <node>]
    Make the cluster forget failed operations from history of the resource and re-detect its current state. This can be useful to purge knowledge of past failures that have since been resolved. If a resource id is not specified then all resources / stonith devices will be cleaned up. If a node is not specified then resources / stonith devices on all nodes will be cleaned up.

refresh [<resource id>] [--node <node>] [--full]
    Make the cluster forget the complete operation history (including failures) of the resource and re-detect its current state. If you are interested in forgetting failed operations only, use the 'pcs resource cleanup' command. If a resource id is not specified then all resources / stonith devices will be refreshed. If a node is not specified then resources / stonith devices on all nodes will be refreshed. Use --full to refresh a resource on all nodes, otherwise only nodes where the resource's state is known will be considered.

Expected results:

'pcs resource description' shows:

cleanup [<resource id>] [--node <node>]
    Make the cluster forget failed operations from history of the resource and re-detect its current state. This can be useful to purge knowledge of past failures that have since been resolved. If a resource id is not specified then all resources / stonith devices will be cleaned up. If a node is not specified then resources / stonith devices on all nodes will be cleaned up. When a resource is cleaned up, resources in the same resource group can also be cleaned up depending on constraints set by resource grouping.

refresh [<resource id>] [--node <node>] [--full]
    Make the cluster forget the complete operation history (including failures) of the resource and re-detect its current state. If you are interested in forgetting failed operations only, use the 'pcs resource cleanup' command. If a resource id is not specified then all resources / stonith devices will be refreshed. If a node is not specified then resources / stonith devices on all nodes will be refreshed. Use --full to refresh a resource on all nodes, otherwise only nodes where the resource's state is known will be considered. When a resource is refreshed, resources in the same resource group can also be refreshed depending on constraints set by resource grouping.

Additional info:
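For illustration only, a minimal sketch of the behaviour described above, assuming a hypothetical two-member group 'grp' built from Dummy resources 'rsc1' and 'rsc2' (names and agent are examples, not taken from the original report):

> pcs resource create rsc1 ocf:pacemaker:Dummy --group grp
> pcs resource create rsc2 ocf:pacemaker:Dummy --group grp
> # Cleaning up one member is expected to apply to the whole group,
> # because pcs does not pass --force to the underlying crm_resource call:
> pcs resource cleanup rsc1
> pcs status   # per this bug, rsc1 and rsc2 are both affected, not just rsc1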