Bug 1758969 - 'pcs resource description' could lead users to misunderstand 'cleanup' and 'refresh'
Summary: 'pcs resource description' could lead users to misunderstand 'cleanup' and 'refresh'
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: pacemaker
Version: 7.6
Hardware: All
OS: Linux
Priority: unspecified
Severity: medium
Target Milestone: rc
Target Release: 7.9
Assignee: Ken Gaillot
QA Contact: cluster-qe@redhat.com
URL:
Whiteboard:
Depends On:
Blocks: 1759269 1805082
 
Reported: 2019-10-07 04:34 UTC by Seunghwan Jung
Modified: 2023-12-15 16:49 UTC
CC List: 6 users

Fixed In Version: pacemaker-1.1.23-1.el7
Doc Type: No Doc Update
Doc Text:
The change will be self-documenting
Clone Of:
Clones: 1759269
Environment:
Last Closed: 2020-09-29 20:03:57 UTC
Target Upstream Version:
Embargoed:




Links
System                             ID              Private  Priority  Status  Summary  Last Updated
Red Hat Knowledge Base (Solution)  4507611         0        None      None    None     2019-10-16 20:25:02 UTC
Red Hat Knowledge Base (Solution)  5357951         0        None      None    None     2020-08-29 06:54:34 UTC
Red Hat Product Errata             RHEA-2020:3951  0        None      None    None     2020-09-29 20:04:18 UTC

Description Seunghwan Jung 2019-10-07 04:34:52 UTC
Description of problem:

When a pacemaker resource is cleaned up or refreshed, resources in the same
resource group can also be cleaned up depending on constraints set by 
resource grouping.

This could confuse customers, so the 'description' should be clear about it.


Can we add something like this to the Usage?

   When a resource is cleaned up, resources in the same resource group can
   also be cleaned up depending on constraints set by resource grouping.
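
For illustration, a rough sketch of that behaviour with made-up resource and
group names (not taken from an actual cluster):

    # a hypothetical group with two members
    pcs resource group add grp_test rsc_a rsc_b

    # cleaning up only one member of the group...
    pcs resource cleanup rsc_a

    # ...also makes the cluster forget failures and re-detect the state of
    # rsc_b, because the operation applies to the whole group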

Version-Release number of selected component (if applicable):
 Confirmed with the following versions:
  - pcs-0.9.165-6.el7.x86_64
  - pacemaker-cluster-libs-1.1.19-8.el7.x86_64
  - pacemaker-cli-1.1.19-8.el7.x86_64
  - pacemaker-1.1.19-8.el7.x86_64
  - pacemaker-libs-1.1.19-8.el7.x86_64


How reproducible:

Run 'pcs resource description' and look at the descriptions of 'cleanup' and 'refresh'.
  

Steps to Reproduce:
1. Run 'pcs resource description'
2. Look at the descriptions of 'cleanup' and 'refresh'

Actual results:

'pcs resource description' shows:

    cleanup [<resource id>] [--node <node>]
        Make the cluster forget failed operations from history of the resource
        and re-detect its current state. This can be useful to purge knowledge
        of past failures that have since been resolved. If a resource id is not
        specified then all resources / stonith devices will be cleaned up. If a
        node is not specified then resources / stonith devices on all nodes will
        be cleaned up.

    refresh [<resource id>] [--node <node>] [--full]
        Make the cluster forget the complete operation history (including
        failures) of the resource and re-detect its current state. If you are
        interested in forgetting failed operations only, use the 'pcs resource
        cleanup' command. If a resource id is not specified then all resources
        / stonith devices will be refreshed. If a node is not specified then
        resources / stonith devices on all nodes will be refreshed. Use --full
        to refresh a resource on all nodes, otherwise only nodes where the
        resource's state is known will be considered.

Expected results:

'pcs resource description' shows:

    cleanup [<resource id>] [--node <node>]
        Make the cluster forget failed operations from history of the resource
        and re-detect its current state. This can be useful to purge knowledge
        of past failures that have since been resolved. If a resource id is not
        specified then all resources / stonith devices will be cleaned up. If a
        node is not specified then resources / stonith devices on all nodes will
        be cleaned up.

        When a resource is cleaned up, resources in the same resource group can
        also be cleaned up depending on constraints set by resource grouping.

    refresh [<resource id>] [--node <node>] [--full]
        Make the cluster forget the complete operation history (including
        failures) of the resource and re-detect its current state. If you are
        interested in forgetting failed operations only, use the 'pcs resource
        cleanup' command. If a resource id is not specified then all resources
        / stonith devices will be refreshed. If a node is not specified then
        resources / stonith devices on all nodes will be refreshed. Use --full
        to refresh a resource on all nodes, otherwise only nodes where the
        resource's state is known will be considered.

        When a resource is refreshed, resources in the same resource group can
        also be refreshed depending on constraints set by resource grouping.
  
Additional info:

Comment 2 Ondrej Faměra 2019-10-07 06:46:46 UTC
Hi,
Thank you Hwanii for creating this bug report for us.

I would like to add some information here:

In short: This looks to me like either a small documentation deficiency (which can be easily fixed so people don't wonder what is happening), or something missing from `pcs` (passing `--force` to `crm_resource`) that should be there when resource_id is specified.

In long:
The discrepancy here is between the behaviour of `crm_resource` (which is part of the `pacemaker` component, I guess)
and `pcs` (which seems to be under the `pcs` component here in BZ).

`crm_resource --help` states the following for cleanup/refresh:
...
 -C, --cleanup			If resource has any past failures, clear its history and fail count.
				Optionally filtered by --resource, --node, --operation, and --interval (otherwise all).
				--operation and --interval apply to fail counts, but entire history is always cleared,
				to allow current state to be rechecked.

 -R, --refresh			Delete resource's history (including failures) so its current state is rechecked.
				Optionally filtered by --resource and --node (otherwise all).
	******->		Unless --force is specified, resource's group or clone (if any) will also be refreshed.  <-****
...
**** - this part (about `--force`) I believe also applies to `--cleanup`, based on testing it out, so it should ideally be mentioned for `--cleanup` too, but neither `crm_resource --help` nor `man crm_resource` mentions this.
==
Since `pcs resource cleanup/refresh resource_id` calls `crm_resource --cleanup/--refresh resource_id`, it should ideally mention the same information about refreshing all resources in the group, unless the intention of `pcs resource cleanup/refresh resource_id` was to use `--force`, which would not cause a refresh/cleanup of all resources in the resource group.
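
To illustrate the difference at the `crm_resource` level (the resource name is
made up; this is how I understand the current behaviour):

    # applies to the whole group/clone that rsc_a belongs to
    crm_resource --cleanup --resource rsc_a

    # with --force, only rsc_a itself is cleaned up
    crm_resource --cleanup --resource rsc_a --force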

--
Ondrej

Comment 3 Ken Gaillot 2019-10-07 17:39:01 UTC
Hi all,

I do think this is a man page documentation issue, for both crm_resource (pacemaker) and pcs.

We can use this BZ for pacemaker, and I'll clone for pcs.

Comment 5 Ken Gaillot 2019-10-11 19:44:20 UTC
The text I am planning to go with is: "If the named resource is part of a group, or one numbered instance of a clone or bundled resource, the clean-up applies to the whole collective resource unless --force is given."

If you feel that's not ideal, let me know -- there's still time to change it.

Comment 6 Ken Gaillot 2019-10-14 22:12:55 UTC
Fixed upstream as of commit cceb7841 in the master branch (which will be in RHEL 8.2 via rebase), backported as commit d71d4d9 in the 1.1 branch (for RHEL 7).

Comment 7 Ondrej Faměra 2019-10-15 00:00:21 UTC
Thank you for the changed text Ken! (in other words: looks good to me)

Comment 8 Patrik Hagara 2020-02-21 17:49:01 UTC
qa_ack+, help text clarification -- see description and comment#5

Comment 13 Ken Gaillot 2020-05-28 19:15:16 UTC
The latest build uses the word "refresh" instead of "clean-up" in the refresh help

Comment 14 Markéta Smazová 2020-06-24 19:00:10 UTC
before fix
------------

>   [root@virt-133 ~]# rpm -q pacemaker
>   pacemaker-1.1.21-4.el7.x86_64

The current man/help text for 'crm_resource --cleanup, --refresh' states the following:

>   [root@virt-133 ~]# man crm_resource

>   [...]
>   Commands:
>   [...]
>   -C, --cleanup
>         If resource  has  any  past  failures,  clear  its  history  and fail count.  Optionally filtered by
>         --resource, --node, --operation, and --interval (otherwise all).  --operation and --interval apply to
>         fail counts, but entire history is always cleared, to allow current state to be rechecked.

>   -R, --refresh
>         Delete  resource's  history  (including failures) so its current state is rechecked.  Optionally fil‐
>         tered by --resource and --node (otherwise all).  Unless --force is  specified,  resource's  group  or
>         clone (if any) will also be refreshed.


>   [root@virt-133 ~]# crm_resource --help

>   crm_resource - Perform tasks related to cluster resources.
>   Allows resources to be queried (definition and location), modified, and moved around the cluster.

>   Usage: crm_resource (query|command) [options]

>   [...]
>   Commands:
>   [...]
>   -C, --cleanup	If resource has any past failures, clear its history and fail count.
>		        Optionally filtered by --resource, --node, --operation, and --interval (otherwise all).
>		        --operation and --interval apply to fail counts, but entire history is always cleared,
>		        to allow current state to be rechecked.

>   -R, --refresh	Delete resource's history (including failures) so its current state is rechecked.
>		        Optionally filtered by --resource and --node (otherwise all).
>		        Unless --force is specified, resource's group or clone (if any) will also be refreshed.


When a resource is cleaned up or refreshed, a named resource that is cloned or is part of a resource
group can also be cleaned up/refreshed unless --force is specified. This behavior is stated in the
'refresh' part of the 'crm_resource' man/help text, but it is not specified in the 'cleanup' part,
which might confuse users.


after fix
------------

>   [root@virt-039 ~]# rpm -q pacemaker
>   pacemaker-1.1.23-1.el7.x86_64

The man/help texts have been updated, as mentioned in comment#5 and comment#13.

>   [root@virt-039 ~]# man crm_resource

>   [...]
>   Commands:
>   [...]
>   -C, --cleanup
>         If  resource  has  any  past  failures,  clear  its  history  and fail count.  Optionally filtered by
>         --resource, --node, --operation, and --interval (otherwise all).  --operation and --interval apply to
>         fail  counts,  but  entire  history is always cleared, to allow current state to be rechecked. If the
>         named resource is part of a group, or one numbered instance of  a  clone  or  bundled  resource,  the
>         clean-up applies to the whole collective resource unless --force is given.

>   -R, --refresh
>         Delete  resource's  history  (including failures) so its current state is rechecked.  Optionally fil‐
>         tered by --resource and --node (otherwise all). If the named resource is part of a group, or one num‐
>         bered instance of a clone or bundled resource, the refresh applies to the whole collective resource 
>         unless --force is given.
>                 
>   [root@virt-039 ~]# crm_resource --help

>   crm_resource - Perform tasks related to cluster resources.
>   Allows resources to be queried (definition and location), modified, and moved around the cluster.

>   Usage: crm_resource (query|command) [options]

>   [...]
>   Commands:
>   [...]
>   -C, --cleanup	If resource has any past failures, clear its history and fail count.
>		        Optionally filtered by --resource, --node, --operation, and --interval (otherwise all).
>		        --operation and --interval apply to fail counts, but entire history is always cleared,
>		        to allow current state to be rechecked. If the named resource is part of a group, or
>		        one numbered instance of a clone or bundled resource, the clean-up applies to the
>		        whole collective resource unless --force is given.
>   -R, --refresh	Delete resource's history (including failures) so its current state is rechecked.
>		        Optionally filtered by --resource and --node (otherwise all). If the named resource is
>		        part of a group, or one numbered instance of a clone or bundled resource, the refresh
>                   applies to the whole collective resource unless --force is given.


marking verified in pacemaker-1.1.23-1.el7

Comment 16 errata-xmlrpc 2020-09-29 20:03:57 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (pacemaker bug fix and enhancement update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2020:3951

