Bug 1301204 - Some stonith resource changes require "pcs resource"
Summary: Some stonith resource changes require "pcs resource"
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 9
Classification: Red Hat
Component: pcs
Version: 9.0
Hardware: x86_64
OS: Linux
Priority: medium
Severity: medium
Target Milestone: rc
Target Release: 9.1
Assignee: Ondrej Mular
QA Contact: cluster-qe@redhat.com
Docs Contact: Steven J. Levine
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2016-01-22 21:08 UTC by Ken Gaillot
Modified: 2022-11-22 18:05 UTC
CC List: 16 users

Fixed In Version: pcs-0.11.2-1.el9
Doc Type: Bug Fix
Doc Text:
.`pcs` now distinguishes between resources and stonith resources
Previously, some `pcs` commands did not distinguish between resources and stonith resources. This allowed users to use `pcs resource` sub-commands for stonith resources, and to use `pcs stonith` sub-commands for resources that are not stonith resources. This could lead to user confusion or resource misconfiguration. With this update, `pcs` displays a warning when there is a resource type mismatch.
Clone Of: 1240330
Environment:
Last Closed: 2022-11-15 09:48:38 UTC
Type: Bug
Target Upstream Version:
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Red Hat Issue Tracker CLUSTERQE-5742 0 None None None 2022-05-23 16:03:28 UTC
Red Hat Issue Tracker RHELPLAN-111997 0 None None None 2022-02-11 09:51:32 UTC
Red Hat Knowledge Base (Solution) 6987177 0 None None None 2022-11-22 18:05:47 UTC
Red Hat Product Errata RHSA-2022:7935 0 None None None 2022-11-15 09:48:58 UTC

Description Ken Gaillot 2016-01-22 21:08:04 UTC
+++ This bug was initially created as a clone of Bug #1240330 +++

--- Additional comment from David Vossel on 2015-07-06 12:15:04 EDT ---

Also, I'm surprised it's possible to disable a stonith device using pcs. There are no commands under 'pcs stonith' capable of disabling a fencing device. Someone would have to use 'pcs resource disable', which isn't stonith-specific. This may work, but I'm not sure it is supported.

--- Additional comment from Andrew Beekhof on 2015-07-15 19:06:51 EDT ---

Disable is supposed to mark it as unusable for fencing.

+++

As seen in the above exchange from another bug, there is some confusion among users and even developers as to when "pcs resource" should be used with stonith resources. (The original bug is otherwise unrelated to this one.)

I recommend we either (1) make command aliases so that "pcs stonith" can be used for anything allowed with stonith resources (e.g. "pcs stonith disable"); or (2) update the pcs man page (and any other relevant documentation) to say what "pcs resource" commands are appropriate to use with stonith resources.
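
For illustration, the gap looks like this on the command line (a minimal sketch; the stonith resource name "fence-node1" is hypothetical):

  # Today, disabling a fencing device requires the generic resource command:
  pcs resource disable fence-node1
  # Option (1) would add a stonith-specific alias for it, e.g.:
  pcs stonith disable fence-node1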

Comment 3 Tomas Jelinek 2016-10-19 12:13:57 UTC
From the original bz1240330 comment 20 we can see that pacemaker supports disabling stonith resources as well as using constraints on stonith resources.
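
For example, both of the following are accepted (a minimal sketch; "fence-node1" and "node1" are hypothetical names):

  # Disable a stonith resource:
  pcs resource disable fence-node1
  # Constrain where a stonith resource may run:
  pcs constraint location fence-node1 avoids node1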

Comment 4 Ivan Devat 2017-01-03 15:51:50 UTC
Here is another case where a stonith resource acts as a common resource.
With "pcs stonith create" it is currently possible to use the flags --master and --group. This is not in the documentation, but it works the same way as in "pcs resource create".

Note that the flag --clone (unlike the flag --master) has no effect in "pcs stonith create".

There are two possibilities: document these flags (if they make sense) or remove this behavior (if it does not make sense).

Does it make sense to create a stonith resource in a group or as a master?

Comment 5 Ken Gaillot 2017-01-03 21:34:17 UTC
(In reply to Ivan Devat from comment #4)
> Here is another case where a stonith resource acts as a common resource.
> With "pcs stonith create" it is currently possible to use the flags
> --master and --group. This is not in the documentation, but it works the
> same way as in "pcs resource create".
> 
> Note that the flag --clone (unlike the flag --master) has no effect in
> "pcs stonith create".
> 
> There are two possibilities: document these flags (if they make sense) or
> remove this behavior (if it does not make sense).
> 
> Does it make sense to create a stonith resource in a group or as a master?

I don't think there's anything in Pacemaker preventing that, but it's probably a bad idea.

There has been some discussion of cloning stonith devices -- for example, see Bug 1250314 and this mailing list thread:

  http://oss.clusterlabs.org/pipermail/pacemaker/2014-July/022157.html

but the conclusion has been that cloning a stonith resource is probably an undesirable model. A master/slave clone would be even less desirable. I think we would be fine removing the flags.

Comment 6 Moullé Alain 2017-01-04 09:39:33 UTC
Hi,
Just a thought about a "group of stonith resources": it could be interesting to configure a group of stonith resources when using multiple stonith resources for one node: a first stonith resource with agent fence_ipmilan (with diag action) and a second stonith resource with agent fence_kdump, both at level 1, plus a fence_ipmilan (action reboot) at level 2. As the three stonith resources target the same IP and equipment, if one is Failed the other two should also be Failed at the same time, so it could be interesting to make a group of these three stonith resources just to easily locate them on the same node (ordering inside the group is not really important in this case, but it also does not matter).
It would be nice to group the three stonith resources for one node, for the location constraint, and for the display order in pcs status.
See case 01539908, specifically the message from John Ruemker on May 12 2016 at 03:58 PM +02:00, about this specific stonith configuration.

Alain Moullé

Comment 7 Ken Gaillot 2017-01-04 23:46:08 UTC
(In reply to Moullé Alain from comment #6)
> Hi,
> Just a thought about a "group of stonith resources": it could be
> interesting to configure a group of stonith resources when using multiple
> stonith resources for one node: a first stonith resource with agent
> fence_ipmilan (with diag action) and a second stonith resource with agent
> fence_kdump, both at level 1, plus a fence_ipmilan (action reboot) at
> level 2. As the three stonith resources target the same IP and equipment,
> if one is Failed the other two should also be Failed at the same time, so
> it could be interesting to make a group of these three stonith resources
> just to easily locate them on the same node (ordering inside the group is
> not really important in this case, but it also does not matter).
> It would be nice to group the three stonith resources for one node, for
> the location constraint, and for the display order in pcs status.
> See case 01539908, specifically the message from John Ruemker on May 12
> 2016 at 03:58 PM +02:00, about this specific stonith configuration.
> 
> Alain Moullé

That is a good use case. We can keep --group and get rid of --clone/--master. I've never tried such a configuration, so we should test to make sure it behaves as expected.
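
For illustration, such a layout might be configured along these lines (an untested sketch, as noted above; all names are hypothetical, and agent-specific options such as IPMI addresses and credentials are omitted):

  pcs stonith create fence-n1-kdump fence_kdump pcmk_host_list=node1 --group fence-n1
  pcs stonith create fence-n1-diag fence_ipmilan pcmk_host_list=node1 --group fence-n1
  pcs stonith create fence-n1-reboot fence_ipmilan pcmk_host_list=node1 --group fence-n1
  pcs stonith level add 1 node1 fence-n1-kdump fence-n1-diag
  pcs stonith level add 2 node1 fence-n1-reboot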

On a related note, I suspect it's a bad idea to group a stonith resource with non-stonith resources. I'd have to investigate that to be sure.

Comment 8 Moullé Alain 2017-01-05 10:25:34 UTC
Hi,
I fully agree that a group with stonith and any "standard" resource would be meaningless.
Thanks
Regards
Alain Moullé

Comment 9 Klaus Wenninger 2017-01-05 10:45:10 UTC
I haven't tried it, but thinking of more or less preferable ways of
fencing, one could imagine cases where an ordered group, even one
including a non-fencing resource, might make sense:
You might have one way of fencing that works in any case but has some
drawbacks, whatever they might be.
And you might have a second way of fencing that needs certain
preconditions to be available, but would be preferable when they are.
Those preconditions might themselves be resources. Imagine an openvpn
tunnel to the admin network the physical stonith device is in.
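
A rough sketch of that idea (purely illustrative and untested; the systemd unit and all names are hypothetical, and whether pacemaker's start ordering permits such a group is exactly the open question):

  # Precondition resource: a tunnel into the admin network
  pcs resource create vpn-tunnel systemd:openvpn-client@admin
  # Preferred fencing device, ordered after the tunnel it depends on
  pcs stonith create fence-n1-ipmi fence_ipmilan pcmk_host_list=node1
  pcs resource group add fence-pref vpn-tunnel fence-n1-ipmi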

Comment 10 Ken Gaillot 2017-01-05 14:37:56 UTC
(In reply to Klaus Wenninger from comment #9)
> I haven't tried it, but thinking of more or less preferable ways of
> fencing, one could imagine cases where an ordered group, even one
> including a non-fencing resource, might make sense:
> You might have one way of fencing that works in any case but has some
> drawbacks, whatever they might be.
> And you might have a second way of fencing that needs certain
> preconditions to be available, but would be preferable when they are.
> Those preconditions might themselves be resources. Imagine an openvpn
> tunnel to the admin network the physical stonith device is in.

While that's a theoretical possibility, I'm pretty sure pacemaker has built-in logic to always start stonith devices first (which makes sense), and I don't know how putting one in a group after a non-stonith resource would interact with that.

I don't think it's a strong use case, because if something goes wrong with the depended-on non-stonith resource, there may be no way to recover.

Comment 11 Klaus Wenninger 2017-01-05 14:51:24 UTC
(In reply to Ken Gaillot from comment #10)
> (In reply to Klaus Wenninger from comment #9)
> > I haven't tried it, but thinking of more or less preferable ways of
> > fencing, one could imagine cases where an ordered group, even one
> > including a non-fencing resource, might make sense:
> > You might have one way of fencing that works in any case but has some
> > drawbacks, whatever they might be.
> > And you might have a second way of fencing that needs certain
> > preconditions to be available, but would be preferable when they are.
> > Those preconditions might themselves be resources. Imagine an openvpn
> > tunnel to the admin network the physical stonith device is in.
> 
> While that's a theoretical possibility, I'm pretty sure pacemaker has
> built-in logic to always start stonith devices first (which makes sense),
> and I don't know how putting one in a group after a non-stonith resource
> would interact with that.

Maybe that logic exists by now; at least startup usually looks as if it
does. That's why I said I haven't tried it. And maybe the ordering is not
a hard guarantee: I have a vague memory, from very far in the past, of
cases where a plain resource came up before the stonith-device start had
completed.

> 
> I don't think it's a strong use case, because if something goes wrong
> with the depended-on non-stonith resource, there may be no way to recover.

The idea was that the fallback stonith device would be used to recover if
the node becomes unclean because of that resource. The fallback here might
be watchdog fencing as well.

Comment 15 RHEL Program Management 2021-03-01 07:32:57 UTC
After evaluating this issue, there are no plans to address it further or fix it in an upcoming release.  Therefore, it is being closed.  If plans change such that this issue will be fixed in an upcoming release, then the bug can be reopened.

Comment 17 Ondrej Mular 2022-03-25 09:49:12 UTC
Upstream patch: https://github.com/ClusterLabs/pcs/commit/1f225199e02c8d20456bb386f4c913c3ff21ac78

Usage of `pcs resource` commands for stonith resources, and of `pcs stonith` commands for non-stonith resources, has been deprecated. Where a stonith equivalent of a resource sub-command was missing, either a separate sub-command or an alias of the resource sub-command was added under `pcs stonith`.

The option of mismatching resource types with pcs commands will be removed in a future release.

Comment 19 Miroslav Lisik 2022-05-19 17:11:56 UTC
DevTestResults:

[root@r91-1 ~]# rpm -q pcs
pcs-0.11.2-1.el9.x86_64

[root@r91-1 ~]# pcs stonith
  * fence-r91-1 (stonith:fence_xvm):     Started r91-2
  * fence-r91-2 (stonith:fence_xvm):     Started r91-1
[root@r91-1 ~]# pcs resource
  * d1  (ocf:pacemaker:Dummy):   Started r91-1

[root@r91-1 ~]# pcs resource disable fence-r91-1
Deprecation Warning: Ability of this command to accept stonith resources is deprecated and will be removed in a future release. Please use 'pcs stonith disable' instead.
[root@r91-1 ~]# pcs stonith
  * fence-r91-1 (stonith:fence_xvm):     Stopped (disabled)
  * fence-r91-2 (stonith:fence_xvm):     Started r91-1

Comment 23 Michal Mazourek 2022-07-26 14:13:12 UTC
BEFORE:
=======

[root@virt-535 ~]# rpm -q pcs
pcs-0.11.1-1.el9.x86_64


[root@virt-535 ~]# pcs resource disable fence-virt-533
[root@virt-535 ~]# echo $?
0
[root@virt-535 ~]# pcs stonith
  * fence-virt-533	(stonith:fence_xvm):	 Stopped (disabled)
  * fence-virt-535	(stonith:fence_xvm):	 Started virt-535


AFTER:
======

[root@virt-482 ~]# rpm -q pcs
pcs-0.11.3-1.el9.x86_64


## Testing that using pcs resource command on stonith resources either won't work or will print deprecation warning (the commands will still be in effect)

[root@virt-482 ~]# pcs resource --help | grep resource\ id
    [status [<resource id | tag id>] [node=<node>] [--hide-inactive]]
    config [--output-format text|cmd|json] [<resource id>]...
        Show options of all currently configured resources or if resource ids
        are specified show the options for the specified resource ids.
    create <resource id> [<standard>:[<provider>:]]<type> [resource options]
           --group <group id> [--before <resource id> | --after <resource id>] |
    delete <resource id|group id|bundle id|clone id>
    remove <resource id|group id|bundle id|clone id>
    enable <resource id | tag id>... [--wait[=n]]
    disable <resource id | tag id>... [--safe [--brief] [--no-strict]]
    safe-disable <resource id | tag id>... [--brief] [--no-strict]
    restart <resource id> [node] [--wait=n]
    debug-start <resource id> [--full]
    debug-stop <resource id> [--full]
    debug-promote <resource id> [--full]
    debug-demote <resource id> [--full]
    debug-monitor <resource id> [--full]
    move <resource id> [destination node] [--promoted] [--strict] [--wait[=n]]
        resource id).
    move-with-constraint <resource id> [destination node] [lifetime=<lifetime>]
        resource id).
    ban <resource id> [node] [--promoted] [lifetime=<lifetime>] [--wait[=n]]
        Prevent the resource id specified from running on the node (or on the
        resource id).
    clear <resource id> [node] [--promoted] [--expired] [--wait[=n]]
        resource id).
    update <resource id> [resource options] [op [<operation action> <operation
    op add <resource id> <operation action> [operation properties]
    op delete <resource id> <operation action> [<operation properties>...]
    op remove <resource id> <operation action> [<operation properties>...]
    meta <resource id | group id | clone id> <meta options> [--wait[=n]]
    group add <group id> <resource id> [resource id] ... [resource id]
              [--before <resource id> | --after <resource id>] [--wait[=n]]
    group delete <group id> [<resource id>]... [--wait[=n]]
    group remove <group id> [<resource id>]... [--wait[=n]]
    ungroup <group id> [<resource id>]... [--wait[=n]]
    clone <resource id | group id> [<clone id>] [clone options]... [--wait[=n]]
    promotable <resource id | group id> [<clone id>] [clone options]...
        an alias for 'pcs resource clone <resource id> promotable=true'.
    unclone <clone id | resource id | group id> [--wait[=n]]
        'pcs resource update' command instead and specify the resource id.
    manage <resource id | tag id>... [--monitor]
    unmanage <resource id | tag id>... [--monitor]
    cleanup [<resource id | stonith id>] [node=<node>] [operation=<operation>
        If a resource id / stonith id is not specified then all resources /
    refresh [<resource id | stonith id>] [node=<node>] [--strict]
        If a resource id / stonith id is not specified then all resources /
    failcount [show [<resource id | stonith id>] [node=<node>]
    utilization [<resource id> [<name>=<value> ...]]
    relations <resource id> [--full]


[root@virt-482 ~]# pcs stonith
  * fence-virt-482	(stonith:fence_xvm):	 Started virt-482
  * fence-virt-498	(stonith:fence_xvm):	 Started virt-498

[root@virt-482 ~]# pcs resource status fence-virt-482
  * fence-virt-482	(stonith:fence_xvm):	 Started virt-482

> After discussion with the developer: status can't be split due to technical design, so the deprecation warning is not present here


[root@virt-482 ~]# pcs resource config fence-virt-482
Warning: Unable to find resource 'fence-virt-482'
Error: No resource found

> OK


[root@virt-482 ~]# pcs resource delete fence-virt-482
Deprecation Warning: Ability of this command to accept stonith resources is deprecated and will be removed in a future release. Please use 'pcs stonith delete' instead.
Attempting to stop: fence-virt-482... Stopped
[root@virt-482 ~]# echo $?
0
[root@virt-482 ~]# pcs stonith
  * fence-virt-498	(stonith:fence_xvm):	 Started virt-498

> OK


[root@virt-482 ~]# pcs resource disable fence-virt-498
Deprecation Warning: Ability of this command to accept stonith resources is deprecated and will be removed in a future release. Please use 'pcs stonith disable' instead.
[root@virt-482 ~]# echo $?
0
[root@virt-482 ~]# pcs stonith
  * fence-virt-498	(stonith:fence_xvm):	 Stopped (disabled)

> OK


[root@virt-482 ~]# pcs resource enable fence-virt-498
Deprecation Warning: Ability of this command to accept stonith resources is deprecated and will be removed in a future release. Please use 'pcs stonith enable' instead.
[root@virt-482 ~]# echo $?
0
[root@virt-482 ~]# pcs stonith
  * fence-virt-498	(stonith:fence_xvm):	 Started virt-482

> OK


[root@virt-482 ~]# pcs resource safe-disable fence-virt-498
Deprecation Warning: Ability of this command to accept stonith resources is deprecated and will be removed in a future release. Please use 'pcs stonith disable' instead.
Deprecation Warning: Ability of this command to accept stonith device is deprecated and will be removed in a future release.
[root@virt-482 ~]# echo $?
0
[root@virt-482 ~]# pcs stonith
  * fence-virt-498	(stonith:fence_xvm):	 Stopped (disabled)

> OK


[root@virt-482 ~]# pcs resource debug-start fence-virt-498
Deprecation Warning: Ability of this command to accept stonith resources is deprecated and will be removed in a future release.
Operation force-start for fence-virt-498 (stonith:fence_xvm) could not be executed (Error: Manual execution of the stonith standard is unsupported)
crm_resource: Error performing operation: Unimplemented
[root@virt-482 ~]# echo $?
3

> OK


[root@virt-482 ~]# pcs resource debug-stop fence-virt-498
Deprecation Warning: Ability of this command to accept stonith resources is deprecated and will be removed in a future release.
Operation force-stop for fence-virt-498 (stonith:fence_xvm) could not be executed (Error: Manual execution of the stonith standard is unsupported)
crm_resource: Error performing operation: Unimplemented
[root@virt-482 ~]# echo $?
3

> OK


[root@virt-482 ~]# pcs resource debug-promote fence-virt-498
Deprecation Warning: Ability of this command to accept stonith resources is deprecated and will be removed in a future release.
Operation force-promote for fence-virt-498 (stonith:fence_xvm) could not be executed (Error: Manual execution of the stonith standard is unsupported)
crm_resource: Error performing operation: Unimplemented
[root@virt-482 ~]# echo $?
3

> OK


[root@virt-482 ~]# pcs resource debug-demote fence-virt-498
Deprecation Warning: Ability of this command to accept stonith resources is deprecated and will be removed in a future release.
Operation force-demote for fence-virt-498 (stonith:fence_xvm) could not be executed (Error: Manual execution of the stonith standard is unsupported)
crm_resource: Error performing operation: Unimplemented
[root@virt-482 ~]# echo $?
3

> OK


[root@virt-482 ~]# pcs resource debug-monitor fence-virt-498
Deprecation Warning: Ability of this command to accept stonith resources is deprecated and will be removed in a future release.
Operation force-check for fence-virt-498 (stonith:fence_xvm) could not be executed (Error: Manual execution of the stonith standard is unsupported)
crm_resource: Error performing operation: Unimplemented
[root@virt-482 ~]# echo $?
3

> OK


[root@virt-482 ~]# pcs stonith enable fence-virt-498

[root@virt-482 ~]# pcs resource move fence-virt-498
Deprecation Warning: Ability of this command to accept stonith device is deprecated and will be removed in a future release.
Location constraint to move resource 'fence-virt-498' has been created
Waiting for the cluster to apply configuration changes...
Location constraint created to move resource 'fence-virt-498' has been removed
Waiting for the cluster to apply configuration changes...
resource 'fence-virt-498' is running on node 'virt-498'
[root@virt-482 ~]# echo $?
0

> OK


[root@virt-482 ~]# pcs resource ban fence-virt-498
Deprecation Warning: Ability of this command to accept stonith device is deprecated and will be removed in a future release.
Warning: Creating location constraint 'cli-ban-fence-virt-498-on-virt-498' with a score of -INFINITY for resource fence-virt-498 on virt-498.
	This will prevent fence-virt-498 from running on virt-498 until the constraint is removed
	This will be the case even if virt-498 is the last node in the cluster
[root@virt-482 ~]# echo $?
0

> OK


[root@virt-482 ~]# pcs resource clear fence-virt-498
Deprecation Warning: Ability of this command to accept stonith device is deprecated and will be removed in a future release.
Removing constraint: cli-ban-fence-virt-498-on-virt-498
[root@virt-482 ~]# echo $?
0

> OK


[root@virt-482 ~]# pcs resource update fence-virt-498 delay=1
Deprecation Warning: Ability of this command to accept stonith resources is deprecated and will be removed in a future release. Please use 'pcs stonith update' instead.
[root@virt-482 ~]# echo $?
0
[root@virt-482 ~]# pcs stonith config
Resource: fence-virt-498 (class=stonith type=fence_xvm)
  Attributes: fence-virt-498-instance_attributes
    delay=1
    pcmk_host_check=static-list
    pcmk_host_list=virt-498
    pcmk_host_map=virt-498:virt-498.cluster-qe.lab.eng.brq.redhat.com
  Operations:
    monitor: fence-virt-498-monitor-interval-60s
      interval=60s

> OK


[root@virt-482 ~]# pcs resource op add fence-virt-498 a interval=61s
Deprecation Warning: Ability of this command to accept stonith resources is deprecated and will be removed in a future release. Please use 'pcs stonith op add' instead.
[root@virt-482 ~]# echo $?
0
[root@virt-482 ~]# pcs stonith
  * fence-virt-498	(stonith:fence_xvm):	 FAILED virt-482
[root@virt-482 ~]# pcs stonith config
Resource: fence-virt-498 (class=stonith type=fence_xvm)
  Attributes: fence-virt-498-instance_attributes
    delay=1
    pcmk_host_check=static-list
    pcmk_host_list=virt-498
    pcmk_host_map=virt-498:virt-498.cluster-qe.lab.eng.brq.redhat.com
  Operations:
    monitor: fence-virt-498-monitor-interval-60s
      interval=60s
    a: fence-virt-498-a-interval-61s
      interval=61s

> OK


[root@virt-482 ~]# pcs resource op remove fence-virt-498 a interval=61s
Deprecation Warning: Ability of this command to accept stonith resources is deprecated and will be removed in a future release. Please use 'pcs stonith op delete' instead.
[root@virt-482 ~]# echo $?
0
[root@virt-482 ~]# pcs stonith config
Resource: fence-virt-498 (class=stonith type=fence_xvm)
  Attributes: fence-virt-498-instance_attributes
    delay=1
    pcmk_host_check=static-list
    pcmk_host_list=virt-498
    pcmk_host_map=virt-498:virt-498.cluster-qe.lab.eng.brq.redhat.com
  Operations:
    monitor: fence-virt-498-monitor-interval-60s
      interval=60s

> OK


[root@virt-482 ~]# pcs resource meta fence-virt-498 resource-stickiness=4
Deprecation Warning: Ability of this command to accept stonith resources is deprecated and will be removed in a future release. Please use 'pcs stonith meta' instead.
[root@virt-482 ~]# echo $?
0
[root@virt-482 ~]# pcs stonith config
Resource: fence-virt-498 (class=stonith type=fence_xvm)
  Attributes: fence-virt-498-instance_attributes
    delay=1
    pcmk_host_check=static-list
    pcmk_host_list=virt-498
    pcmk_host_map=virt-498:virt-498.cluster-qe.lab.eng.brq.redhat.com
  Meta Attributes: fence-virt-498-meta_attributes
    resource-stickiness=4
  Operations:
    monitor: fence-virt-498-monitor-interval-60s
      interval=60s

> OK


[root@virt-482 ~]# pcs resource group add g1 fence-virt-498
Deprecation Warning: Ability of this command to accept stonith resource is deprecated and will be removed in a future release.
[root@virt-482 ~]# echo $?
0

> OK


[root@virt-482 ~]# pcs resource group remove g1 fence-virt-498
[root@virt-482 ~]# echo $?
0

> After discussion with the developer, this is OK since the command for adding a group will be deprecated, making this command unreachable


[root@virt-482 ~]# pcs resource group add g1 fence-virt-498   
Deprecation Warning: Ability of this command to accept stonith resource is deprecated and will be removed in a future release.

[root@virt-482 ~]# pcs resource ungroup g1 fence-virt-498   
[root@virt-482 ~]# echo $?
0

> The same as 'pcs resource group remove'


[root@virt-482 ~]# pcs resource clone fence-virt-498   
Deprecation Warning: Ability of this command to accept stonith resources is deprecated and will be removed in a future release.
Error: No need to clone stonith resource 'fence-virt-498', any node can use a stonith resource (unless specifically banned) regardless of whether the stonith resource is running on that node or not, use --force to override
[root@virt-482 ~]# echo $?
1

> OK


[root@virt-482 ~]# pcs resource promotable fence-virt-498   
Deprecation Warning: Ability of this command to accept stonith resources is deprecated and will be removed in a future release.
Error: No need to clone stonith resource 'fence-virt-498', any node can use a stonith resource (unless specifically banned) regardless of whether the stonith resource is running on that node or not, use --force to override
[root@virt-482 ~]# echo $?
1

> OK


[root@virt-482 ~]# pcs resource manage fence-virt-498   
Deprecation Warning: Ability of this command to accept stonith resources is deprecated and will be removed in a future release.
[root@virt-482 ~]# echo $?
0

> OK


[root@virt-482 ~]# pcs resource unmanage fence-virt-498   
Deprecation Warning: Ability of this command to accept stonith resources is deprecated and will be removed in a future release.
[root@virt-482 ~]# echo $?
0
[root@virt-482 ~]# pcs stonith
  * fence-virt-498	(stonith:fence_xvm):	 Started virt-482 (unmanaged)

> OK

[root@virt-482 ~]# pcs resource manage fence-virt-498   
Deprecation Warning: Ability of this command to accept stonith resources is deprecated and will be removed in a future release.


[root@virt-482 ~]# pcs resource cleanup fence-virt-498   
Cleaned up fence-virt-498 on virt-498
Cleaned up fence-virt-498 on virt-482
[root@virt-482 ~]# echo $?
0

[root@virt-482 ~]# pcs resource cleanup --help

Usage: pcs resource cleanup...
    cleanup [<resource id | stonith id>] [node=<node>] [operation=<operation>
            [interval=<interval>]] [--strict]
{...}
[root@virt-482 ~]# pcs stonith cleanup --help

Usage: pcs stonith cleanup...
    cleanup [<resource id | stonith id>] [node=<node>] [operation=<operation>
            [interval=<interval>]] [--strict]
        This command is an alias of 'resource cleanup' command.
{...}

> This is OK, since 'pcs stonith cleanup' is an alias of 'pcs resource cleanup'; according to the documentation, both resource id and stonith id are accepted


[root@virt-482 ~]# pcs resource refresh fence-virt-498   
Cleaned up fence-virt-498 on virt-498
Cleaned up fence-virt-498 on virt-482
Waiting for 2 replies from the controller
... got reply
... got reply (done)
[root@virt-482 ~]# echo $?
0

[root@virt-482 ~]# pcs resource refresh --help

Usage: pcs resource refresh...
    refresh [<resource id | stonith id>] [node=<node>] [--strict]
{...}
[root@virt-482 ~]# pcs stonith refresh --help

Usage: pcs stonith refresh...
    refresh [<resource id | stonith id>] [node=<node>] [--strict]
        This command is an alias of 'resource refresh' command.
{...}

> This is OK, since 'pcs stonith refresh' is an alias of 'pcs resource refresh'; according to the documentation, both resource id and stonith id are accepted


[root@virt-482 ~]# pcs resource failcount show fence-virt-498   
No failcounts for resource 'fence-virt-498'
[root@virt-482 ~]# echo $?
0

[root@virt-482 ~]# pcs resource failcount show --help

Usage: pcs resource failcount...
    failcount [show [<resource id | stonith id>] [node=<node>]
            [operation=<operation> [interval=<interval>]]] [--full]
{...}
[root@virt-482 ~]# pcs stonith failcount show --help

Usage: pcs stonith failcount...
    failcount [show [<resource id | stonith id>] [node=<node>]
            [operation=<operation> [interval=<interval>]]] [--full]
        This command is an alias of 'resource failcount show' command.
{...}

> This is OK, since 'pcs stonith failcount' is an alias of 'pcs resource failcount'; according to the documentation, both resource id and stonith id are accepted


[root@virt-482 ~]# pcs resource utilization fence-virt-498 cpu=2
Deprecation Warning: Ability of this command to accept stonith resources is deprecated and will be removed in a future release.
[root@virt-482 ~]# echo $?
0

> OK


[root@virt-482 ~]# pcs resource relations fence-virt-498
Deprecation Warning: Ability of this command to accept stonith resource is deprecated and will be removed in a future release.
fence-virt-498
[root@virt-482 ~]# echo $?
0

> OK


## Testing that using pcs stonith command on non-stonith resources either won't work or will print deprecation warning (the commands will still be in effect)

[root@virt-482 ~]# pcs stonith --help | grep stonith\ id
    config [--output-format text|cmd|json] [<stonith id>]...
    create <stonith id> <stonith device type> [stonith device options]
           [--group <group id> [--before <stonith id> | --after <stonith id>]]
    update <stonith id> [stonith options] [op [<operation action> <operation
    update-scsi-devices <stonith id> (set <device-path> [<device-path>...])
    delete <stonith id>
        Remove stonith id from configuration.
    remove <stonith id>
        Remove stonith id from configuration.
    op add <stonith id> <operation action> [operation properties]
    op delete <stonith id> <operation action> [<operation properties>...]
    op remove <stonith id> <operation action> [<operation properties>...]
    meta <stonith id> <meta options> [--wait[=n]]
    cleanup [<resource id | stonith id>] [node=<node>] [operation=<operation>
        If a resource id / stonith id is not specified then all resources /
    refresh [<resource id | stonith id>] [node=<node>] [--strict]
        If a resource id / stonith id is not specified then all resources /
    failcount [show [<resource id | stonith id>] [node=<node>]
    enable <stonith id>... [--wait[=n]]
    disable <stonith id>... [--wait[=n]]
    level add <level> <target> <stonith id> [stonith id]...
    level delete <level> [target <target>] [stonith <stonith id>...]
    level remove <level> [target <target>] [stonith <stonith id>...]
    level clear [target <target> | stonith <stonith id>...]
        Clears the fence levels on the target (or stonith id) specified or
        clears all fence levels if a target/stonith id is not specified.

[root@virt-482 ~]# pcs resource create dummy1 ocf:heartbeat:Dummy
[root@virt-482 ~]# pcs resource create dummy2 ocf:heartbeat:Dummy


[root@virt-482 ~]# pcs stonith status dummy1
  * dummy1	(ocf:heartbeat:Dummy):	 Started virt-498

> After discussion with the developer: status can't be split due to technical design, so the deprecation warning is not present here


[root@virt-482 ~]# pcs stonith config dummy1
Warning: Unable to find stonith device 'dummy1'
Error: No stonith device found
[root@virt-482 ~]# echo $?
1

> OK


[root@virt-482 ~]# pcs stonith update dummy1 fake=1
Deprecation Warning: Ability of this command to accept resources is deprecated and will be removed in a future release. Please use 'pcs resource update' instead.
[root@virt-482 ~]# echo $?
0
[root@virt-482 ~]# pcs resource config | grep fake
    fake=1

> OK


[root@virt-482 ~]# pcs stonith update-scsi-devices dummy1 set invalidpath
Error: Resource 'dummy1' is not a stonith resource or its type 'Dummy' is not supported for devices update. Supported types: 'fence_mpath', 'fence_scsi'
Error: Errors have occurred, therefore pcs is unable to continue
[root@virt-482 ~]# echo $?
1

> OK


[root@virt-482 ~]# pcs stonith delete dummy2
Deprecation Warning: Ability of this command to accept resources is deprecated and will be removed in a future release. Please use 'pcs resource delete' instead.
Attempting to stop: dummy2... Stopped
[root@virt-482 ~]# echo $?
0
[root@virt-482 ~]# pcs resource config dummy2
Warning: Unable to find resource 'dummy2'
Error: No resource found

> OK


[root@virt-482 ~]# pcs stonith op add dummy1 a interval=1s
Deprecation Warning: Ability of this command to accept resources is deprecated and will be removed in a future release. Please use 'pcs resource op add' instead.
[root@virt-482 ~]# echo $?
0
[root@virt-482 ~]# pcs resource config | grep interval=1s -B 1
    a: dummy1-a-interval-1s
      interval=1s

> OK


[root@virt-482 ~]# pcs stonith op remove dummy1 a interval=1s
Deprecation Warning: Ability of this command to accept resources is deprecated and will be removed in a future release. Please use 'pcs resource op delete' instead.
[root@virt-482 ~]# echo $?
0
[root@virt-482 ~]# pcs resource config | grep interval=1s -B 1
[root@virt-482 ~]# echo $?
1

> OK


[root@virt-482 ~]# pcs stonith meta dummy1 resource-stickiness=4
Deprecation Warning: Ability of this command to accept resources is deprecated and will be removed in a future release. Please use 'pcs resource meta' instead.
[root@virt-482 ~]# echo $?
0
[root@virt-482 ~]# pcs resource config | grep stickiness
    resource-stickiness=4

> OK


[root@virt-482 ~]# pcs stonith failcount show dummy1
Failcounts for resource 'dummy1'
  virt-498: 445

[root@virt-482 ~]# pcs stonith failcount show --help

Usage: pcs stonith failcount...
    failcount [show [<resource id | stonith id>] [node=<node>]
            [operation=<operation> [interval=<interval>]]] [--full]
        This command is an alias of 'resource failcount show' command.
{...}

> This is OK, since 'pcs stonith failcount' is an alias of 'pcs resource failcount'; according to the documentation, both resource id and stonith id are accepted


[root@virt-482 ~]# pcs stonith disable dummy1
Deprecation Warning: Ability of this command to accept resources is deprecated and will be removed in a future release. Please use 'pcs resource disable' instead.
[root@virt-482 ~]# echo $?
0
[root@virt-482 ~]# pcs resource status dummy1
  * dummy1	(ocf:heartbeat:Dummy):	 Stopped (disabled)

> OK


[root@virt-482 ~]# pcs stonith enable dummy1
Deprecation Warning: Ability of this command to accept resources is deprecated and will be removed in a future release. Please use 'pcs resource enable' instead.
[root@virt-482 ~]# echo $?
0
[root@virt-482 ~]# pcs resource status dummy1
  * dummy1	(ocf:heartbeat:Dummy):	 Started virt-498

> OK


[root@virt-482 ~]# pcs stonith cleanup dummy1
Cleaned up dummy1 on virt-498
Cleaned up dummy1 on virt-482
[root@virt-482 ~]# echo $?
0

[root@virt-482 ~]# pcs stonith cleanup --help

Usage: pcs stonith cleanup...
    cleanup [<resource id | stonith id>] [node=<node>] [operation=<operation>
            [interval=<interval>]] [--strict]
        This command is an alias of 'resource cleanup' command.
{...}

> This is OK, since 'pcs stonith cleanup' is an alias of 'pcs resource cleanup'; according to the documentation, both resource id and stonith id are accepted


[root@virt-482 ~]# pcs stonith refresh dummy1
Cleaned up dummy1 on virt-498
Cleaned up dummy1 on virt-482
Waiting for 2 replies from the controller
... got reply
... got reply (done)
[root@virt-482 ~]# echo $?
0

[root@virt-482 ~]# pcs stonith refresh --help

Usage: pcs stonith refresh...
    refresh [<resource id | stonith id>] [node=<node>] [--strict]
        This command is an alias of 'resource refresh' command.
{...}

> This is OK, since 'pcs stonith refresh' is an alias of 'pcs resource refresh'; according to the documentation, both resource id and stonith id are accepted



[root@virt-482 ~]# pcs stonith level add 1 virt-482 dummy1
Deprecation Warning: Ability of this command to accept resources is deprecated and will be removed in a future release.
Error: Stonith resource(s) 'dummy1' do not exist, use --force to override
Error: Errors have occurred, therefore pcs is unable to continue
[root@virt-482 ~]# echo $?
1

> OK


Marking as VERIFIED for pcs-0.11.3-1.el9

Comment 30 errata-xmlrpc 2022-11-15 09:48:38 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Moderate: pcs security, bug fix, and enhancement update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2022:7935

