Bug 1748139 - [RFE] crm_mon: Show "(disabled)" next to group when entire group is disabled
Summary: [RFE] crm_mon: Show "(disabled)" next to group when entire group is disabled
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 8
Classification: Red Hat
Component: pacemaker
Version: 8.1
Hardware: All
OS: Linux
Priority: high
Severity: low
Target Milestone: pre-dev-freeze
Target Release: 8.4
Assignee: Chris Lumens
QA Contact: cluster-qe@redhat.com
URL:
Whiteboard:
Depends On: 1752538 1885645
Blocks:
 
Reported: 2019-09-02 23:59 UTC by Reid Wahl
Modified: 2024-10-01 16:20 UTC
CC: 4 users

Fixed In Version: pacemaker-2.0.5-1.el8
Doc Type: No Doc Update
Doc Text:
This is a small self-evident change not worth a release note.
Clone Of:
Environment:
Last Closed: 2021-05-18 15:26:41 UTC
Type: Feature Request
Target Upstream Version:
Embargoed:




Links:
Red Hat Knowledge Base (Solution) 4388171, last updated 2019-09-03 00:12:30 UTC

Internal Links: 1875632

Description Reid Wahl 2019-09-02 23:59:28 UTC
Description of problem:

When a resource group is disabled, `crm_mon` shows individual resources as disabled rather than showing the group as disabled.

# cibadmin -l -Q --scope=configuration
...
    <group id="test_grp">
      <meta_attributes id="test_grp-meta_attributes">
        <nvpair id="test_grp-meta_attributes-target-role" name="target-role" value="Stopped"/>
      </meta_attributes>
      <primitive class="ocf" id="test_rsc" provider="heartbeat" type="Dummy">
        <operations>
          <op id="test_rsc-migrate_from-interval-0s" interval="0s" name="migrate_from" timeout="20s"/>
          <op id="test_rsc-migrate_to-interval-0s" interval="0s" name="migrate_to" timeout="20s"/>
          <op id="test_rsc-monitor-interval-10s" interval="10s" name="monitor" timeout="20s"/>
          <op id="test_rsc-reload-interval-0s" interval="0s" name="reload" timeout="20s"/>
          <op id="test_rsc-start-interval-0s" interval="0s" name="start" timeout="20s"/>
          <op id="test_rsc-stop-interval-0s" interval="0s" name="stop" timeout="20s"/>
        </operations>
      </primitive>
    </group>


# crm_mon --one-shot --inactive
...
 Resource Group: test_grp
     test_rsc	(ocf::heartbeat:Dummy):	Stopped (disabled)

-----

Version-Release number of selected component (if applicable):

pacemaker-2.0.1-4.el8_0.3.x86_64

-----

How reproducible:

always

-----

Steps to Reproduce:
1. Create a resource group with at least one resource and disable the group.
2. Run `crm_mon --one-shot --inactive`.

-----

Actual results:

# crm_mon --one-shot --inactive
...
 Resource Group: test_grp
     test_rsc	(ocf::heartbeat:Dummy):	Stopped (disabled)

-----

Expected results:

# crm_mon --one-shot --inactive
...
 Resource Group: test_grp (disabled)
     test_rsc	(ocf::heartbeat:Dummy):	Stopped

-----

Additional info:

Customer proposed this change in support case 02462176 for clarity of output. Cosmetic only. The CIB clearly shows at what level resources are disabled.
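The description's point that "the CIB clearly shows at what level resources are disabled" can be illustrated with a minimal sketch (not Pacemaker code; just Python's standard library against the CIB fragment shown above) that reports which elements carry `target-role=Stopped` directly:

```python
# A minimal sketch, assuming only the CIB fragment from the description:
# find which elements (group vs. primitive) set target-role=Stopped directly.
import xml.etree.ElementTree as ET

CIB_FRAGMENT = """
<group id="test_grp">
  <meta_attributes id="test_grp-meta_attributes">
    <nvpair id="test_grp-meta_attributes-target-role"
            name="target-role" value="Stopped"/>
  </meta_attributes>
  <primitive class="ocf" id="test_rsc" provider="heartbeat" type="Dummy"/>
</group>
"""

def disabled_levels(xml_text):
    """Return ids of elements that carry target-role=Stopped directly."""
    root = ET.fromstring(xml_text)
    hits = []
    for elem in [root] + list(root.iter("primitive")):
        for nv in elem.findall("./meta_attributes/nvpair"):
            if nv.get("name") == "target-role" and nv.get("value") == "Stopped":
                hits.append(elem.get("id"))
    return hits

print(disabled_levels(CIB_FRAGMENT))  # → ['test_grp']
```

Here the disable lives only on the group, which is exactly why showing "(disabled)" on the primitive alone is misleading.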

Comment 3 Chris Lumens 2020-09-01 20:18:48 UTC
Patch in the works:  https://github.com/ClusterLabs/pacemaker/pull/2158

Comment 4 Ken Gaillot 2020-09-01 20:51:49 UTC
Would it be better to show "(disabled)" by the group name only, or by the group name and each resource? I'm thinking the latter in case someone is (for example) grepping the output for one of the group member names. In other words, like this:

   Resource Group: test_grp (disabled)
        test_rsc	(ocf::heartbeat:Dummy):	Stopped (disabled)

Comment 5 Reid Wahl 2020-09-01 20:59:30 UTC
My inclination was the former when I opened this BZ: "(disabled)" next to group name if whole group is disabled, "(disabled)" next to primitive name if just the primitive is disabled, and "(disabled)" next to both if they're disabled at both the group and the primitive level (disabled twice for whatever reason and thus needing to be enabled twice to revert). That shows the reader exactly at what level(s) elements are disabled.

OTOH, your thought is a good reason for printing "(disabled)" by both the group name and the primitive name if only the group is disabled. Grepping the primitive name shows at a glance that it is disabled, even if the disable is *only inherited.*


I'm good with either approach. The first paragraph was my original rationale though. It makes the output a one-to-one mapping with the target-role configuration.

Comment 6 Ken Gaillot 2020-09-01 21:03:33 UTC
(In reply to Reid Wahl from comment #5)
> My inclination was the former when I opened this BZ: "(disabled)" next to
> group name if whole group is disabled, "(disabled)" next to primitive name
> if just the primitive is disabled, and "(disabled)" next to both if they're
> disabled at both the group and the primitive level (disabled twice for
> whatever reason and thus needing to be enabled twice to revert). That shows
> the reader exactly at what level(s) elements are disabled.

That wouldn't strictly be the case with that approach -- both the group and its members inherit from rsc_defaults, so it could be set there as well. If it's a cloned group, they inherit from the clone meta-data block as well.

Comment 7 Reid Wahl 2020-09-01 21:07:31 UTC
(In reply to Ken Gaillot from comment #6)
> That wouldn't strictly be the case with that approach -- both the group and
> its members inherit from rsc_defaults, so it could be set there as well. If
> it's a cloned group, they inherit from the clone meta-data block as well.

Ah, that's right. Yeah, I'm not gonna argue in favor of not displaying "(disabled)" **at all** if it's only inherited from rsc_defaults, which is where that path leads. So it's probably better we go with the latter approach.

Comment 8 Reid Wahl 2020-09-03 00:39:36 UTC
I had another thought related to this discussion. This is the first time I've considered target-role=Stopped as a resource default.

Let's say that after this PR is merged, we end up with the following crm_mon output when target-role=Stopped is set as a resource default and target-role is **not set** on any individual group or primitive.

 Resource Group: dumgrp (disabled)
     dummy1	(ocf::heartbeat:Dummy):	Stopped (disabled)
     dummy2	(ocf::heartbeat:Dummy):	Stopped (disabled)


There are two issues of counter-intuitive behavior IMO, and neither of them is new:
  - The `crm_mon` output does not mention the fact that these disables are only inherited from rsc_defaults.
    - We can fix that within this PR if you agree that it's worth noting somehow.

  - If a user runs `pcs resource enable dumgrp` or `pcs resource enable dummy1`, nothing happens.
    - The user doesn't know (without viewing the config) that the disable is only inherited, so it's natural to try to enable them this way.
    - `pcs resource enable` simply removes the target-role meta attribute. It **assumes** that the default is "target-role=Started". That's probably worth a pcs RFE. What do you think?

Comment 9 Reid Wahl 2020-09-03 01:53:45 UTC
(In reply to Reid Wahl from comment #8)
> There are two issues of counter-intuitive behavior IMO, and neither of them
> is new:
>   - The `crm_mon` output does not mention the fact that these disables are
> only inherited from rsc_defaults.
>     - We can fix that within this PR if you agree that it's worth noting
> somehow.
> 
>   - If a user runs `pcs resource enable dumgrp` or `pcs resource enable
> dummy1`, nothing happens.
>     - The user doesn't know (without viewing the config) that the disable is
> only inherited, so it's natural to try to enable them this way.
>     - `pcs resource enable` simply removes the target-role meta attribute.
> It **assumes** that the default is "target-role=Started". That's probably
> worth a pcs RFE. What do you think?

Point #1 would be a "nice-to-fix", but come to think of it, if we make the pcs change in point #2 then #1 makes little practical difference for folks who use pcs (i.e., almost all of RH's customers).

Comment 10 Ken Gaillot 2020-09-03 14:12:39 UTC
(In reply to Reid Wahl from comment #8)
> I had another thought related to this discussion. This is the first time
> I've considered target-role=Stopped as a resource default.
> 
> Let's say that after this PR is merged, we end up with the following crm_mon
> output when target-role=Stopped is set as a resource default and target-role
> is **not set** on any individual group or primitive.
> 
>  Resource Group: dumgrp (disabled)
>      dummy1	(ocf::heartbeat:Dummy):	Stopped (disabled)
>      dummy2	(ocf::heartbeat:Dummy):	Stopped (disabled)
> 
> 
> There are two issues of counter-intuitive behavior IMO, and neither of them
> is new:
>   - The `crm_mon` output does not mention the fact that these disables are
> only inherited from rsc_defaults.
>     - We can fix that within this PR if you agree that it's worth noting
> somehow.

My inclination is that it's only important to indicate that the resource is disabled, not how it is disabled. It's enough of an indicator to point the admin to the config for more details.

I wouldn't be opposed to making it "(disabled by configuration)" but crm_mon screen space is a scarce commodity.

It might be nice if it said "(disabled by rsc_defaults)", "(disabled by group meta-attributes)", or "(disabled by primitive meta-attributes)" but that would be significantly more complicated to do and not worth the effort in my opinion. Also there's the question of what to say if it's disabled in multiple places. And then another question of whether "(enabled by configuration)" should be shown if target-role=Stopped is in rsc_defaults.

>   - If a user runs `pcs resource enable dumgrp` or `pcs resource enable
> dummy1`, nothing happens.
>     - The user doesn't know (without viewing the config) that the disable is
> only inherited, so it's natural to try to enable them this way.
>     - `pcs resource enable` simply removes the target-role meta attribute.
> It **assumes** that the default is "target-role=Started". That's probably
> worth a pcs RFE. What do you think?

Definitely. If the user has target-role=Stopped in rsc_defaults, then an explicit target-role=Started will override that, but deleting the target-role won't.
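The override behavior described above can be modeled in a few lines. This is a simplified sketch of meta-attribute precedence (an assumption for illustration, not Pacemaker's actual resolution code): the first explicit setting wins, going primitive, then group, then rsc_defaults, with a built-in default of "Started".

```python
# Simplified model (illustrative only) of target-role precedence:
# primitive meta > group meta > rsc_defaults > built-in default.
BUILTIN_DEFAULT = "Started"

def effective_target_role(primitive_meta, group_meta, rsc_defaults):
    """Return the first explicitly set target-role, else the built-in default."""
    for scope in (primitive_meta, group_meta, rsc_defaults):
        if "target-role" in scope:
            return scope["target-role"]
    return BUILTIN_DEFAULT

defaults = {"target-role": "Stopped"}
# An explicit target-role=Started on the member overrides rsc_defaults:
assert effective_target_role({"target-role": "Started"}, {}, defaults) == "Started"
# But merely deleting the member's target-role (what `pcs resource enable`
# does) falls back to rsc_defaults, so the resource stays disabled:
assert effective_target_role({}, {}, defaults) == "Stopped"
```

This is why deleting the attribute cannot enable a resource that is disabled via rsc_defaults, while setting it explicitly can.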

To add another twist, there is also the stop-all-resources cluster property, which cannot be overridden by target-role. That is intended as a quick hammer for a sysadmin. That's not technically "disabled", so I wouldn't display that, but it probably should be shown somehow.

Comment 11 Reid Wahl 2020-09-03 18:49:18 UTC
(In reply to Ken Gaillot from comment #10)
> I wouldn't be opposed to making it "(disabled by configuration)" but crm_mon
> screen space is a scarce commodity.
> 
> It might be nice if it said "(disabled by rsc_defaults)", "(disabled by
> group meta-attributes)", or "(disabled by primitive meta-attributes)" but
> that would be significantly more complicated to do and not worth the effort
> in my opinion. Also there's the question of what to say if it's disabled in
> multiple places. And then another question of whether "(enabled by
> configuration)" should be shown if target-role=Stopped is in rsc_defaults.

Given the limited return on investment, you're probably right to leave it alone.


> Definitely. If the user has target-role=Stopped in rsc_defaults, then an
> explicit target-role=Started will override that, but deleting the
> target-role won't.

Right, and cool, I'll open the RFE soon.


> To add another twist, there is also the stop-all-resources cluster property,
> ... That's not technically "disabled", so I wouldn't
> display that, but it probably should be shown somehow.

Agreed, maybe as a notice at the top of crm_mon as we have done for things like "no stonith devices and stonith-enabled is not false".

--------------------

It occurred to me last night that we have almost this exact same "disabled" issue with "unmanaged".

[root@fastvm-rhel-8-0-23 pacemaker]# pcs resource unmanage dum_grp
[root@fastvm-rhel-8-0-23 pacemaker]# pcs status
...
  * Resource Group: dum_grp:
    * d1	(ocf::heartbeat:Dummy):	 Started node1 (unmanaged)
    * d2	(ocf::heartbeat:Dummy):	 Started node1 (unmanaged)

Groups vs. primitives:
  - "(unmanaged)": shows next to the primitives but not the group name when the group is unmanaged.
  - "(disabled)" shows next to the primitives but not the group name when the group is disabled.

Clones vs. primitives:
  - "(unmanaged)" shows **only** next to the primitives and not next to the clone name when the clone is unmanaged.
  - "(disabled)" shows **only** next to the primitives and not next to the clone name when the clone is disabled.

Resource defaults:
  - when resource defaults is-managed=false, "(unmanaged)" shows next to:
    - all primitives
    - no groups
    - all clones
  - when resource defaults target-role=Stopped, "(disabled)" shows next to:
    - all primitives
    - no groups
    - no clones

Properties:
  - when maintenance-mode=true:
    - "(unmanaged)" shows next to:
      - all primitives
      - no groups
      - all clones
    - A notice at the top of crm_mon says "*** Resource management is DISABLED ***.\nThe cluster will not attempt to start, stop or recover services".
  - when stop-all-resources=true:
    - "(disabled)" shows next to:
      - no primitives
      - no groups
      - no clones
    - There is no notice at the top of crm_mon.


Since the requests are so similar, would you like to consolidate them into this BZ and aim to make the display consistent for these two meta attributes? Or shall I open a separate BZ for unmanaged?

Comment 12 Ken Gaillot 2020-09-03 20:50:50 UTC
(In reply to Reid Wahl from comment #11)
> It occurred to me last night that we have almost this exact same "disabled"
> issue with "unmanaged".
> 
> [root@fastvm-rhel-8-0-23 pacemaker]# pcs resource unmanage dum_grp
> [root@fastvm-rhel-8-0-23 pacemaker]# pcs status
> ...
>   * Resource Group: dum_grp:
>     * d1	(ocf::heartbeat:Dummy):	 Started node1 (unmanaged)
>     * d2	(ocf::heartbeat:Dummy):	 Started node1 (unmanaged)
> 
> Groups vs. primitives:
>   - "(unmanaged)": shows next to the primitives but not the group name when
> the group is unmanaged.
>   - "(disabled)" shows next to the primitives but not the group name when
> the group is disabled.
> 
> Clones vs. primitives:
>   - "(unmanaged)" shows **only** next to the primitives and not next to the
> clone name when the clone is unmanaged.
>   - "(disabled)" shows **only** next to the primitives and not next to the
> clone name when the clone is disabled.

Instances, not primitives ... the clone name and primitive name are shown on one line, then the instances are shown (by node). That's comparable to how solitary primitives are shown: "(disabled)" appears after the location, not the name. On the other hand, if we're going for greppability, putting it after the clone/primitive names as well might make sense.

> Resource defaults:
>   - when resource defaults is-managed=false, "(unmanaged)" shows next to:
>     - all primitives
>     - no groups
>     - all clones

It's interesting clones are treated differently in this case. I think this is probably the right thing for both groups and clones, i.e. show it on the collective line and on each instance line.

>   - when resource defaults target-role=Stopped, "(disabled)" shows next to:
>     - all primitives
>     - no groups
>     - no clones
>
> Properties:
>   - when maintenance-mode=true:
>     - "(unmanaged)" shows next to:
>       - all primitives
>       - no groups
>       - all clones
>     - A notice at the top of crm_mon says "*** Resource management is
> DISABLED ***.\nThe cluster will not attempt to start, stop or recover
> services".
>   - when stop-all-resources=true:
>     - "(disabled)" shows next to:
>       - no primitives
>       - no groups
>       - no clones
>     - There is no notice at the top of crm_mon.
> 
> 
> Since the requests are so similar, would you like to consolidate them into
> this BZ and aim to make the display consistent for these two meta
> attributes? Or shall I open a separate BZ for unmanaged?

I think it makes sense to roll it into this BZ.

Other possible flags:

* ORPHANED: This will always be instance-only because when there is no configuration, there is no way to know that it used to be in a collective resource.

* FAILED and failure ignored: These make sense staying instance only. Would a collective resource be considered failed if all of its children failed, or if any one of them failed? Since failures are specific to both a primitive and node, I don't think it makes sense to collectivize them even if all instances happen to be failed.

* UNCLEAN and LOCKED: I could see an argument for showing these next to group names since all the instances are on the same node (which has the given state), though not for clones and bundles since each instance can be on a different node. I lean to leaving these as they are (instance-only).

* A pending action (e.g. "Starting"): This is actually something I did want to change eventually, because I do think it's suboptimal currently, especially for bundles. But it's a different (and complicated) enough question that we can leave it off this BZ.

* target-role:Slave: This could be worth showing by clones, but not a big deal either way.

* blocked: This should only happen for config options on-fail=block, multiple-active=block, and ticket loss-policy=freeze. I think these are node-specific enough to remain instance-only.

So, unmanaged makes sense to include here, but the rest can pretty much be ignored.

Comment 13 Reid Wahl 2020-09-03 21:16:28 UTC
(In reply to Ken Gaillot from comment #12)
> Instances, not primitives ... the clone name and primitive name are shown on
> one line, then the instances are shown (by node).

Ah, yes that's right. Then for all of the rest of comment 11, 's/primitives/solitary primitives and clone instances/g' and 's/clones/clone collective lines/g', or something similar.


> That's comparable to how solitary primitives are shown, "(disabled)" is after
> the location not the name. On the other hand, if we're going for greppability,
> putting it after the clone/primitive names as well might make sense.

Seems like it. Though since you put it that way, I can see the logic behind the current choice.


> I think it makes sense to roll it into this BZ.

Cool, sounds good.


> Other possible flags:
> 
> * ORPHANED: This will always be instance-only because when there is no
> configuration, there is no way to know that it used to be in a collective
> resource.
> 
> * FAILED and failure ignored: These make sense staying instance only. Would
> a collective resource be considered failed if all of its children failed, or
> if any one of them failed? Since failures are specific to both a primitive
> and node, I don't think it makes sense to collectivize them even if all
> instances happen to be failed.

Agreed.


> * UNCLEAN and LOCKED: I could see an argument for showing these next to
> group names since all the instances are on the same node (which has the
> given state), though not for clones and bundles since each instance can be
> on a different node. I lean to leaving these as they are (instance-only).

Due to the lack of clarity on how it "should" be, and to IMO a lack of benefit from collectivizing these state descriptors, I second leaving these as they are.


> * A pending action (e.g. "Starting"): This is actually something I did want
> to change eventually, because I do think it's suboptimal currently,
> especially for bundles. But it's a different (and complicated) enough
> question that we can leave it off this BZ.

Agreed.


> * target-role:Slave: This could be worth showing by clones, but not a big
> deal either way.

Showing it by clones would be more consistent with what we're doing for target-role=Stopped, but I don't think it adds a lot of clarity that we don't already have.

I **was** surprised to find that pe__resource_is_disabled() considers "target-role=Stopped" as disabled and returns true. That could cause a problem if we used the return value of that function to show "disabled" next to a clone, but we don't. Instead, we check the target_role directly via configured_role().


> * blocked: This should only happen for config options on-fail=block,
> multiple-active=block, and ticket loss-policy=freeze. I think these are
> node-specific enough to remain instance-only.
> 
> So, unmanaged makes sense to include here, but the rest can pretty much be
> ignored.

Agreed, sounds good.

Comment 14 Ken Gaillot 2020-09-09 23:07:55 UTC
There is another good reason to show "disabled"/"unmanaged" by both the collective resource and its individual members: the members can override the collective setting. For example, a group can have is-managed=false, but then one of its members can explicitly have is-managed=true to override it for that one member.

That also raises the question of whether the collective resource should show "disabled"/"unmanaged" based solely on the collective resource's settings, or whether it should require that every member be disabled/unmanaged. The former would be much simpler to implement, and is already used for showing unmanaged clones, so I'm thinking we should go with that.

For example, if group1 with target-role=Stopped contains rsc1 and rsc2, and rsc1 has an explicit target-role=Started, I think it should be shown as:

* Resource Group: group1 (disabled):
    * rsc1	(ocf::pacemaker:Dummy):	 Started
    * rsc2	(ocf::pacemaker:Dummy):	 Stopped (disabled)

where the group is listed as disabled even though one of its members is not.

Comment 15 Ken Gaillot 2020-09-15 17:39:19 UTC
Fix merged upstream as of https://github.com/ClusterLabs/pacemaker/pull/2158

QA: The final design was to show "(disabled)" and/or "(unmanaged)" by group and clone names if the group or clone itself was disabled or unmanaged. Individual group members or clone instances show the labels if they were directly modified or inherited the setting from their parent. If all group members are individually disabled or unmanaged, but the group itself is not, the group name will not show the labels.

Separately, crm_mon will now show a banner ("The cluster will keep all resources stopped") if the stop-all-resources cluster option is set to true.
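The merged display rule can be sketched as a small function (a hypothetical helper for illustration, not the actual pacemaker output code): the collective gets the label only when it is itself disabled, while members get it when disabled directly or by inheritance.

```python
# Sketch (illustrative, not pacemaker source) of the final display rule:
# the group name shows "(disabled)" only if the group itself is disabled;
# members show it if disabled directly or inherited from the group.
def labels(group_disabled, members_disabled_directly):
    group_label = "(disabled)" if group_disabled else ""
    member_labels = [
        "(disabled)" if (group_disabled or direct) else ""
        for direct in members_disabled_directly
    ]
    return group_label, member_labels

# Group disabled: label on the group name and (inherited) on every member.
assert labels(True, [False, False]) == ("(disabled)", ["(disabled)", "(disabled)"])
# All members individually disabled, group itself not: no label on the group.
assert labels(False, [True, True]) == ("", ["(disabled)", "(disabled)"])
```

The same shape applies to "(unmanaged)" and to clones per the design above.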

Comment 20 Markéta Smazová 2020-11-09 13:32:44 UTC
before fix
-----------

>   [root@virt-142 ~]# rpm -q pacemaker
>   pacemaker-2.0.4-6.el8.x86_64

When a resource group or clone set is disabled, `crm_mon` shows individual resources/clones as "disabled" rather than showing the group/clone set as disabled.

Some examples:

>   1. There are no resource defaults set and group and/or clone set is disabled directly.

>   [root@virt-142 ~]# pcs resource defaults
>   No defaults set
>   [root@virt-142 ~]# pcs resource disable dummy3-clone dummy-group
>   [root@virt-142 ~]# crm_mon --one-shot --show-detail --inactive
>   Cluster Summary:
>     * Stack: corosync
>     * Current DC: virt-153 (2) (version 2.0.4-6.el8-2deceaa3ae) - partition with quorum
>     * Last updated: Mon Nov  9 11:27:26 2020
>     * Last change:  Mon Nov  9 11:27:20 2020 by root via cibadmin on virt-142
>     * 2 nodes configured
>     * 6 resource instances configured (4 DISABLED)

>   Node List:
>     * Online: [ virt-142 (1) virt-153 (2) ]

>   Full List of Resources:
>     * fence-virt-142	(stonith:fence_xvm):	 Started virt-142
>     * fence-virt-153	(stonith:fence_xvm):	 Started virt-153
>     * Resource Group: dummy-group:
>       * dummy1	(ocf::pacemaker:Dummy):	 Stopped (disabled)
>       * dummy2	(ocf::pacemaker:Dummy):	 Stopped (disabled)
>     * Clone Set: dummy3-clone [dummy3]:
>       * dummy3	(ocf::pacemaker:Dummy):	 Stopped (disabled)
>       * dummy3	(ocf::pacemaker:Dummy):	 Stopped (disabled)

>   2. The `target-role=Stopped` option is set as default for resources.

>   [root@virt-142 ~]# pcs resource defaults
>   Meta Attrs: rsc_defaults-meta_attributes
>     target-role=Stopped
>   [root@virt-142 ~]# crm_mon --one-shot --show-detail --inactive
>   Cluster Summary:
>     * Stack: corosync
>     * Current DC: virt-153 (2) (version 2.0.4-6.el8-2deceaa3ae) - partition with quorum
>     * Last updated: Mon Nov  9 11:41:21 2020
>     * Last change:  Mon Nov  9 11:40:51 2020 by root via cibadmin on virt-142
>     * 2 nodes configured
>     * 6 resource instances configured (6 DISABLED)

>   Node List:
>     * Online: [ virt-142 (1) virt-153 (2) ]

>   Full List of Resources:
>     * fence-virt-142	(stonith:fence_xvm):	 Stopped (disabled)
>     * fence-virt-153	(stonith:fence_xvm):	 Stopped (disabled)
>     * Resource Group: dummy-group:
>       * dummy1	(ocf::pacemaker:Dummy):	 Stopped (disabled)
>       * dummy2	(ocf::pacemaker:Dummy):	 Stopped (disabled)
>     * Clone Set: dummy3-clone [dummy3]:
>       * dummy3	(ocf::pacemaker:Dummy):	 Stopped (disabled)
>       * dummy3	(ocf::pacemaker:Dummy):	 Stopped (disabled)

>   3. The `target-role=Stopped` option is set as default for resources (or for specific resource group) and 
>   resource "dummy1" has an explicit option `target-role=Started`.

>   [root@virt-142 ~]# pcs resource defaults
>   Meta Attrs: rsc_defaults-meta_attributes
>     target-role=Stopped
>   [root@virt-142 ~]# pcs resource meta dummy1 target-role=Started
>   [root@virt-142 ~]# pcs resource config dummy1 | grep Meta
>     Meta Attrs: target-role=Started
>   [root@virt-142 ~]# crm_mon --one-shot --show-detail --inactive
>   Cluster Summary:
>     * Stack: corosync
>     * Current DC: virt-153 (2) (version 2.0.4-6.el8-2deceaa3ae) - partition with quorum
>     * Last updated: Mon Nov  9 11:47:25 2020
>     * Last change:  Mon Nov  9 11:47:05 2020 by root via cibadmin on virt-142
>     * 2 nodes configured
>     * 4 resource instances configured (3 DISABLED)

>   Node List:
>     * Online: [ virt-142 (1) virt-153 (2) ]

>   Full List of Resources:
>     * fence-virt-142	(stonith:fence_xvm):	 Stopped (disabled)
>     * fence-virt-153	(stonith:fence_xvm):	 Stopped (disabled)
>     * Resource Group: dummy-group:
>       * dummy1	(ocf::pacemaker:Dummy):	 Started virt-142
>       * dummy2	(ocf::pacemaker:Dummy):	 Stopped (disabled)


In the cases below, the label "unmanaged" is displayed by the clone set name as well as next to the individual clones or group members.
The label "disabled" is displayed next to individual group members, but it is not displayed next to the group name.


>   4. The `target-role=Stopped` option is set as default for resources and the cluster is in "maintenance mode".

>   [root@virt-142 ~]# pcs resource defaults
>   Meta Attrs: rsc_defaults-meta_attributes
>     target-role=Stopped
>   [root@virt-142 ~]# pcs property set maintenance-mode=true
>   [root@virt-142 ~]# crm_mon --one-shot --show-detail --inactive
>   Cluster Summary:
>     * Stack: corosync
>     * Current DC: virt-153 (2) (version 2.0.4-6.el8-2deceaa3ae) - partition with quorum
>     * Last updated: Mon Nov  9 11:41:59 2020
>     * Last change:  Mon Nov  9 11:41:57 2020 by root via cibadmin on virt-142
>     * 2 nodes configured
>     * 6 resource instances configured (6 DISABLED)

>                 *** Resource management is DISABLED ***
>     The cluster will not attempt to start, stop or recover services

>   Node List:
>     * Online: [ virt-142 (1) virt-153 (2) ]

>   Full List of Resources:
>     * fence-virt-142	(stonith:fence_xvm):	 Stopped (disabled, unmanaged)
>     * fence-virt-153	(stonith:fence_xvm):	 Stopped (disabled, unmanaged)
>     * Resource Group: dummy-group:
>       * dummy1	(ocf::pacemaker:Dummy):	 Stopped (disabled, unmanaged)
>       * dummy2	(ocf::pacemaker:Dummy):	 Stopped (disabled, unmanaged)
>     * Clone Set: dummy3-clone [dummy3] (unmanaged):
>       * dummy3	(ocf::pacemaker:Dummy):	 Stopped (disabled, unmanaged)
>       * dummy3	(ocf::pacemaker:Dummy):	 Stopped (disabled, unmanaged)


>   5. The `is-managed=false` option is set as default for resources.

>   [root@virt-142 ~]# pcs resource defaults
>   Meta Attrs: rsc_defaults-meta_attributes
>     is-managed=false
>   [root@virt-142 ~]# crm_mon --one-shot --show-detail --inactive
>   Cluster Summary:
>     * Stack: corosync
>     * Current DC: virt-153 (2) (version 2.0.4-6.el8-2deceaa3ae) - partition with quorum
>     * Last updated: Mon Nov  9 11:36:43 2020
>     * Last change:  Mon Nov  9 11:36:25 2020 by root via cibadmin on virt-142
>     * 2 nodes configured
>     * 6 resource instances configured

>   Node List:
>     * Online: [ virt-142 (1) virt-153 (2) ]

>   Full List of Resources:
>     * fence-virt-142	(stonith:fence_xvm):	 Started virt-142 (unmanaged)
>     * fence-virt-153	(stonith:fence_xvm):	 Started virt-153 (unmanaged)
>     * Resource Group: dummy-group:
>       * dummy1	(ocf::pacemaker:Dummy):	 Started virt-142 (unmanaged)
>       * dummy2	(ocf::pacemaker:Dummy):	 Started virt-142 (unmanaged)
>     * Clone Set: dummy3-clone [dummy3] (unmanaged):
>       * dummy3	(ocf::pacemaker:Dummy):	 Started virt-153 (unmanaged)
>       * dummy3	(ocf::pacemaker:Dummy):	 Started virt-142 (unmanaged)


>   6. There are no resource defaults set and the cluster is in "maintenance mode".

>   [root@virt-142 ~]# pcs resource defaults
>   No defaults set
>   [root@virt-142 ~]# pcs property set maintenance-mode=true
>   [root@virt-142 ~]# crm_mon --one-shot --show-detail --inactive 
>   Cluster Summary:
>     * Stack: corosync
>     * Current DC: virt-153 (2) (version 2.0.4-6.el8-2deceaa3ae) - partition with quorum
>     * Last updated: Mon Nov  9 11:25:45 2020
>     * Last change:  Mon Nov  9 11:24:25 2020 by root via cibadmin on virt-142
>     * 2 nodes configured
>     * 6 resource instances configured

>                 *** Resource management is DISABLED ***
>     The cluster will not attempt to start, stop or recover services

>   Node List:
>     * Online: [ virt-142 (1) virt-153 (2) ]

>   Full List of Resources:
>     * fence-virt-142	(stonith:fence_xvm):	 Started virt-142 (unmanaged)
>     * fence-virt-153	(stonith:fence_xvm):	 Started virt-153 (unmanaged)
>     * Resource Group: dummy-group:
>       * dummy1	(ocf::pacemaker:Dummy):	 Started virt-153 (unmanaged)
>       * dummy2	(ocf::pacemaker:Dummy):	 Started virt-153 (unmanaged)
>     * Clone Set: dummy3-clone [dummy3] (unmanaged):
>       * dummy3	(ocf::pacemaker:Dummy):	 Started virt-153 (unmanaged)
>       * dummy3	(ocf::pacemaker:Dummy):	 Started virt-142 (unmanaged)




After the fix
-------------

>   [root@virt-245 ~]# rpm -q pacemaker
>   pacemaker-2.0.5-2.el8.x86_64


Some examples where "disabled"/"unmanaged" is now displayed next to group and/or clone set names as well as next to
individual group members and/or clone instances:

>   1. There are no resource defaults set and group and/or clone set is disabled directly.

>   [root@virt-126 ~]# pcs resource defaults
>   No defaults set
>   [root@virt-126 ~]# pcs resource disable dummy1-clone dummy-group
>   [root@virt-126 ~]# crm_mon --one-shot --show-detail --inactive
>   Cluster Summary:
>     * Stack: corosync
>     * Current DC: virt-128 (2) (version 2.0.5-2.el8-31aa4f5515) - partition with quorum
>     * Last updated: Fri Nov  6 16:37:16 2020
>     * Last change:  Fri Nov  6 16:36:38 2020 by root via cibadmin on virt-126
>     * 2 nodes configured
>     * 6 resource instances configured (4 DISABLED)

>   Node List:
>     * Online: [ virt-126 (1) virt-128 (2) ]

>   Full List of Resources:
>     * fence-virt-126	(stonith:fence_xvm):	 Started virt-126
>     * fence-virt-128	(stonith:fence_xvm):	 Started virt-128
>     * Clone Set: dummy1-clone [dummy1] (disabled):
>       * dummy1	(ocf::pacemaker:Dummy):	 Stopped (disabled)
>       * dummy1	(ocf::pacemaker:Dummy):	 Stopped (disabled)
>     * Resource Group: dummy-group (disabled):
>       * dummy2	(ocf::pacemaker:Dummy):	 Stopped (disabled)
>       * dummy3	(ocf::pacemaker:Dummy):	 Stopped (disabled)
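
For monitoring purposes, the "(N DISABLED)" count in the Cluster Summary can be pulled out with standard text tools. A minimal sketch, using a summary line copied from the output above (the sed expression is illustrative, not a pacemaker interface):

```shell
#!/bin/sh
# Sample Cluster Summary line as printed by `crm_mon --one-shot`; a real
# check would capture live crm_mon output instead of this literal.
summary='* 6 resource instances configured (4 DISABLED)'

# Extract the number inside "(N DISABLED)"; prints nothing if absent.
disabled_count=$(printf '%s\n' "$summary" \
    | sed -n 's/.*(\([0-9][0-9]*\) DISABLED).*/\1/p')

echo "disabled instances: ${disabled_count:-0}"
```

Newer pacemaker releases also support `crm_mon --output-as=xml`, which is more robust for scripting than matching the human-readable text.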


>   2. The `target-role=Stopped` option is set as default for resources.

>   [root@virt-126 ~]# pcs resource defaults
>   Meta Attrs: rsc_defaults-meta_attributes
>     target-role=Stopped
>   [root@virt-126 ~]# crm_mon --one-shot --show-detail --inactive
>   Cluster Summary:
>     * Stack: corosync
>     * Current DC: virt-128 (2) (version 2.0.5-2.el8-31aa4f5515) - partition with quorum
>     * Last updated: Fri Nov  6 13:51:15 2020
>     * Last change:  Fri Nov  6 13:50:24 2020 by root via cibadmin on virt-126
>     * 2 nodes configured
>     * 6 resource instances configured (6 DISABLED)

>   Node List:
>     * Online: [ virt-126 (1) virt-128 (2) ]

>   Full List of Resources:
>     * fence-virt-126	(stonith:fence_xvm):	 Stopped (disabled)
>     * fence-virt-128	(stonith:fence_xvm):	 Stopped (disabled)
>     * Clone Set: dummy1-clone [dummy1] (disabled):
>       * dummy1	(ocf::pacemaker:Dummy):	 Stopped (disabled)
>       * dummy1	(ocf::pacemaker:Dummy):	 Stopped (disabled)
>     * Resource Group: dummy-group (disabled):
>       * dummy2	(ocf::pacemaker:Dummy):	 Stopped (disabled)
>       * dummy3	(ocf::pacemaker:Dummy):	 Stopped (disabled)


>   3. The `target-role=Stopped` option is set as default for resources (or for specific resource group) and 
>   resource "dummy1" has an explicit option `target-role=Started`.

>   [root@virt-245 ~]# pcs resource defaults
>   Meta Attrs: rsc_defaults-meta_attributes
>     target-role=Stopped
>   [root@virt-245 ~]# pcs resource meta dummy1 target-role=Started
>   [root@virt-245 ~]# pcs resource config dummy1 | grep Meta
>     Meta Attrs: target-role=Started
>   [root@virt-245 ~]# crm_mon --one-shot --show-detail --inactive
>   Cluster Summary:
>     * Stack: corosync
>     * Current DC: virt-246 (2) (version 2.0.5-2.el8-31aa4f5515) - partition with quorum
>     * Last updated: Fri Nov  6 16:19:29 2020
>     * Last change:  Fri Nov  6 16:19:18 2020 by root via cibadmin on virt-245
>     * 2 nodes configured
>     * 4 resource instances configured (3 DISABLED)

>   Node List:
>     * Online: [ virt-245 (1) virt-246 (2) ]

>   Full List of Resources:
>     * fence-virt-245	(stonith:fence_xvm):	 Stopped (disabled)
>     * fence-virt-246	(stonith:fence_xvm):	 Stopped (disabled)
>     * Resource Group: dummy-group (disabled):
>       * dummy1	(ocf::pacemaker:Dummy):	 Started virt-245
>       * dummy2	(ocf::pacemaker:Dummy):	 Stopped (disabled)


>   4. The `target-role=Stopped` option is set as default for resources and the cluster is in "maintenance mode".

>   [root@virt-126 ~]# pcs resource defaults
>   Meta Attrs: rsc_defaults-meta_attributes
>     target-role=Stopped
>   [root@virt-126 ~]# pcs property set maintenance-mode=true
>   [root@virt-126 ~]# crm_mon --one-shot --show-detail --inactive
>   Cluster Summary:
>     * Stack: corosync
>     * Current DC: virt-128 (2) (version 2.0.5-2.el8-31aa4f5515) - partition with quorum
>     * Last updated: Fri Nov  6 13:53:10 2020
>     * Last change:  Fri Nov  6 13:53:08 2020 by root via cibadmin on virt-126
>     * 2 nodes configured
>     * 6 resource instances configured (6 DISABLED)

>                 *** Resource management is DISABLED ***
>     The cluster will not attempt to start, stop or recover services

>   Node List:
>     * Online: [ virt-126 (1) virt-128 (2) ]

>   Full List of Resources:
>     * fence-virt-126	(stonith:fence_xvm):	 Stopped (disabled, unmanaged)
>     * fence-virt-128	(stonith:fence_xvm):	 Stopped (disabled, unmanaged)
>     * Clone Set: dummy1-clone [dummy1] (unmanaged) (disabled):
>       * dummy1	(ocf::pacemaker:Dummy):	 Stopped (disabled, unmanaged)
>       * dummy1	(ocf::pacemaker:Dummy):	 Stopped (disabled, unmanaged)
>     * Resource Group: dummy-group (unmanaged) (disabled):
>       * dummy2	(ocf::pacemaker:Dummy):	 Stopped (disabled, unmanaged)
>       * dummy3	(ocf::pacemaker:Dummy):	 Stopped (disabled, unmanaged)


>   5. The `is-managed=false` option is set as default for resources.

>   [root@virt-126 ~]# pcs resource defaults
>   Meta Attrs: rsc_defaults-meta_attributes
>     is-managed=false
>   [root@virt-126 ~]# crm_mon --one-shot --show-detail --inactive
>   Cluster Summary:
>     * Stack: corosync
>     * Current DC: virt-128 (2) (version 2.0.5-2.el8-31aa4f5515) - partition with quorum
>     * Last updated: Fri Nov  6 17:55:27 2020
>     * Last change:  Fri Nov  6 17:55:24 2020 by root via cibadmin on virt-126
>     * 2 nodes configured
>     * 6 resource instances configured

>   Node List:
>     * Online: [ virt-126 (1) virt-128 (2) ]

>   Full List of Resources:
>     * fence-virt-126	(stonith:fence_xvm):	 Started virt-126 (unmanaged)
>     * fence-virt-128	(stonith:fence_xvm):	 Started virt-128 (unmanaged)
>     * Clone Set: dummy3-clone [dummy3] (unmanaged):
>       * dummy3	(ocf::pacemaker:Dummy):	 Started virt-126 (unmanaged)
>       * dummy3	(ocf::pacemaker:Dummy):	 Started virt-128 (unmanaged)
>     * Resource Group: dummy-group (unmanaged):
>       * dummy1	(ocf::pacemaker:Dummy):	 Started virt-126 (unmanaged)
>       * dummy2	(ocf::pacemaker:Dummy):	 Started virt-126 (unmanaged)


>   6. There are no resource defaults set and the cluster is in "maintenance mode".

>   [root@virt-126 ~]# pcs resource defaults
>   No defaults set
>   [root@virt-126 ~]# pcs property set maintenance-mode=true
>   [root@virt-126 ~]# crm_mon --one-shot --inactive
>   Cluster Summary:
>     * Stack: corosync
>     * Current DC: virt-128 (version 2.0.5-2.el8-31aa4f5515) - partition with quorum
>     * Last updated: Fri Nov  6 17:06:12 2020
>     * Last change:  Fri Nov  6 17:06:08 2020 by root via cibadmin on virt-126
>     * 2 nodes configured
>     * 6 resource instances configured

>                 *** Resource management is DISABLED ***
>     The cluster will not attempt to start, stop or recover services

>   Node List:
>     * Online: [ virt-126 virt-128 ]

>   Full List of Resources:
>     * fence-virt-126	(stonith:fence_xvm):	 Started virt-126 (unmanaged)
>     * fence-virt-128	(stonith:fence_xvm):	 Started virt-128 (unmanaged)
>     * Clone Set: dummy1-clone [dummy1] (unmanaged):
>       * dummy1	(ocf::pacemaker:Dummy):	 Started virt-126 (unmanaged)
>       * dummy1	(ocf::pacemaker:Dummy):	 Started virt-128 (unmanaged)
>     * Resource Group: dummy-group (unmanaged):
>       * dummy2	(ocf::pacemaker:Dummy):	 Started virt-126 (unmanaged)
>       * dummy3	(ocf::pacemaker:Dummy):	 Started virt-126 (unmanaged)
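
If a script needs to act on the new group-level annotation shown above, it can match the marker directly in the text output. A minimal sketch, with sample text pasted from the transcript (the helper name is illustrative, not a pacemaker API):

```shell
#!/bin/sh
# Sample crm_mon text output; a real check would capture
# `crm_mon --one-shot` output instead of this literal.
sample='Full List of Resources:
  * Clone Set: dummy1-clone [dummy1] (disabled):
  * Resource Group: dummy-group (disabled):
    * dummy2 (ocf::pacemaker:Dummy): Stopped (disabled)'

# Hypothetical helper: succeeds if the named group is disabled as a whole,
# i.e. the group header itself carries "(disabled)".
group_is_disabled() {
    printf '%s\n' "$1" | grep -q "Resource Group: $2 (disabled)"
}

if group_is_disabled "$sample" dummy-group; then
    echo "dummy-group is disabled as a whole"
fi
```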


Some examples where "disabled"/"unmanaged" is displayed only next to individual group members or clone instances:

>   1. When individual group members or clone instances are directly disabled.

>   [root@virt-126 ~]# pcs resource disable dummy2
>   [root@virt-126 ~]# crm_mon --one-shot --show-detail --inactive
>   Cluster Summary:
>     * Stack: corosync
>     * Current DC: virt-128 (2) (version 2.0.5-2.el8-31aa4f5515) - partition with quorum
>     * Last updated: Fri Nov  6 14:47:37 2020
>     * Last change:  Fri Nov  6 14:47:29 2020 by root via cibadmin on virt-126
>     * 2 nodes configured
>     * 4 resource instances configured (1 DISABLED)

>   Node List:
>     * Online: [ virt-126 (1) virt-128 (2) ]

>   Full List of Resources:
>     * fence-virt-126	(stonith:fence_xvm):	 Started virt-126
>     * fence-virt-128	(stonith:fence_xvm):	 Started virt-128
>     * Resource Group: dummy-group:
>       * dummy2	(ocf::pacemaker:Dummy):	 Stopped (disabled)
>       * dummy3	(ocf::pacemaker:Dummy):	 Stopped


>   2. When all group members or clone instances are individually disabled or unmanaged, but the group/clone set itself is not.

>   [root@virt-126 ~]# pcs resource disable dummy2 dummy3
>   [root@virt-126 ~]# pcs resource unmanage dummy1
>   [root@virt-126 ~]# crm_mon --one-shot --show-detail --inactive
>   Cluster Summary:
>     * Stack: corosync
>     * Current DC: virt-128 (2) (version 2.0.5-2.el8-31aa4f5515) - partition with quorum
>     * Last updated: Fri Nov  6 14:47:37 2020
>     * Last change:  Fri Nov  6 14:47:29 2020 by root via cibadmin on virt-126
>     * 2 nodes configured
>     * 6 resource instances configured (2 DISABLED)

>   Node List:
>     * Online: [ virt-126 (1) virt-128 (2) ]

>   Full List of Resources:
>     * fence-virt-126	(stonith:fence_xvm):	 Started virt-126
>     * fence-virt-128	(stonith:fence_xvm):	 Started virt-128
>     * Clone Set: dummy1-clone [dummy1]:
>       * dummy1	(ocf::pacemaker:Dummy):	 Started virt-126 (unmanaged)
>       * dummy1	(ocf::pacemaker:Dummy):	 Started virt-128 (unmanaged)
>     * Resource Group: dummy-group:
>       * dummy2	(ocf::pacemaker:Dummy):	 Stopped (disabled)
>       * dummy3	(ocf::pacemaker:Dummy):	 Stopped (disabled) 


>   3. When `pcs resource unmanage` is used on a whole group or clone set. In this case, pcs configures each member of the
>   group or clone individually instead of configuring the group or clone set as a whole.

>   [root@virt-126 ~]# pcs resource unmanage dummy1-clone dummy-group
>   [root@virt-126 ~]# crm_mon --one-shot --show-detail --inactive
>   Cluster Summary:
>     * Stack: corosync
>     * Current DC: virt-128 (2) (version 2.0.5-2.el8-31aa4f5515) - partition with quorum
>     * Last updated: Fri Nov  6 16:39:54 2020
>     * Last change:  Fri Nov  6 16:39:33 2020 by root via cibadmin on virt-126
>     * 2 nodes configured
>     * 6 resource instances configured

>   Node List:
>     * Online: [ virt-126 (1) virt-128 (2) ]

>   Full List of Resources:
>     * fence-virt-126	(stonith:fence_xvm):	 Started virt-126
>     * fence-virt-128	(stonith:fence_xvm):	 Started virt-128
>     * Clone Set: dummy1-clone [dummy1]:
>       * dummy1	(ocf::pacemaker:Dummy):	 Started virt-126 (unmanaged)
>       * dummy1	(ocf::pacemaker:Dummy):	 Started virt-128 (unmanaged)
>     * Resource Group: dummy-group:
>       * dummy2	(ocf::pacemaker:Dummy):	 Started virt-126 (unmanaged)
>       * dummy3	(ocf::pacemaker:Dummy):	 Started virt-126 (unmanaged)
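
Because `pcs resource unmanage` marks the members rather than the group, only the member lines carry "(unmanaged)" while the group header stays unmarked. A minimal sketch of detecting that state, with sample text based on the transcript (the awk logic and helper name are illustrative only):

```shell
#!/bin/sh
# Sample crm_mon text output: members unmanaged, group header unmarked.
sample='Full List of Resources:
  * Resource Group: dummy-group:
    * dummy2 (ocf::pacemaker:Dummy): Started virt-126 (unmanaged)
    * dummy3 (ocf::pacemaker:Dummy): Started virt-126 (unmanaged)'

# Hypothetical helper: succeeds if every member line under the named group
# carries "(unmanaged)".
all_members_unmanaged() {
    printf '%s\n' "$1" | awk -v g="$2" '
        $0 ~ "Resource Group: " g { ingrp = 1; next }
        ingrp && /^    \* /      { total++
                                   if (/\(unmanaged\)/) unmanaged++
                                   next }
        ingrp                    { ingrp = 0 }
        END { exit !(total > 0 && total == unmanaged) }'
}

if all_members_unmanaged "$sample" dummy-group; then
    echo "every member of dummy-group is unmanaged"
fi
```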


Additionally, `crm_mon` now shows the banner "The cluster will keep all resources stopped" when the "stop-all-resources"
cluster property is set to true.

>   [root@virt-126 ~]# pcs property set stop-all-resources=true
>   [root@virt-126 ~]# crm_mon --one-shot --show-detail --inactive
>   Cluster Summary:
>     * Stack: corosync
>     * Current DC: virt-128 (2) (version 2.0.5-2.el8-31aa4f5515) - partition with quorum
>     * Last updated: Fri Nov  6 16:18:11 2020
>     * Last change:  Fri Nov  6 16:17:03 2020 by root via cibadmin on virt-126
>     * 2 nodes configured
>     * 6 resource instances configured

>       *** Resource management is DISABLED ***
>     The cluster will keep all resources stopped

>   Node List:
>     * Online: [ virt-126 (1) virt-128 (2) ]

>   Full List of Resources:
>     * fence-virt-126	(stonith:fence_xvm):	 Stopped
>     * fence-virt-128	(stonith:fence_xvm):	 Stopped
>     * Clone Set: dummy1-clone [dummy1]:
>       * dummy1	(ocf::pacemaker:Dummy):	 Stopped
>       * dummy1	(ocf::pacemaker:Dummy):	 Stopped
>     * Resource Group: dummy-group:
>       * dummy2	(ocf::pacemaker:Dummy):	 Stopped
>       * dummy3	(ocf::pacemaker:Dummy):	 Stopped
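
A script can key off this new banner text rather than querying the "stop-all-resources" property itself. A minimal sketch, with the banner copied from the output above (the helper name is illustrative):

```shell
#!/bin/sh
# Sample banner lines as printed by `crm_mon --one-shot`; a real check
# would capture live crm_mon output instead of this literal.
sample='      *** Resource management is DISABLED ***
    The cluster will keep all resources stopped'

# Hypothetical helper: succeeds if the stop-all-resources banner is present.
stop_all_banner_shown() {
    printf '%s\n' "$1" | grep -q 'The cluster will keep all resources stopped'
}

if stop_all_banner_shown "$sample"; then
    echo "stop-all-resources banner present"
fi
```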


Marking verified in pacemaker-2.0.5-2.el8.

Comment 22 errata-xmlrpc 2021-05-18 15:26:41 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (pacemaker bug fix and enhancement update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2021:1782

