Bug 1686426 - Add option to crm_simulate to display additional info about cluster status, like node attributes
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 8
Classification: Red Hat
Component: pacemaker
Version: 8.0
Hardware: Unspecified
OS: Linux
Priority: low
Severity: low
Target Milestone: pre-dev-freeze
Target Release: 8.5
Assignee: Chris Lumens
QA Contact: cluster-qe@redhat.com
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2019-03-07 12:53 UTC by Frank Danapfel
Modified: 2024-03-06 16:06 UTC (History)
CC List: 7 users

Fixed In Version: pacemaker-2.1.0-3.el8
Doc Type: Enhancement
Doc Text:
(It is questionable whether we need to document this, since there is no pcs interface.) Feature: Pacemaker's crm_simulate command-line tool now accepts a --show-attrs option to display node attributes in simulation output, and a --show-failcounts option to display resource fail counts. Reason: Node attribute and resource fail count information was previously available by running crm_mon separately using a CIB_file environment variable, but that was inconvenient. Result: Users can easily display additional information that factors into simulation results.
Clone Of:
Environment:
Last Closed: 2021-11-09 18:44:49 UTC
Type: Feature Request
Target Upstream Version:
Embargoed:


Attachments


Links:
Red Hat Product Errata RHEA-2021:4267 (last updated 2021-11-09 18:45:25 UTC)

Description Frank Danapfel 2019-03-07 12:53:22 UTC
Description of problem:
When using crm_simulate to debug cluster issues with the 'pe-input' files generated by pacemaker (as documented in http://blog.clusterlabs.org/blog/2013/debugging-pengine), the output only contains a basic view of the cluster status.

Since some resource agents (like the ones used in the Red Hat HA solution for SAP HANA System Replication) use additional features like node attributes to store status information about the resources they manage, it would be helpful if this kind of information could also be displayed by crm_simulate when viewing the cluster status from 'pe-input' files.

Version-Release number of selected component (if applicable):
pacemaker-1.1.20-1.el7

How reproducible:
always

Steps to Reproduce:
1. run 'crm_simulate -x pe-input-0.bz2'

Actual results:
$ crm_simulate -x pe-input-0.bz2 

Current cluster status:
Online: [ sapha013hb0 sapha014hb0 ]

 ipmi_sapha013hb0       (stonith:fence_ipmilan):        Started sapha014hb0
 ipmi_sapha014hb0       (stonith:fence_ipmilan):        Started sapha013hb0
 Clone Set: TopologyP22-clone [TopologyP22]
     Started: [ sapha013hb0 sapha014hb0 ]
 Master/Slave Set: MasterSlaveP22 [SAPHanaP22]
     Masters: [ sapha013hb0 ]
     Stopped: [ sapha014hb0 ]
 Resource Group: ipP22
     pkgp224d   (ocf::heartbeat:IPaddr2):       Started sapha013hb0
     pkgp224b   (ocf::heartbeat:IPaddr2):       Started sapha013hb0

Expected results:
Output similar to the following:

Current cluster status:
Online: [ sapha013hb0 sapha014hb0 ]

 ipmi_sapha013hb0       (stonith:fence_ipmilan):        Started sapha014hb0
 ipmi_sapha014hb0       (stonith:fence_ipmilan):        Started sapha013hb0
 Clone Set: TopologyP22-clone [TopologyP22]
     Started: [ sapha013hb0 sapha014hb0 ]
 Master/Slave Set: MasterSlaveP22 [SAPHanaP22]
     Masters: [ sapha013hb0 ]
     Stopped: [ sapha014hb0 ]
 Resource Group: ipP22
     pkgp224d   (ocf::heartbeat:IPaddr2):       Started sapha013hb0
     pkgp224b   (ocf::heartbeat:IPaddr2):       Started sapha013hb0
Node Attributes:
* Node sapha013hb0:
    + hana_p22_clone_state              : PROMOTED  
    + hana_p22_op_mode                  : logreplay 
    + hana_p22_remoteHost               : sapha014  
    + hana_p22_roles                    : 4:P:master1:master:worker:master
    + hana_p22_site                     : NBG       
    + hana_p22_srmode                   : syncmem   
    + hana_p22_sync_state               : PRIM      
    + hana_p22_vhost                    : sapha013  
    + lpa_p22_lpt                       : 1551450250
    + master-SAPHanaP22                 : 150       
* Node sapha014hb0:
    + hana_p22_clone_state              : UNDEFINED 
    + hana_p22_op_mode                  : logreplay 
    + hana_p22_remoteHost               : sapha013  
    + hana_p22_roles                    : 4:S:master1:master:worker:master
    + hana_p22_site                     : FTH       
    + hana_p22_srmode                   : syncmem   
    + hana_p22_sync_state               : SOK       
    + hana_p22_vhost                    : sapha014  
    + lpa_p22_lpt                       : 30        
    + master-SAPHanaP22                 : -9000     
Migration Summary:
* Node sapha014hb0:
   SAPHanaP22: migration-threshold=5000 fail-count=1000000 last-failure='Fri Mar  1 02:05:05 2019'
* Node sapha013hb0:

Failed Actions:
* SAPHanaP22_start_0 on sapha014hb0 'not running' (7): call=55, status=complete, exitreason='none',
    last-rc-change='Fri Mar  1 02:04:59 2019', queued=0ms, exec=5687ms


Additional info:
On a live cluster it is already possible to view this information by running 'crm_mon -1Arf' or 'pcs status --full'.
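
For illustration only (the file name is an example): the pre-existing workaround is to point crm_mon at a saved policy-engine input, since pe-input files are bzip2-compressed CIB snapshots.

# example workaround only, not the requested feature:
# decompress the pe-input snapshot, then read it via the CIB_file variable
$ bunzip2 -k pe-input-0.bz2
$ CIB_file=pe-input-0 crm_mon -1Arf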

Comment 2 Ken Gaillot 2019-03-07 14:58:16 UTC
Moving to RHEL 8 because RHEL 7.7 is the last RHEL 7 feature release.

Related with respect to the user interface, Bug 1330774 covers adding a pcs interface to crm_simulate.

Comment 3 Frank Danapfel 2019-08-28 08:37:14 UTC
(In reply to Ken Gaillot from comment #2)
> Moving to RHEL 8 because RHEL 7.7 is the last RHEL 7 feature release.
> 
> Related with respect to the user interface, Bug 1330774 covers adding a pcs
> interface to crm_simulate.

I don't think these two bugs are really related. The feature requested in Bug 1330774 seems to be to allow predictions about how the cluster will behave when certain pcs commands are called, without actually performing the resulting actions (sort of doing a 'dry-run'). 

Whereas the intention of this bug is more to improve the capabilities to analyse cluster events that happened in the past (for example to do root cause analysis in support cases).

Comment 4 Ken Gaillot 2019-08-28 14:40:04 UTC
I just realized that you may not be aware you can do:

    pcs -f <cib-file> status --full

to get extended status info from a file. Is that sufficient for what you're interested in, or is being able to incorporate that info in a simulation the main goal?
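
For example (hypothetical file name; a bzip2-compressed pe-input snapshot would presumably need to be decompressed to plain CIB XML first):

# example only
$ bunzip2 -k pe-input-0.bz2
$ pcs -f pe-input-0 status --full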

Comment 5 Frank Danapfel 2019-08-28 15:41:34 UTC
I'm aware of using the '-f' option with 'pcs status full', but the main goal is to be able to incorporate the additional information into a simulation.

Comment 6 Frank Danapfel 2019-11-29 11:05:54 UTC
Ken, any chance that this might still get fixed in RHEL8.2?

Comment 8 Ken Gaillot 2019-12-03 23:01:30 UTC
Definitely not 8.2. There's a good chance for 8.3, but higher-priority items might intervene.

Comment 11 Chris Lumens 2020-05-19 18:35:49 UTC
From the initial comment, it looks like node attrs, migration summary, and failed actions are what's being asked for here.  I could potentially add anything that crm_mon can output (though I haven't double checked that), though adding everything would be a lot of work that I won't be able to finish for 8.3.  If those are the only things, I'll get to adding command line options for them.  Is there anything else, though?

Comment 12 Frank Danapfel 2020-05-20 10:02:58 UTC
(In reply to Chris Lumens from comment #11)
> From the initial comment, it looks like node attrs, migration summary, and
> failed actions are what's being asked for here.  I could potentially add
> anything that crm_mon can output (though I haven't double checked that),
> though adding everything would be a lot of work that I won't be able to
> finish for 8.3.  If those are the only things, I'll get to adding command
> line options for them.  Is there anything else, though?

As mentioned in my initial description for the bug I'd like to see the same information in crm_simulate output that is also provided by 'crm_mon -1Arf' on a live cluster.

Comment 13 Ken Gaillot 2020-05-20 16:50:43 UTC
(In reply to Frank Danapfel from comment #12)
> (In reply to Chris Lumens from comment #11)
> > From the initial comment, it looks like node attrs, migration summary, and
> > failed actions are what's being asked for here.  I could potentially add
> > anything that crm_mon can output (though I haven't double checked that),
> > though adding everything would be a lot of work that I won't be able to
> > finish for 8.3.  If those are the only things, I'll get to adding command
> > line options for them.  Is there anything else, though?
> 
> As mentioned in my initial description for the bug I'd like to see the same
> information in crm_simulate output that is also provided by 'crm_mon -1Arf'
> on a live cluster.

Some possibilities for the user interface:

* We could reuse the existing --verbose/-V option for this. Multiple -V's currently enable debug logging, but a single -V is currently used only to change the action labels on the dot graph if --save-dotfile/-D is used. I don't think combining those features would bother anyone.

* We could add a single new "extended cluster information" option.

* We could borrow the --include/--exclude idea from crm_mon, encompassing this, --show-scores, and --show-utilization (overkill unless existing crm_mon code can be reused without much extra effort).

Comment 21 Libor Miksik 2021-01-18 16:34:56 UTC
Due to a typo in the date (2020 vs 2021) in the BRE rule "RHEL SySc Dev ITM-to-Deadline (8.5)", the ITR strip was incorrectly run.
Resetting the BZ values back.

Comment 24 Chris Lumens 2021-03-23 20:27:36 UTC
I'm getting close to being able to implement this, so it's worth spending some time thinking about the interface.

> * We could reuse the existing --verbose/-V option for this. Multiple -V's
> currently enable debug logging, but a single -V is currently used only to
> change the action labels on the dot graph if --save-dotfile/-D is used. I
> don't think combining those features would bother anyone.
> 
> * We could add a single new "extended cluster information" option.
>
> * We could borrow the --include/--exclude idea from crm_mon, encompassing
> this, --show-scores, and --show-utilization (overkill unless existing
> crm_mon code can be reused without much extra effort).

I think it might be possible to make the include/exclude code from crm_mon more generic so it could be shared among all command line tools.  The most difficult stuff appears to be using mon_output_format_t for figuring out the default includes and the ban-related stuff in apply_include.  But, maybe this could be handled by making the sections type more like how glib command line stuff does - flags for whether the action is a function call or setting a value, etc.  That might be vastly overthinking it, but it would also be the most flexible approach and would likely be more useful elsewhere later.
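
For context, a rough sketch of crm_mon's existing section-selection interface, which this would generalize (illustration only; section names as accepted by crm_mon's --include option):

# one-shot status with the default sections plus node attributes and fail counts
$ crm_mon -1 --include=attributes,failcounts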

On the other hand, if we think people aren't going to want more than this extra crm_simulate information, we could get away with introducing a new option.  I just don't want to add an option we have to support for years, and then later get more requests for controlling the output elsewhere.

I don't especially like just reusing -V.

Comment 25 Ken Gaillot 2021-03-29 21:13:24 UTC
(In reply to Chris Lumens from comment #24)
> I'm getting close to being able to implement this, so it's worth spending
> some time thinking about the interface.
> 
> > * We could reuse the existing --verbose/-V option for this. Multiple -V's
> > currently enable debug logging, but a single -V is currently used only to
> > change the action labels on the dot graph if --save-dotfile/-D is used. I
> > don't think combining those features would bother anyone.
> > 
> > * We could add a single new "extended cluster information" option.
> >
> > * We could borrow the --include/--exclude idea from crm_mon, encompassing
> > this, --show-scores, and --show-utilization (overkill unless existing
> > crm_mon code can be reused without much extra effort).
> 
> I think it might be possible to make the include/exclude code from crm_mon
> more generic so it could be shared among all command line tools.  The most
> difficult stuff appears to be using mon_output_format_t for figuring out the
> default includes and the ban-related stuff in apply_include.  But, maybe
> this could be handled by making the sections type more like how glib command
> line stuff does - flags for whether the action is a function call or setting
> a value, etc.  That might be vastly overthinking it, but it would also be
> the most flexible approach and would likely be more useful elsewhere later.

I don't see any other tools that would really benefit from it, though a few could be shoehorned into that model. I think it would just be crm_mon and crm_simulate.

> On the other hand, if we think people aren't going to want more than this
> extra crm_simulate information, we could get away with introducing a new
> option.  I just don't want to add an option we have to support for years,
> and then later get more requests for controlling the output elsewhere.

Good question.

It would only make sense to show things that might affect the simulation, so I would think that dc, stack, times, summary, failures, fencing history, and operations would never be needed. The current display is effectively nodes and resources (including inactive). That leaves attributes, bans, fail counts, options, and tickets as maybes. Most cluster options can affect the simulation, so I could even imagine users wanting to see more options than "options" currently shows in crm_mon, but we don't have to go that far.

We already have --show-utilization and --show-scores that are in line with the idea. We could just add --show-attributes and --show-failcounts for what's requested here, and if more is desired in the future we just add more --show-* options.

Or we borrow --include with just nodes,resources,utilization,scores,attributes,failcounts for now and add more later if desired. Either way we can expand pretty easily.
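
As a rough sketch of how the two proposals above would look to a user (the option spellings below only illustrate the proposals and are not final):

# dedicated --show-* options
$ crm_simulate -x pe-input-0.bz2 --show-attributes --show-failcounts
# crm_mon-style section selection
$ crm_simulate -x pe-input-0.bz2 --include=nodes,resources,attributes,failcounts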

I'm fine with either approach.

I just noticed that crm_mon -A doesn't show utilization attributes, so there's no way to show those there currently. (Scores doesn't make sense for crm_mon.)

> I don't especially like just reusing -V.

That makes sense since different users might want different combinations of output.

Comment 26 Chris Lumens 2021-04-06 17:08:29 UTC
Fix merged upstream - https://github.com/ClusterLabs/pacemaker/pull/2335

Comment 31 Markéta Smazová 2021-06-16 14:14:16 UTC
Pacemaker's `crm_simulate` command-line tool now accepts a `--show-attrs` option to display
node attributes in simulation output, and `--show-failcounts` to display resource fail counts.

>   [root@virt-539 ~]# rpm -q pacemaker
>   pacemaker-2.1.0-2.el8.x86_64

Check new options in man/help.

>   [root@virt-539 ~]# man crm_simulate | grep show-attrs -A4
>          -A, --show-attrs
>                 Show node attributes

>          -c, --show-failcounts
>                 Show resource fail counts


>   [root@virt-539 ~]# crm_simulate --help-operations | grep show-attrs -A1
>     -A, --show-attrs                 Show node attributes
>     -c, --show-failcounts            Show resource fail counts


Have a cluster with resources and attributes:

>   [root@virt-539 ~]# pcs status --full
>   Cluster name: STSRHTS20356
>   Cluster Summary:
>     * Stack: corosync
>     * Current DC: virt-548 (2) (version 2.1.0-2.el8-7c3f660707) - partition with quorum
>     * Last updated: Wed Jun 16 13:57:46 2021
>     * Last change:  Wed Jun 16 13:57:36 2021 by root via cibadmin on virt-539
>     * 2 nodes configured
>     * 6 resource instances configured

>   Node List:
>     * Online: [ virt-539 (1) virt-548 (2) ]

>   Full List of Resources:
>     * fence-virt-539	(stonith:fence_xvm):	 Started virt-539
>     * fence-virt-548	(stonith:fence_xvm):	 Started virt-548
>     * Resource Group: dummy-group:
>       * dummy1	(ocf::pacemaker:Dummy):	 Started virt-548
>       * dummy2	(ocf::pacemaker:Dummy):	 Started virt-548
>     * Clone Set: dummy-clone [dummy]:
>       * dummy	(ocf::pacemaker:Dummy):	 Started virt-539
>       * dummy	(ocf::pacemaker:Dummy):	 Started virt-548

>   Node Attributes:
>     * Node: virt-539 (1):
>       * location                        	: office    
>       * order                           	: primary   
>       * shortname                       	: node1     
>     * Node: virt-548 (2):
>       * location                        	: office    
>       * order                           	: secondary 
>       * shortname                       	: node2     

>   Migration Summary:

>   Tickets:

>   PCSD Status:
>     virt-539: Online
>     virt-548: Online

>   Daemon Status:
>     corosync: active/disabled
>     pacemaker: active/disabled
>     pcsd: active/enabled

Fail resource and check cluster status:

>   [root@virt-539 ~]# crm_resource --fail --resource dummy1 --node virt-548

>   [root@virt-539 ~]# crm_mon  -1Arf
>   Cluster Summary:
>     * Stack: corosync
>     * Current DC: virt-548 (version 2.1.0-2.el8-7c3f660707) - partition with quorum
>     * Last updated: Wed Jun 16 14:23:24 2021
>     * Last change:  Wed Jun 16 13:57:36 2021 by root via cibadmin on virt-539
>     * 2 nodes configured
>     * 6 resource instances configured

>   Node List:
>     * Online: [ virt-539 virt-548 ]

>   Full List of Resources:
>     * fence-virt-539	(stonith:fence_xvm):	 Started virt-539
>     * fence-virt-548	(stonith:fence_xvm):	 Started virt-548
>     * Resource Group: dummy-group:
>       * dummy1	(ocf::pacemaker:Dummy):	 Started virt-548
>       * dummy2	(ocf::pacemaker:Dummy):	 Started virt-548
>     * Clone Set: dummy-clone [dummy]:
>       * Started: [ virt-539 virt-548 ]

>   Node Attributes:
>     * Node: virt-539:
>       * location                        	: office    
>       * order                           	: primary   
>       * shortname                       	: node1     
>     * Node: virt-548:
>       * location                        	: office    
>       * order                           	: secondary 
>       * shortname                       	: node2     

>   Migration Summary:
>     * Node: virt-548:
>       * dummy1: migration-threshold=1000000 fail-count=1 last-failure='Wed Jun 16 14:23:09 2021'

>   Failed Resource Actions:
>     * dummy1_asyncmon_0 on virt-548 'error' (1): call=27, status='complete', exitreason='Simulated failure', last-rc-change='2021-06-16 14:23:09 +02:00', queued=0ms, exec=0ms

Save CIB file:

>   [root@virt-539 ~]# pcs cluster cib > cib-copy.xml

Run crm_simulate on CIB file:

>   [root@virt-539 ~]# crm_simulate -x cib-copy.xml
>   Current cluster status:
>     * Node List:
>       * Online: [ virt-539 virt-548 ]

>     * Full List of Resources:
>       * fence-virt-539	(stonith:fence_xvm):	 Started virt-539
>       * fence-virt-548	(stonith:fence_xvm):	 Started virt-548
>       * Resource Group: dummy-group:
>         * dummy1	(ocf::pacemaker:Dummy):	 Started virt-548
>         * dummy2	(ocf::pacemaker:Dummy):	 Started virt-548
>       * Clone Set: dummy-clone [dummy]:
>         * Started: [ virt-539 virt-548 ]

Run crm_simulate on CIB file with the new options:

>   [root@virt-539 ~]# crm_simulate -x cib-copy.xml --show-attrs --show-failcounts
>   Current cluster status:
>     * Node List:
>       * Online: [ virt-539 virt-548 ]

>     * Full List of Resources:
>       * fence-virt-539	(stonith:fence_xvm):	 Started virt-539
>       * fence-virt-548	(stonith:fence_xvm):	 Started virt-548
>       * Resource Group: dummy-group:
>         * dummy1	(ocf::pacemaker:Dummy):	 Started virt-548
>         * dummy2	(ocf::pacemaker:Dummy):	 Started virt-548
>       * Clone Set: dummy-clone [dummy]:
>         * Started: [ virt-539 virt-548 ]

>     * Node Attributes:
>       * Node: virt-539:
>         * location                        	: office    
>         * order                           	: primary   
>         * shortname                       	: node1     
>       * Node: virt-548:
>         * location                        	: office    
>         * order                           	: secondary 
>         * shortname                       	: node2     

>     * Failed Resource Actions:
>       * dummy1_asyncmon_0 on virt-548 'error' (1): call=27, status='complete', exitreason='Simulated failure', last-rc-change='2021-06-16 14:23:09 +02:00', queued=0ms, exec=0ms

Node Attributes and Failed Resource Actions are displayed, but Migration Summary is missing.

Comment 32 Markéta Smazová 2021-06-16 14:36:25 UTC
I tested the new options and found that the "Migration Summary" section is not displayed when running `crm_simulate -x <cib-file> --show-failcounts --show-attrs`. Please see comment #31 for details.

Comment 33 Chris Lumens 2021-06-16 17:14:56 UTC
Could you please attach the CIB file you're using for testing to this bug report?  Thanks!

Comment 35 Chris Lumens 2021-06-17 15:39:27 UTC
I've made a new PR that adds this functionality.  See https://github.com/ClusterLabs/pacemaker/pull/2416.  I think I previously assumed the failed-action-list message would cover the needs for this bug, but obviously that is incorrect.

Comment 36 Ken Gaillot 2021-06-22 21:06:46 UTC
Additional fixes merged upstream

Comment 40 Markéta Smazová 2021-07-12 16:24:56 UTC
>   [root@virt-547 ~]# rpm -q pacemaker
>   pacemaker-2.1.0-3.el8.x86_64

>   [root@virt-547 ~]# man crm_simulate | grep show-attrs -A4
>          -A, --show-attrs
>                 Show node attributes

>          -c, --show-failcounts
>                 Show resource fail counts
>   [root@virt-547 ~]# crm_simulate --help-operations | grep show-attrs -A1
>     -A, --show-attrs                 Show node attributes
>     -c, --show-failcounts            Show resource fail counts


Have a cluster with resources and attributes:

>   [root@virt-547 ~]# pcs status --full
>   Cluster name: STSRHTS15914
>   Cluster Summary:
>     * Stack: corosync
>     * Current DC: virt-548 (2) (version 2.1.0-3.el8-7c3f660707) - partition with quorum
>     * Last updated: Mon Jul 12 17:29:12 2021
>     * Last change:  Mon Jul 12 17:28:21 2021 by root via crm_attribute on virt-547
>     * 2 nodes configured
>     * 6 resource instances configured

>   Node List:
>     * Online: [ virt-547 (1) virt-548 (2) ]

>   Full List of Resources:
>     * fence-virt-547	(stonith:fence_xvm):	 Started virt-547
>     * fence-virt-548	(stonith:fence_xvm):	 Started virt-548
>     * Clone Set: stateful-clone [stateful] (promotable):
>       * stateful	(ocf::pacemaker:Stateful):	 Master virt-548
>       * stateful	(ocf::pacemaker:Stateful):	 Slave virt-547
>     * Resource Group: dummy-group:
>       * dummy1	(ocf::pacemaker:Dummy):	 Started virt-547
>       * dummy2	(ocf::pacemaker:Dummy):	 Started virt-547

>   Node Attributes:
>     * Node: virt-547 (1):
>       * location                        	: office    
>       * master-stateful                 	: 10        
>       * order                           	: secondary 
>     * Node: virt-548 (2):
>       * location                        	: office    
>       * master-stateful                 	: 10        
>       * order                           	: primary   

>   Migration Summary:

>   Tickets:

>   PCSD Status:
>     virt-547: Online
>     virt-548: Online

>   Daemon Status:
>     corosync: active/disabled
>     pacemaker: active/disabled
>     pcsd: active/enabled

Fail resource and check cluster status:

>   [root@virt-547 ~]# crm_resource --fail --resource stateful --node virt-547
>   Waiting for 1 reply from the controller
>   ... got reply (done)

>   [root@virt-547 ~]# crm_mon  -1Arf
>   Cluster Summary:
>     * Stack: corosync
>     * Current DC: virt-548 (version 2.1.0-3.el8-7c3f660707) - partition with quorum
>     * Last updated: Mon Jul 12 17:30:20 2021
>     * Last change:  Mon Jul 12 17:28:21 2021 by root via crm_attribute on virt-547
>     * 2 nodes configured
>     * 6 resource instances configured

>   Node List:
>     * Online: [ virt-547 virt-548 ]

>   Full List of Resources:
>     * fence-virt-547	(stonith:fence_xvm):	 Started virt-547
>     * fence-virt-548	(stonith:fence_xvm):	 Started virt-548
>     * Clone Set: stateful-clone [stateful] (promotable):
>       * Masters: [ virt-548 ]
>       * Slaves: [ virt-547 ]
>     * Resource Group: dummy-group:
>       * dummy1	(ocf::pacemaker:Dummy):	 Started virt-547
>       * dummy2	(ocf::pacemaker:Dummy):	 Started virt-547

>   Node Attributes:
>     * Node: virt-547:
>       * location                        	: office    
>       * master-stateful                 	: 5         
>       * order                           	: secondary 
>     * Node: virt-548:
>       * location                        	: office    
>       * master-stateful                 	: 10        
>       * order                           	: primary   

>   Migration Summary:
>     * Node: virt-547:
>       * stateful: migration-threshold=1000000 fail-count=1 last-failure='Mon Jul 12 17:30:11 2021'

>   Failed Resource Actions:
>     * stateful_asyncmon_0 on virt-547 'error' (1): call=49, status='complete', exitreason='Simulated failure', last-rc-change='2021-07-12 17:30:11 +02:00', queued=0ms, exec=0ms

Save CIB file:

>   [root@virt-547 ~]# pcs cluster cib > cib-copy.xml

Run crm_simulate on the saved CIB file:

>   [root@virt-547 ~]# crm_simulate -x cib-copy.xml
>   Current cluster status:
>     * Node List:
>       * Online: [ virt-547 virt-548 ]

>     * Full List of Resources:
>       * fence-virt-547	(stonith:fence_xvm):	 Started virt-547
>       * fence-virt-548	(stonith:fence_xvm):	 Started virt-548
>       * Clone Set: stateful-clone [stateful] (promotable):
>         * Masters: [ virt-548 ]
>         * Slaves: [ virt-547 ]
>       * Resource Group: dummy-group:
>         * dummy1	(ocf::pacemaker:Dummy):	 Started virt-547
>         * dummy2	(ocf::pacemaker:Dummy):	 Started virt-547

Run crm_simulate with the new options on the saved CIB file:

>   [root@virt-547 ~]# crm_simulate -x cib-copy.xml --show-attrs --show-failcounts
>   Current cluster status:
>     * Node List:
>       * Online: [ virt-547 virt-548 ]

>     * Full List of Resources:
>       * fence-virt-547	(stonith:fence_xvm):	 Started virt-547
>       * fence-virt-548	(stonith:fence_xvm):	 Started virt-548
>       * Clone Set: stateful-clone [stateful] (promotable):
>         * Masters: [ virt-548 ]
>         * Slaves: [ virt-547 ]
>       * Resource Group: dummy-group:
>         * dummy1	(ocf::pacemaker:Dummy):	 Started virt-547
>         * dummy2	(ocf::pacemaker:Dummy):	 Started virt-547

>     * Node Attributes:
>       * Node: virt-547:
>         * location                        	: office    
>         * master-stateful                 	: 5         
>         * order                           	: secondary 
>       * Node: virt-548:
>         * location                        	: office    
>         * master-stateful                 	: 10        
>         * order                           	: primary   

>     * Migration Summary:
>       * Node: virt-547:
>         * stateful: migration-threshold=1000000 fail-count=1 last-failure='Mon Jul 12 17:30:11 2021'

>     * Failed Resource Actions:
>       * stateful_asyncmon_0 on virt-547 'error' (1): call=49, status='complete', exitreason='Simulated failure', last-rc-change='2021-07-12 17:30:11 +02:00', queued=0ms, exec=0ms



Output of `crm_simulate -x cib-copy.xml --show-attrs --show-failcounts` now also shows Node Attributes, Migration Summary, and Failed Resource Actions.

Marking verified in pacemaker-2.1.0-3.el8.

Comment 42 errata-xmlrpc 2021-11-09 18:44:49 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (pacemaker bug fix and enhancement update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2021:4267

