Bug 1936696 - Support version 1.1 of the OCF Resource Agent API standard
Summary: Support version 1.1 of the OCF Resource Agent API standard
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 8
Classification: Red Hat
Component: pacemaker
Version: 8.5
Hardware: All
OS: All
Priority: high
Severity: high
Target Milestone: rc
Target Release: 8.5
Assignee: Ken Gaillot
QA Contact: cluster-qe@redhat.com
URL:
Whiteboard:
Depends On:
Blocks: 1936833 2018969
 
Reported: 2021-03-08 22:46 UTC by Ken Gaillot
Modified: 2021-11-10 01:06 UTC
CC List: 2 users

Fixed In Version: pacemaker-2.1.0-1.el8
Doc Type: Enhancement
Doc Text:
Feature: Pacemaker supports the OCF Resource Agent API 1.1 standard.
Reason: The OCF 1.1 standard was recently released, with useful new features (backward compatible with OCF 1.0).
Result: Pacemaker accepts "Promoted" and "Unpromoted" as role names in configuration, and supports reloadable parameters and the reload-all action in resource agents.
Clone Of:
Clones: 1936833
Environment:
Last Closed: 2021-11-09 18:44:49 UTC
Type: Feature Request
Target Upstream Version: 2.1.0
Embargoed:


Attachments: none


Links
System                  ID              Private  Priority  Status  Summary  Last Updated
Red Hat Product Errata  RHEA-2021:4267  0        None      None    None     2021-11-09 18:45:25 UTC

Description Ken Gaillot 2021-03-08 22:46:30 UTC
The upstream ClusterLabs community is preparing to release version 1.1 of the OCF Resource Agent API standard, which will include a number of new features and clarifications compared to the previous version.

The RHEL High-Availability components will need to support this for RHEL 9.0, and a subset of support could be added as of 8.5 (any of the things mentioned as optional below).

Some key aspects that might require changes ("agents" here refers to all OCF agents, whether supplied by resource-agents, pacemaker, or some other package) are listed below. Some of these might warrant (or already have) their own BZs.

* The version number changes to 1.1
** In RHEL 9, agent meta-data should advertise 1.1 support. In RHEL 8, agents that use the old role names should continue to advertise 1.0 support, while agents that don't use role names could (but don't have to) advertise 1.1 support.
** pacemaker should set the OCF_RA_VERSION_MINOR environment variable to 1 instead of 0 in RHEL 9 and optionally 8

* The role names are now "promoted" and "unpromoted" instead of "Master" and "Slave".
** pacemaker should use the new names in help, logs, and output in 9 but not 8. In 9 and optionally 8, all names should be supported in user configurations; the crm_resource --master option should be renamed to --promoted, with the old option accepted for backward compatibility; the crm_master command should be renamed to pcmk_promotion, with the old name symlinked for backward compatibility; and relevant clone notification environment variables (OCF_RESKEY_CRM_meta_notify_master_resource etc.) should be provided with both the old and new names.
** Agents should use the new names in meta-data, help, etc., in 9 but not 8. Agents should use the new crm_resource --promoted option, crm_promotion command, and clone notification variable names in 9 and optionally 8 if supported by pacemaker. If agents parse pacemaker output for role names, they should look for either set of names in 9 and optionally 8.
** pcs should support all names in user input in 9 and optionally 8. The new names should be used in help and output in 9 but not 8. Any commands, options, etc., named after the old names should be renamed to the new ones with the old ones accepted for backward compatibility, in 9 and optionally 8.
** Note: promotion score node attribute names (master-*) are not part of the standard and are not changing at this time. However, anything outside pacemaker should use the crm_master or crm_promotion command instead of dealing with these attributes directly.
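To illustrate the last point, a minimal sketch of equivalent promotion-score commands (values are illustrative; the option rename that actually shipped is detailed in comment 1):

    # Old interface (deprecated, still functional):
    crm_master -l reboot -v 10
    # New equivalent; --promotion defaults to --lifetime=reboot:
    crm_attribute --promotion -v 10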

* The "unique" agent meta-data field has been deprecated in favor of two new fields, "unique-group" and "reloadable". Agents that support "reloadable" should support the new "reload-params" action.
** Pacemaker should support reloadable if present (otherwise unique if present), and support the reload-params action if present (otherwise reload if present), in 9 and optionally 8.
** Agents should provide both the old and new meta-data names, and the reload-params action if appropriate, in 9 and optionally 8.
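As a hedged illustration of such dual meta-data (the parameter name and attribute values here are hypothetical; note that the agents shipped in 8.5, shown in comment 12, advertise the action as "reload-agent" rather than "reload-params"):

    <!-- old field kept for OCF 1.0 consumers, new fields for OCF 1.1 -->
    <parameter name="config" unique="1" unique-group="cfg" reloadable="1">
      ...
    </parameter>
    <action name="reload-agent" timeout="20s" />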

* A number of new agent meta-data fields ("required", "deprecated", etc.) give additional hints for user interfaces.
** Agents should provide these in 9 and optionally 8.
** pcs can support these as desired.

* The new OCF_OUTPUT_FORMAT environment variable may be supported to indicate that the agent should output text or XML.
** Pacemaker's crm_resource and stonith_admin commands could set this appropriately, based on user-specified options, before calling agents (at least for validate-all, which is the target use case).
** Agents may support this as desired.
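A hypothetical agent-side sketch of honoring the variable (the helper names are made up):

    # Default to text output when OCF_OUTPUT_FORMAT is unset
    case "${OCF_OUTPUT_FORMAT:-text}" in
        xml) print_validate_results_xml ;;   # hypothetical helper
        *)   print_validate_results_text ;;  # hypothetical helper
    esac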

* The OCF_CHECK_LEVEL environment variable may be supported for the validate-all action, to select host-independent or host-specific validation.
** Agents may support this as desired.
** Pacemaker could add an option to crm_resource and stonith_admin for check level when performing validation or monitoring.
** pcs could use the new pacemaker tool options if supported.
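A hypothetical agent-side sketch of honoring the check level during validate-all (helper names are made up):

    # 0 = host-independent validation (default), 10 = host-specific validation
    if [ "${OCF_CHECK_LEVEL:-0}" -ge 10 ]; then
        validate_host_specific      # hypothetical helper
    else
        validate_host_independent   # hypothetical helper
    fi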

* Agent exit statuses have been clarified and expanded.
** Agents may support the new usage as desired. (Pacemaker already does.)

I may have missed some other places changes are needed, but those should be the most important.

Once adopted, the standard will be available at https://github.com/ClusterLabs/OCF-spec/tree/master/ra/1.1

This has been filed against 8.5 in case a subset will be implemented there, but it may be cloned for or reassigned to 9.0, and some items could get their own BZs if separate tracking is desired.

Comment 1 Ken Gaillot 2021-04-08 22:11:32 UTC
Here are all the Pacemaker changes related to role names expected for RHEL 8.5:

* "Promoted" and "Unpromoted" are now accepted (in addition to "Master" and "Slave") anywhere roles are specified in Pacemaker configuration (e.g. role or target-role)

* Pacemaker now provides resource agents with new environment variables (in addition to the existing ones) for promotable clone notifications, with "master" replaced with "promoted" and "slave" replaced with "unpromoted", for example OCF_RESKEY_CRM_meta_notify_unpromoted_resource will be identical to OCF_RESKEY_CRM_meta_notify_slave_resource

* Pacemaker uses the new role names in tool help, most log messages, the constraints created by crm_resource --ban, documentation, and internal code

* The crm_resource --master option has been deprecated (in help only) and replaced with a new --promoted option

* The crm_master command has been deprecated (in help only) and replaced with a new crm_attribute --promotion option that defaults to --lifetime=reboot (example: "crm_master -l reboot -v 10" becomes "crm_attribute --promotion -v 10")

* When showing ban constraints, crm_mon --output-as=xml (and --as-xml) will now show promoted-only=true/false in addition to master_only=true/false, which is now deprecated (via schema comment only)

* The ocf:pacemaker:Stateful resource advertises the new names in its meta-data for monitor actions

* A variety of public C APIs were deprecated and replaced (code using the old APIs will continue to work)

For RHEL 9.0, Pacemaker will additionally use the new names in all tool output and log messages.

Comment 2 Ken Gaillot 2021-04-08 22:27:51 UTC
(In reply to Ken Gaillot from comment #1)
> * "Promoted" and "Unpromoted" are now accepted (in addition to "Master" and
> "Slave") anywhere roles are specified in Pacemaker configuration (e.g. role
> or target-role)

Using the new names in "role" in <op>, <rsc_location>, or <resource_set>, or in "rsc-role" or "with-rsc-role" in <rsc_ticket> or <rsc_colocation>, requires CIB schema 3.7 (i.e. "cibadmin --upgrade" or equivalent must be run on an existing cluster to use the new names).
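A minimal sketch of that step (run once against the live cluster):

    # Upgrade the CIB to the latest schema so the new role names validate
    cibadmin --upgrade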

Comment 3 Ken Gaillot 2021-04-09 20:06:35 UTC
The role name changes have been merged in the upstream master branch as of commit 48d3778.

Supporting OCF 1.1 reloadable parameters is the only remaining change required for this BZ. Also planned but not required for this BZ are new build options for configuring OCF resource agent directories and new crm_resource options for OCF_CHECK_LEVEL.

Comment 4 Ken Gaillot 2021-04-22 16:24:34 UTC
Feature merged upstream as of https://github.com/ClusterLabs/pacemaker/pull/2349

Comment 6 Ken Gaillot 2021-05-12 17:04:48 UTC
For future reference, support for build-time configuration of OCF resource agent directories was merged upstream as of commit 2d57f0ae.

Comment 12 Markéta Smazová 2021-08-30 16:19:53 UTC
>   [root@virt-520 ~]# rpm -q pacemaker
>   pacemaker-2.1.0-6.el8.x86_64

>   [root@virt-520 ~]# rpm -q pcs
>   pcs-0.10.8-4.el8.x86_64


1. The version number of the OCF Resource Agent API standard changes to 1.1

    The `ocf:pacemaker:Dummy`, `ocf:pacemaker:Stateful`, and `ocf:pacemaker:remote` resource agents now support OCF 1.1 
    (reloadable parameters, UI parameter hints, and advertising OCF 1.1 role names in monitor action meta-data).

>   [root@virt-542 ~]# cd /usr/lib/ocf/resource.d/
>   [root@virt-542 resource.d]# grep -r "<version>1.1"
>   pacemaker/Dummy:<version>1.1</version>
>   pacemaker/Stateful:<version>1.1</version>
>   pacemaker/remote:  <version>1.1</version>
>   heartbeat/Filesystem:<version>1.1</version>
>   heartbeat/VirtualDomain:<version>1.1</version>
>   heartbeat/symlink:<version>1.1</version>


    Check agent meta-data:

>   [root@virt-542 ~]# crm_resource --show-metadata=ocf:pacemaker:Dummy | grep "<version>"
>   <version>1.1</version>
>   [root@virt-542 ~]# crm_resource --show-metadata=ocf:pacemaker:Stateful | grep "<version>"
>   <version>1.1</version>
>   [root@virt-542 ~]# crm_resource --show-metadata=ocf:pacemaker:remote | grep "<version>"
>     <version>1.1</version>
>   [root@virt-542 ~]# crm_resource --show-metadata=ocf:heartbeat:Filesystem | grep "<version>"
>   <version>1.1</version>
>   [root@virt-542 ~]# crm_resource --show-metadata=ocf:heartbeat:VirtualDomain | grep "<version>"
>   <version>1.1</version>
>   [root@virt-542 ~]# crm_resource --show-metadata=ocf:heartbeat:symlink | grep "<version>"
>   <version>1.1</version>

    Pacemaker now sets the `OCF_RA_VERSION_MINOR` environment variable to 1 instead of 0:

>   [root@virt-542 ~]# pcs resource create dummy ocf:pacemaker:Dummy
>   [root@virt-542 ~]# pcs resource debug-monitor --full dummy | grep OCF_RA_VERSION
>   OCF_RA_VERSION_MAJOR=1
>   OCF_RA_VERSION_MINOR=1

>   [root@virt-542 ~]# pcs resource create state_1 ocf:pacemaker:Stateful
>   [root@virt-542 ~]# pcs resource debug-monitor --full state_1 | grep OCF_RA_VERSION
>   OCF_RA_VERSION_MAJOR=1
>   OCF_RA_VERSION_MINOR=1



2. "Promoted" and "Unpromoted" are now accepted (in addition to "Master" and "Slave") anywhere roles are specified 
in Pacemaker configuration (e.g. role or target-role).

    Note: This bz was tested with an older version of pcs (pcs-0.10.8-4.el8) that did not yet accept the new role names
    (Promoted/Unpromoted) in pcs commands. See bz1885293#c20 for more information on pcs support for the new role names.


2.1. The `ocf:pacemaker:Stateful` resource advertises the new names in its meta-data for monitor actions:

>   [root@virt-542 ~]# crm_resource --show-metadata=ocf:pacemaker:Stateful | grep -i promot
>   This is an example resource agent that implements Promoted and Unpromoted roles
>   <action name="monitor" depth="0" timeout="20s" interval="10s" role="Promoted"/>
>   <action name="monitor" depth="0" timeout="20s" interval="11s" role="Unpromoted"/>
>   <action name="promote" timeout="10s" />


    Setup cluster:

>   [root@virt-542 ~]# pcs status
>   Cluster name: STSRHTS30566
>   Cluster Summary:
>     * Stack: corosync
>     * Current DC: virt-543 (version 2.1.0-6.el8-7c3f660707) - partition with quorum
>     * Last updated: Tue Aug 24 11:41:16 2021
>     * Last change:  Thu Aug 20 12:17:37 2021 by root via crm_attribute on virt-542
>     * 2 nodes configured
>     * 2 resource instances configured

>   Node List:
>     * Online: [ virt-542 virt-543 ]

>   Full List of Resources:
>     * fence-virt-542	(stonith:fence_xvm):	 Started virt-542
>     * fence-virt-543	(stonith:fence_xvm):	 Started virt-543

>   Daemon Status:
>     corosync: active/disabled
>     pacemaker: active/disabled
>     pcsd: active/enabled


    Create resource:

>   [root@virt-542 ~]# pcs resource create state_3 ocf:pacemaker:Stateful promotable meta target-role=Promoted

    Check `role` and `target-role` names in resource configuration:

    With the older version of pcs:

>   [root@virt-542 ~]# rpm -q pcs
>   pcs-0.10.8-4.el8.x86_64
>   [root@virt-542 ~]# pcs resource config state_3 | grep -i role
>     Meta Attrs: target-role=Promoted
>                 monitor interval=10s role=Promoted timeout=20s (state_3-monitor-interval-10s)
>                 monitor interval=11s role=Unpromoted timeout=20s (state_3-monitor-interval-11s)


    With the newer version of pcs (after the new role names support update in bz1885293#c20):

>   [root@virt-246 ~]# rpm -q pcs
>   pcs-0.10.10-1.el8.x86_64
>   [root@virt-246 ~]# pcs resource config state_3 | grep -i role
>     Meta Attrs: target-role=Promoted
>                 monitor interval=10s role=Master timeout=20s (state_3-monitor-interval-10s)
>                 monitor interval=11s role=Slave timeout=20s (state_3-monitor-interval-11s)

    With the newer version of pcs, role names are still shown as Master/Slave in the pcs resource configuration, but Promoted/Unpromoted are accepted as well.
    The `target-role` meta attribute accepts both old and new role names. See bz1885293#c20 for more details.


2.2. Colocation set example

>   [root@virt-520 ~]# rpm -q pacemaker
>   pacemaker-2.1.0-6.el8.x86_64

    Create resources:

>   [root@virt-520 ~]# pcs resource create A ocf:pacemaker:Stateful promotable
>   [root@virt-520 ~]# pcs resource create B ocf:pacemaker:Stateful promotable
>   [root@virt-520 ~]# pcs resource create C ocf:pacemaker:Dummy
>   [root@virt-520 ~]# pcs resource create D ocf:pacemaker:Dummy

    Check cluster status:

>   [root@virt-520 ~]# crm_mon -1Arf
>   Cluster Summary:
>     * Stack: corosync
>     * Current DC: virt-522 (version 2.1.0-6.el8-7c3f660707) - partition with quorum
>     * Last updated: Mon Aug 30 14:41:01 2021
>     * Last change:  Mon Aug 30 14:40:51 2021 by root via cibadmin on virt-520
>     * 2 nodes configured
>     * 8 resource instances configured

>   Node List:
>     * Online: [ virt-520 virt-522 ]

>   Full List of Resources:
>     * fence-virt-520	(stonith:fence_xvm):	 Started virt-522
>     * fence-virt-522	(stonith:fence_xvm):	 Started virt-520
>     * Clone Set: A-clone [A] (promotable):
>       * Masters: [ virt-520 ]
>       * Slaves: [ virt-522 ]
>     * Clone Set: B-clone [B] (promotable):
>       * Masters: [ virt-520 ]
>       * Slaves: [ virt-522 ]
>     * C	(ocf::pacemaker:Dummy):	 Started virt-520
>     * D	(ocf::pacemaker:Dummy):	 Started virt-522

>   Node Attributes:
>     * Node: virt-520:
>       * master-A                        	: 10        
>       * master-B                        	: 10        
>     * Node: virt-522:
>       * master-A                        	: 5         
>       * master-B                        	: 5         

>   Migration Summary:


    Create a colocation set. Since this was tested with an older version of pcs (pcs-0.10.8-4.el8) that does not yet accept the new
    role names in pcs commands, we have to edit the CIB directly. See bz1885293#c20 for more information on pcs support for the new role names.

    Edit CIB:

>   [root@virt-520 ~]# pcs cluster cib > cib-original.xml
>   [root@virt-520 ~]# cp cib-original.xml cib-copy.xml
>   [root@virt-520 ~]# vim cib-copy.xml
>   [root@virt-520 ~]# pcs cluster cib-push cib-copy.xml diff-against=cib-original.xml
>   CIB updated

    Display the colocation updates:

>   [root@virt-520 ~]# diff cib-original.xml cib-copy.xml
>   93c93,104
>   <     <constraints/>
>   ---
>   >     <constraints>
>   >       <rsc_colocation score="INFINITY" id="colocation_example">
>   >         <resource_set role="Promoted" sequential="true" id="colocation_set_example_1">
>   >           <resource_ref id="A-clone"/>
>   >           <resource_ref id="B-clone"/>
>   >         </resource_set>
>   >         <resource_set sequential="true" id="colocation_set_example_2">
>   >           <resource_ref id="C"/>
>   >           <resource_ref id="D"/>
>   >         </resource_set>
>   >       </rsc_colocation>
>   >     </constraints>


    Check the colocation constraints:

>   [root@virt-520 ~]# pcs constraint colocation --full
>   Colocation Constraints:
>     Resource Sets:
>       set A-clone B-clone role=Promoted sequential=true (id:colocation_set_example_1) set C D sequential=true (id:colocation_set_example_2) setoptions score=INFINITY (id:colocation_example)

    New role name "Promoted" is accepted in the colocation set configuration.
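
    With a pcs version that accepts the new role names (see bz1885293#c20), a hedged sketch of an equivalent
    command, avoiding the manual CIB edit, might be:

        pcs constraint colocation set A-clone B-clone role=Promoted sequential=true \
            set C D sequential=true setoptions score=INFINITY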


    Resources C and D should be located on a node where both A and B are promoted. Check cluster status:

>   [root@virt-520 ~]# crm_mon -1Arf
>   Cluster Summary:
>     * Stack: corosync
>     * Current DC: virt-522 (version 2.1.0-6.el8-7c3f660707) - partition with quorum
>     * Last updated: Mon Aug 30 14:43:41 2021
>     * Last change:  Mon Aug 30 14:43:21 2021 by root via cibadmin on virt-520
>     * 2 nodes configured
>     * 8 resource instances configured

>   Node List:
>     * Online: [ virt-520 virt-522 ]

>   Full List of Resources:
>     * fence-virt-520	(stonith:fence_xvm):	 Started virt-522
>     * fence-virt-522	(stonith:fence_xvm):	 Started virt-520
>     * Clone Set: A-clone [A] (promotable):
>       * Masters: [ virt-520 ]
>       * Slaves: [ virt-522 ]
>     * Clone Set: B-clone [B] (promotable):
>       * Masters: [ virt-520 ]
>       * Slaves: [ virt-522 ]
>     * C	(ocf::pacemaker:Dummy):	 Started virt-520
>     * D	(ocf::pacemaker:Dummy):	 Started virt-520

>   Node Attributes:
>     * Node: virt-520:
>       * master-A                        	: 10        
>       * master-B                        	: 10        
>     * Node: virt-522:
>       * master-A                        	: 5         
>       * master-B                        	: 5         

>   Migration Summary:


2.3. Location constraint example:

    Create resource:

>   [root@virt-520 ~]# pcs resource create E ocf:pacemaker:Stateful promotable

    Check cluster status:

>   [root@virt-520 ~]# crm_mon -1Arf
>   Cluster Summary:
>     * Stack: corosync
>     * Current DC: virt-522 (version 2.1.0-6.el8-7c3f660707) - partition with quorum
>     * Last updated: Mon Aug 30 15:36:19 2021
>     * Last change:  Mon Aug 30 15:36:14 2021 by root via cibadmin on virt-520
>     * 2 nodes configured
>     * 4 resource instances configured

>   Node List:
>     * Online: [ virt-520 virt-522 ]

>   Full List of Resources:
>     * fence-virt-520	(stonith:fence_xvm):	 Started virt-522
>     * fence-virt-522	(stonith:fence_xvm):	 Started virt-520
>     * Clone Set: E-clone [E] (promotable):
>       * Masters: [ virt-520 ]
>       * Slaves: [ virt-522 ]

>   Node Attributes:
>     * Node: virt-520:
>       * master-E                        	: 10        
>     * Node: virt-522:
>       * master-E                        	: 5         

>   Migration Summary:

    Promotable resource "E" is currently promoted on node virt-520.

    Create a location constraint so that resource "E" is promoted on the other node, virt-522.
    Edit CIB:

>   [root@virt-520 ~]# pcs cluster cib > cib-original.xml
>   [root@virt-520 ~]# cp cib-original.xml cib-copy.xml
>   [root@virt-520 ~]# vim cib-copy.xml 
>   [root@virt-520 ~]# pcs cluster cib-push cib-copy.xml diff-against=cib-original.xml
>   CIB updated

    Display the location constraint update:

>   [root@virt-520 ~]# diff cib-original.xml cib-copy.xml
>   54c54,60
>   <     <constraints/>
>   ---
>   >     <constraints>
>   >         <rsc_location id="promoted-location" rsc="E-clone">
>   >             <rule id="promoted-rule" score="100" role="Promoted">
>   >                 <expression id="promoted-exp" attribute="#uname" operation="eq" value="virt-522"/>
>   >             </rule>
>   >         </rsc_location>
>   >     </constraints>

    Check constraints:

>   [root@virt-520 ~]# pcs constraint --full
>   Location Constraints:
>     Resource: E-clone
>       Constraint: promoted-location
>         Rule: role=Promoted score=100 (id:promoted-rule)
>           Expression: #uname eq virt-522 (id:promoted-exp)
>   Ordering Constraints:
>   Colocation Constraints:
>   Ticket Constraints:

    New role name "Promoted" is accepted in the configuration.
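
    With a pcs version that accepts the new role names (see bz1885293#c20), a hedged sketch of an equivalent
    rule-based command might be:

        pcs constraint location E-clone rule role=Promoted score=100 '#uname' eq virt-522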

    Check that resource "E" is now promoted on node virt-522:

>   [root@virt-520 ~]# crm_mon -1Arf
>   Cluster Summary:
>     * Stack: corosync
>     * Current DC: virt-522 (version 2.1.0-6.el8-7c3f660707) - partition with quorum
>     * Last updated: Mon Aug 30 15:38:41 2021
>     * Last change:  Mon Aug 30 15:38:11 2021 by root via cibadmin on virt-520
>     * 2 nodes configured
>     * 4 resource instances configured

>   Node List:
>     * Online: [ virt-520 virt-522 ]

>   Full List of Resources:
>     * fence-virt-520	(stonith:fence_xvm):	 Started virt-522
>     * fence-virt-522	(stonith:fence_xvm):	 Started virt-520
>     * Clone Set: E-clone [E] (promotable):
>       * Masters: [ virt-522 ]
>       * Slaves: [ virt-520 ]

>   Node Attributes:
>     * Node: virt-520:
>       * master-E                        	: 5         
>     * Node: virt-522:
>       * master-E                        	: 10        

>   Migration Summary:


2.4. Colocation constraint example:

    Create resources:

>   [root@virt-520 ~]# pcs resource create dummy_db ocf:pacemaker:Stateful promotable
>   [root@virt-520 ~]# pcs resource create dummy_1 ocf:pacemaker:Dummy
>   [root@virt-520 ~]# pcs resource create dummy_2 ocf:pacemaker:Dummy

    Check cluster status:

>   [root@virt-520 ~]# crm_mon -1Arf
>   Cluster Summary:
>     * Stack: corosync
>     * Current DC: virt-522 (version 2.1.0-6.el8-7c3f660707) - partition with quorum
>     * Last updated: Mon Aug 30 16:03:22 2021
>     * Last change:  Mon Aug 30 16:03:15 2021 by root via cibadmin on virt-520
>     * 2 nodes configured
>     * 6 resource instances configured

>   Node List:
>     * Online: [ virt-520 virt-522 ]

>   Full List of Resources:
>     * fence-virt-520	(stonith:fence_xvm):	 Started virt-522
>     * fence-virt-522	(stonith:fence_xvm):	 Started virt-520
>     * Clone Set: dummy_db-clone [dummy_db] (promotable):
>       * Masters: [ virt-520 ]
>       * Slaves: [ virt-522 ]
>     * dummy_1	(ocf::pacemaker:Dummy):	 Started virt-520
>     * dummy_2	(ocf::pacemaker:Dummy):	 Started virt-522

>   Node Attributes:
>     * Node: virt-520:
>       * master-dummy_db                 	: 10        
>     * Node: virt-522:
>       * master-dummy_db                 	: 5         

>   Migration Summary:


    Resource "dummy_1" is currently running on node virt-520, where resource "dummy_db" is promoted. Create constraints
    so that resource "dummy_2" is colocated with the Promoted "dummy_db" and "dummy_1" is colocated with the Unpromoted "dummy_db".
    Edit CIB:

>   [root@virt-520 ~]# pcs cluster cib > cib-original.xml
>   [root@virt-520 ~]# cp cib-original.xml cib-copy.xml
>   [root@virt-520 ~]# vim cib-copy.xml
>   [root@virt-520 ~]# pcs cluster cib-push cib-copy.xml diff-against=cib-original.xml
>   CIB updated

    Display updated colocation constraints:

>   [root@virt-520 ~]# diff cib-original.xml cib-copy.xml
>   76c76,79
>   <     <constraints/>
>   ---
>   >     <constraints>
>   >       <rsc_colocation id="colocation-dummy_2-dummy_db-clone-INFINITY" rsc="dummy_2" rsc-role="Started" score="INFINITY" with-rsc="dummy_db-clone" with-rsc-role="Promoted"/>
>   >       <rsc_colocation id="colocation-dummy_1-dummy_db-clone-INFINITY" rsc="dummy_1" rsc-role="Started" score="INFINITY" with-rsc="dummy_db-clone" with-rsc-role="Unpromoted"/>
>   >     </constraints>


>   [root@virt-520 ~]# pcs constraint colocation --full
>   Colocation Constraints:
>     dummy_2 with dummy_db-clone (score:INFINITY) (rsc-role:Started) (with-rsc-role:Promoted) (id:colocation-dummy_2-dummy_db-clone-INFINITY)
>     dummy_1 with dummy_db-clone (score:INFINITY) (rsc-role:Started) (with-rsc-role:Unpromoted) (id:colocation-dummy_1-dummy_db-clone-INFINITY)

    The parameter `with-rsc-role` uses the new role names (Promoted/Unpromoted).
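
    With a pcs version that accepts the new role names (see bz1885293#c20), a hedged sketch of equivalent
    commands might be:

        pcs constraint colocation add dummy_2 with Promoted dummy_db-clone INFINITY
        pcs constraint colocation add dummy_1 with Unpromoted dummy_db-clone INFINITY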


    Check that resources are correctly colocated:

>   [root@virt-520 ~]# crm_mon -1Arf
>   Cluster Summary:
>     * Stack: corosync
>     * Current DC: virt-522 (version 2.1.0-6.el8-7c3f660707) - partition with quorum
>     * Last updated: Mon Aug 30 16:06:41 2021
>     * Last change:  Mon Aug 30 16:06:13 2021 by root via cibadmin on virt-520
>     * 2 nodes configured
>     * 6 resource instances configured

>   Node List:
>     * Online: [ virt-520 virt-522 ]

>   Full List of Resources:
>     * fence-virt-520	(stonith:fence_xvm):	 Started virt-522
>     * fence-virt-522	(stonith:fence_xvm):	 Started virt-520
>     * Clone Set: dummy_db-clone [dummy_db] (promotable):
>       * Masters: [ virt-520 ]
>       * Slaves: [ virt-522 ]
>     * dummy_1	(ocf::pacemaker:Dummy):	 Started virt-522
>     * dummy_2	(ocf::pacemaker:Dummy):	 Started virt-520

>   Node Attributes:
>     * Node: virt-520:
>       * master-dummy_db                 	: 10        
>     * Node: virt-522:
>       * master-dummy_db                 	: 5         

>   Migration Summary:

    Resource "dummy_2" is now colocated with the Promoted "dummy_db" instance, and "dummy_1" with the Unpromoted one.



3. Pacemaker now provides resource agents with new environment variables (in addition to the existing ones) for promotable 
   clone notifications, with "master" replaced with "promoted" and "slave" replaced with "unpromoted", 
   for example OCF_RESKEY_CRM_meta_notify_unpromoted_resource will be identical to OCF_RESKEY_CRM_meta_notify_slave_resource

   Documented in https://clusterlabs.org/pacemaker/doc/2.1/Pacemaker_Explained/html/advanced-resources.html#clone-notifications
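
    A hypothetical agent-side sketch (made up for illustration): since the new and old variables carry identical
    values, a notify handler can prefer the new name and fall back to the old one on older Pacemaker:

        # Prefer the OCF 1.1 variable; fall back to the OCF 1.0 name if unset
        unpromoted_rscs="${OCF_RESKEY_CRM_meta_notify_unpromoted_resource:-$OCF_RESKEY_CRM_meta_notify_slave_resource}"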



4. The `crm_resource --master` option has been deprecated (in help only) and replaced with a new `--promoted` option.

    Check man/help:

>   [root@virt-542 ~]# man crm_resource | grep Locations -A37
>      Locations:
>          -M, --move
>                 Create  a constraint to move resource. If --node is specified, the constraint will be to
>                 move to that node, otherwise it will be to ban the current node. Unless --force is spec‐
>                 ified  this  will  return  an  error if the resource is already running on the specified
>                 node. If --force is specified, this will always ban the current node.  Optional: --life‐
>                 time, --promoted. NOTE: This may prevent the resource from running on its previous loca‐
>                 tion until the implicit constraint expires or is removed with --clear.

>          -B, --ban
>                 Create a constraint to keep resource off a node.  Optional: --node,  --lifetime,  --pro‐
>                 moted.  NOTE: This will prevent the resource from running on the affected node until the
>                 implicit constraint expires or is removed with --clear. If --node is not  specified,  it
>                 defaults  to  the  node currently running the resource for primitives and groups, or the
>                 promoted instance of promotable clones with promoted-max=1 (all other situations  result
>                 in an error as there is no sane default).

>          -U, --clear
>                 Remove   all  constraints  created  by  the  --ban  and/or  --move  commands.  Requires:
>                 --resource. Optional: --node, --promoted, --expired. If --node  is  not  specified,  all
>                 constraints  created  by  --ban  and  --move  will be removed for the named resource. If
>                 --node and --force are specified, any constraint created by --move will be cleared, even
>                 if  it  is not for the specified node. If --expired is specified, only those constraints
>                 whose lifetimes have expired will be removed.

>          -e, --expired
>                 Modifies the --clear argument to remove constraints with expired lifetimes.

>          -u, --lifetime=TIMESPEC
>                 Lifespan  (as  ISO  8601  duration)  of  created   constraints   (with   -B,   -M)   see
>                 https://en.wikipedia.org/wiki/ISO_8601#Durations)

>          --promoted
>                 Limit  scope  of  command  to promoted role (with -B, -M, -U). For -B and -M, previously
>                 promoted instances may remain active in the unpromoted role.

>          --master
>                 Deprecated: Use --promoted instead


>   [root@virt-542 ~]# crm_resource --help-locations
>   Usage:
>     crm_resource [OPTION?]

>   crm_resource - perform tasks related to Pacemaker cluster resources

>   Locations:
>     -M, --move                        Create a constraint to move resource. If --node is specified,
>                                       the constraint will be to move to that node, otherwise it
>                                       will be to ban the current node. Unless --force is specified
>                                       this will return an error if the resource is already running
>                                       on the specified node. If --force is specified, this will
>                                       always ban the current node.
>                                       Optional: --lifetime, --promoted. NOTE: This may prevent the
>                                       resource from running on its previous location until the
>                                       implicit constraint expires or is removed with --clear.
>     -B, --ban                         Create a constraint to keep resource off a node.
>                                       Optional: --node, --lifetime, --promoted.
>                                       NOTE: This will prevent the resource from running on the
>                                       affected node until the implicit constraint expires or is
>                                       removed with --clear. If --node is not specified, it defaults
>                                       to the node currently running the resource for primitives
>                                       and groups, or the promoted instance of promotable clones with
>                                       promoted-max=1 (all other situations result in an error as
>                                       there is no sane default).
>     -U, --clear                       Remove all constraints created by the --ban and/or --move
>                                       commands. Requires: --resource. Optional: --node, --promoted,
>                                       --expired. If --node is not specified, all constraints created
>                                       by --ban and --move will be removed for the named resource. If
>                                       --node and --force are specified, any constraint created by
>                                       --move will be cleared, even if it is not for the specified
>                                       node. If --expired is specified, only those constraints whose
>                                       lifetimes have expired will be removed.
>     -e, --expired                     Modifies the --clear argument to remove constraints with
>                                       expired lifetimes.
>     -u, --lifetime=TIMESPEC           Lifespan (as ISO 8601 duration) of created constraints (with
>                                       -B, -M) see https://en.wikipedia.org/wiki/ISO_8601#Durations)
>     --promoted                        Limit scope of command to promoted role (with -B, -M, -U). For
>                                       -B and -M, previously promoted instances may remain
>                                       active in the unpromoted role.
>     --master                          Deprecated: Use --promoted instead


    Test the `--promoted` option:

>   [root@virt-542 ~]# pcs resource create state_1 ocf:pacemaker:Stateful promotable

>   [root@virt-542 ~]# pcs resource
>     * Clone Set: state_1-clone [state_1] (promotable):
>       * Masters: [ virt-542 ]
>       * Slaves: [ virt-543 ]

>   [root@virt-542 ~]# crm_resource --ban --resource state_1 --node virt-542 --promoted
>   WARNING: Creating rsc_location constraint 'cli-ban-state_1-on-virt-542' with a score of -INFINITY for resource state_1 on virt-542.
>    This will prevent state_1 from being promoted on virt-542 until the constraint is removed using the clear option or by editing the CIB with an appropriate tool
>    This will be the case even if virt-542 is the last node in the cluster

>   [root@virt-542 ~]# pcs resource
>     * Clone Set: state_1-clone [state_1] (promotable):
>       * Masters: [ virt-543 ]
>       * Slaves: [ virt-542 ]



5. Pacemaker should have used the new role names in the constraints created by `crm_resource --ban`, but this change
    was not implemented in RHEL 8.5; it will be in RHEL 9.

    Ban constraint uses the old role name:

>   [root@virt-542 ~]# cibadmin --query --scope constraints
>   <constraints>
>     <rsc_location id="cli-ban-state_1-on-virt-542" rsc="state_1" role="Master" node="virt-542" score="-INFINITY"/>
>   </constraints>



6. When showing ban constraints, `crm_mon --output-as=xml` (and `--as-xml`) will now show `promoted-only=true/false` 
   in addition to `master_only=true/false`, which is now deprecated (via schema comment only):

>   [root@virt-542 ~]# crm_mon -1 --output-as=xml | grep -i master
>         <resource id="state_1" resource_agent="ocf::pacemaker:Stateful" role="Master" active="true" orphaned="false" blocked="false" managed="true" failed="false" failure_ignored="false" nodes_running_on="1">
>         <attribute name="master-state_1" value="5"/>
>         <attribute name="master-state_1" value="10"/>
>       <ban id="cli-ban-state_1-on-virt-542" resource="state_1-clone" node="virt-542" weight="-1000000" promoted-only="true" master_only="true"/>

    Clear ban constraint:

>   [root@virt-542 ~]# crm_resource --clear --resource state_1
>   Removing constraint: cli-ban-state_1-on-virt-542




7. The `crm_master` command has been deprecated (in help only) and replaced with a new `crm_attribute --promotion` option 
    that defaults to `--lifetime=reboot`.

    Check man/help:

>   [root@virt-543 ~]# man crm_master | grep "master" -A2
>          crm_master <command> [<options>]

>   DESCRIPTION
>          crm_master - Query, update, or delete a resource's promotion score

>          This command is deprecated. Use crm_attribute with the --promotion option instead.

>   [root@virt-543 ~]# crm_master --help
>   crm_master - Query, update, or delete a resource's promotion score

>   Usage: crm_master <command> [<options>]

>   This command is deprecated. Use crm_attribute with the --promotion option
>   instead.

>   [root@virt-543 ~]# man crm_attribute | grep "promotion=" -A3
>          -p, --promotion=RESOURCE
>                 Operate  on  node  attribute used as promotion score for specified resource, or resource
>                 given in OCF_RESOURCE_INSTANCE environment variable if  none  is  specified;  this  also
>                 defaults -l/--lifetime to reboot (normally invoked from an OCF resource agent)


>   [root@virt-543 ~]# crm_attribute --help-all | grep promotion -A3
>     -p, --promotion=RESOURCE     Operate on node attribute used as promotion score for specified
>                                 resource, or resource given in OCF_RESOURCE_INSTANCE environment
>                                 variable if none is specified; this also defaults -l/--lifetime
>                                 to reboot (normally invoked from an OCF resource agent)

    Create promotable resource:

>   [root@virt-543 ~]# pcs resource create state_1 ocf:pacemaker:Stateful promotable
>   [root@virt-543 ~]# crm_mon -1Arf
>   Cluster Summary:
>     * Stack: corosync
>     * Current DC: virt-542 (version 2.1.0-6.el8-7c3f660707) - partition with quorum
>     * Last updated: Fri Aug 27 13:18:13 2021
>     * Last change:  Fri Aug 27 13:18:06 2021 by root via cibadmin on virt-543
>     * 2 nodes configured
>     * 4 resource instances configured

>   Node List:
>     * Online: [ virt-542 virt-543 ]

>   Full List of Resources:
>     * fence-virt-542	(stonith:fence_xvm):	 Started virt-542
>     * fence-virt-543	(stonith:fence_xvm):	 Started virt-543
>     * Clone Set: state_1-clone [state_1] (promotable):
>       * Masters: [ virt-543 ]
>       * Slaves: [ virt-542 ]

>   Node Attributes:
>     * Node: virt-542:
>       * master-state_1                  	: 5         
>     * Node: virt-543:
>       * master-state_1                  	: 10        

>   Migration Summary:

    Test `--promotion` option:

>   [root@virt-543 ~]# crm_attribute --promotion state_1 --update 11 --node virt-542
>   [root@virt-543 ~]# crm_mon -1Arf
>   Cluster Summary:
>     * Stack: corosync
>     * Current DC: virt-542 (version 2.1.0-6.el8-7c3f660707) - partition with quorum
>     * Last updated: Fri Aug 27 13:19:13 2021
>     * Last change:  Fri Aug 27 13:18:06 2021 by root via cibadmin on virt-543
>     * 2 nodes configured
>     * 4 resource instances configured

>   Node List:
>     * Online: [ virt-542 virt-543 ]

>   Full List of Resources:
>     * fence-virt-542	(stonith:fence_xvm):	 Started virt-542
>     * fence-virt-543	(stonith:fence_xvm):	 Started virt-543
>     * Clone Set: state_1-clone [state_1] (promotable):
>       * Masters: [ virt-542 ]
>       * Slaves: [ virt-543 ]

>   Node Attributes:
>     * Node: virt-542:
>       * master-state_1                  	: 10        
>     * Node: virt-543:
>       * master-state_1                  	: 5         

>   Migration Summary:
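
    The same option can also query or remove the score (hedged sketch, using crm_attribute's standard
    --query/--delete operations):

        crm_attribute --promotion state_1 --query --node virt-542
        crm_attribute --promotion state_1 --delete --node virt-542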

    Bypass the cluster and check the state of the resource on the local node. The operation should return the new exit status "promoted":

>   [root@virt-543 ~]# crm_resource --resource state_1 --force-check
>   Operation force-check for state_1 (ocf:pacemaker:Stateful) returned: 'promoted' (8)
>   crm_resource: Error performing operation: Unknown exit status



8. A variety of public C APIs were deprecated and replaced (code using the old APIs will continue to work).

    Documented in https://wiki.clusterlabs.org/wiki/Pacemaker_2.1_Changes#Public_C_API_changes



9. Supporting OCF 1.1 reloadable parameters

    OCF 1.1 introduced the `reload-agent` action and the `reloadable` parameter attribute for use by Pacemaker.
    The `unique` agent meta-data field has been deprecated in favor of two new fields, `unique-group` and `reloadable`.

    Check agent meta-data for the `unique-group` and `reloadable` parameter attributes:

>   [root@virt-542 ~]# crm_resource --show-metadata=ocf:pacemaker:Dummy | grep unique-group
>   <parameter name="state" unique-group="state">

>   [root@virt-542 ~]# crm_resource --show-metadata=ocf:pacemaker:Stateful | grep unique-group
>   <parameter name="state" unique-group="state">

>   [root@virt-542 ~]# crm_resource --show-metadata=ocf:pacemaker:remote | grep unique-group
>       <parameter name="server" unique-group="address">
>       <parameter name="port" unique-group="address">

>   [root@virt-542 ~]# crm_resource --show-metadata=ocf:pacemaker:Dummy | grep reloadable
>   <parameter name="passwd" reloadable="1">
>   <parameter name="fake" reloadable="1">
>   <parameter name="op_sleep" reloadable="1">
>   <parameter name="fail_start_on" reloadable="1">
>   <parameter name="envfile" reloadable="1">

>   [root@virt-542 ~]# crm_resource --show-metadata=ocf:pacemaker:Stateful | grep reloadable
>   <parameter name="envfile" reloadable="true">
>   <parameter name="notify_delay" reloadable="true">

>   [root@virt-542 ~]# crm_resource --show-metadata=ocf:pacemaker:remote | grep reloadable
>       <parameter name="reconnect_interval" reloadable="1">


    Check agent meta-data for the `reload-agent` action:

>   [root@virt-542 ~]# crm_resource --show-metadata=ocf:pacemaker:Dummy | grep reload-agent
>   Start, migrate_from, and reload-agent actions will return failure if running on
>   <action name="reload-agent" timeout="20s" />

>   [root@virt-542 ~]# crm_resource --show-metadata=ocf:pacemaker:Stateful | grep reload-agent
>   <action name="reload-agent"  timeout="10s" />

>   [root@virt-542 ~]# crm_resource --show-metadata=ocf:pacemaker:remote | grep reload-agent
>       <action name="reload-agent"  timeout="60s" />
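
    As a hedged usage sketch (resource "dummy" from step 1; "fake" is one of the reloadable parameters listed above),
    updating only a reloadable parameter should make Pacemaker run the `reload-agent` action instead of a full restart:

        # Update a reloadable parameter; expect a reload-agent, not a stop/start
        pcs resource update dummy fake=hello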



10. New environment variable `OCF_OUTPUT_FORMAT`:

>   [root@virt-542 ~]# pcs resource debug-monitor --full dummy | grep -i OCF_OUTPUT_FORMAT
>   OCF_OUTPUT_FORMAT=text

    More testing done in bz1644628#c32



11. New environment variable `OCF_CHECK_LEVEL`:

    More testing done in bz1955792#c11 and bz1644628#c32



12. Agent exit statuses have been clarified and expanded.
    
    Documented in https://wiki.clusterlabs.org/wiki/Update_Resource_Agent_for_OCF_1.1#Exit_statuses

    Tested also in bz1644628#c32




Marking verified in pacemaker-2.1.0-6.el8.

Comment 14 errata-xmlrpc 2021-11-09 18:44:49 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (pacemaker bug fix and enhancement update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2021:4267

