Bug 2218218 - Cluster does not move resource group when colocation constraint exists for individual group member
Summary: Cluster does not move resource group when colocation constraint exists for individual group member
Keywords:
Status: VERIFIED
Alias: None
Product: Red Hat Enterprise Linux 9
Classification: Red Hat
Component: pacemaker
Version: 9.3
Hardware: Unspecified
OS: Unspecified
Priority: urgent
Severity: urgent
Target Milestone: rc
Target Release: ---
Assignee: Ken Gaillot
QA Contact: cluster-qe
URL:
Whiteboard:
Depends On:
Blocks: 2218232
 
Reported: 2023-06-28 13:15 UTC by Patrik Hagara
Modified: 2023-08-10 15:41 UTC
CC List: 2 users

Fixed In Version: pacemaker-2.1.6-4.el9
Doc Type: Bug Fix
Doc Text:
Cause: When assigning groups to a node, Pacemaker did not consider constraints that were configured explicitly with a group member instead of the group itself.
Consequence: A group could be assigned to a node where some of its members were unable to run.
Fix: Pacemaker now considers member colocations when assigning groups.
Result: Groups run on the best available node.
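
For illustration, the constraint shape described above is a colocation that names a group member directly instead of the group. A minimal sketch in CIB XML (standard rsc_colocation syntax; the id matches the one pcs generates in the verification steps below, where "blue2" is later added to "green-group"):

>   <rsc_colocation id="colocation-blue2-blue1-INFINITY"
>                   rsc="blue2" with-rsc="blue1" score="INFINITY"/>

Because the constraint references the member "blue2" rather than "green-group" itself, this is exactly the case the fix makes group assignment take into account.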
Clone Of:
: 2218232
Environment:
Last Closed:
Type: Bug
Target Upstream Version: 2.1.7
Embargoed:




Links:
Red Hat Issue Tracker CLUSTERQE-6793 (last updated 2023-06-28 15:16:21 UTC)
Red Hat Issue Tracker RHELPLAN-161084 (last updated 2023-06-28 13:16:44 UTC)

Comment 4 Markéta Smazová 2023-07-25 14:28:43 UTC
after fix:
----------

>   [root@virt-511 ~]# rpm -q pacemaker
>   pacemaker-2.1.6-5.el9.x86_64

Create two colocated and ordered resources:
>   [root@virt-511 ~]# pcs resource create blue1 ocf:pacemaker:Dummy
>   [root@virt-511 ~]# pcs resource create blue2 ocf:pacemaker:Dummy
>   [root@virt-511 ~]# pcs constraint order start blue1 then blue2
>   Adding blue1 blue2 (kind: Mandatory) (Options: first-action=start then-action=start)
>   [root@virt-511 ~]# pcs constraint colocation add blue2 with blue1 score=INFINITY

Create group with two resources:
>   [root@virt-511 ~]# pcs resource create green1 ocf:pacemaker:Dummy --group green-group
>   [root@virt-511 ~]# pcs resource create green2 ocf:pacemaker:Dummy --group green-group

The colocated and ordered resources "blue1" and "blue2" run on node "virt-511", and the group "green-group"
runs on node "virt-513":
>   [root@virt-511 ~]# pcs status
>   Cluster name: STSRHTS2474
>   Cluster Summary:
>     * Stack: corosync (Pacemaker is running)
>     * Current DC: virt-511 (version 2.1.6-5.el9-6fdc9deea29) - partition with quorum
>     * Last updated: Tue Jul 25 14:16:40 2023 on virt-511
>     * Last change:  Tue Jul 25 14:16:15 2023 by root via cibadmin on virt-511
>     * 2 nodes configured
>     * 6 resource instances configured

>   Node List:
>     * Online: [ virt-511 virt-513 ]

>   Full List of Resources:
>     * fence-virt-511	(stonith:fence_xvm):	 Started virt-511
>     * fence-virt-513	(stonith:fence_xvm):	 Started virt-513
>     * blue1	(ocf:pacemaker:Dummy):	 Started virt-511
>     * blue2	(ocf:pacemaker:Dummy):	 Started virt-511
>     * Resource Group: green-group:
>       * green1	(ocf:pacemaker:Dummy):	 Started virt-513
>       * green2	(ocf:pacemaker:Dummy):	 Started virt-513

>   Daemon Status:
>     corosync: active/enabled
>     pacemaker: active/enabled
>     pcsd: active/enabled

>   [root@virt-511 ~]# pcs constraint --full
>   Colocation Constraints:
>     resource 'blue2' with resource 'blue1' (id: colocation-blue2-blue1-INFINITY)
>       score=INFINITY
>   Order Constraints:
>     start resource 'blue1' then start resource 'blue2' (id: order-blue1-blue2-mandatory)

Add resource "blue2" to the group "green-group". Since "blue2" must be colocated with "blue1" (currently running on "virt-511"), the whole group is now expected to move to "virt-511":
>   [root@virt-511 ~]# pcs resource group add green-group blue2
>   [root@virt-511 ~]# pcs status
>   Cluster name: STSRHTS2474
>   Cluster Summary:
>     * Stack: corosync (Pacemaker is running)
>     * Current DC: virt-511 (version 2.1.6-5.el9-6fdc9deea29) - partition with quorum
>     * Last updated: Tue Jul 25 14:17:11 2023 on virt-511
>     * Last change:  Tue Jul 25 14:17:01 2023 by root via cibadmin on virt-511
>     * 2 nodes configured
>     * 6 resource instances configured

>   Node List:
>     * Online: [ virt-511 virt-513 ]

>   Full List of Resources:
>     * fence-virt-511	(stonith:fence_xvm):	 Started virt-511
>     * fence-virt-513	(stonith:fence_xvm):	 Started virt-513
>     * blue1	(ocf:pacemaker:Dummy):	 Started virt-511
>     * Resource Group: green-group:
>       * green1	(ocf:pacemaker:Dummy):	 Started virt-511
>       * green2	(ocf:pacemaker:Dummy):	 Started virt-511
>       * blue2	(ocf:pacemaker:Dummy):	 Started virt-511

>   Daemon Status:
>     corosync: active/enabled
>     pacemaker: active/enabled
>     pcsd: active/enabled

RESULT: Resource group "green-group" moved to node "virt-511", where resources "blue1" and "blue2" were originally started.
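
An additional (hypothetical, not performed in this verification) way to confirm the scheduler's reasoning would be to display its allocation scores, which with the fix should reflect the "blue2 with blue1" colocation when placing "green-group":

>   [root@virt-511 ~]# crm_simulate --live-check --show-scores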

marking VERIFIED in pacemaker-2.1.6-5.el9

