Bug 1760669 - Colocation constraint scores on resource groups do not work correctly with default resource-stickiness
Summary: Colocation constraint scores on resource groups do not work correctly with default resource-stickiness
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: pacemaker
Version: 7.7
Hardware: All
OS: Linux
Priority: high
Severity: medium
Target Milestone: rc
Target Release: 7.9
Assignee: Ken Gaillot
QA Contact: cluster-qe@redhat.com
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2019-10-11 05:29 UTC by Reid Wahl
Modified: 2023-09-07 20:47 UTC
CC List: 3 users

Fixed In Version: pacemaker-1.1.22-1.el7
Doc Type: Bug Fix
Doc Text:
Cause: When a group was colocated with some resource, that resource would incorporate the node allocation scores of each member of the group when being allocated to a node itself. However, each group member would do the same for its own dependents, which include all later members of the group.
Consequence: Allocation scores of later group members were counted multiple times, distorting the relative weight of the colocation score and potentially resulting in less desirable placement of resources (such as moving the main resource to the node of the group that depends on it).
Fix: A resource that has a group colocated with it now incorporates the allocation scores of only the first member, because that first member already incorporates the scores of all later members.
Result: Non-infinite colocation scores now have the desired effect relative to other scores such as resource stickiness.
Clone Of:
Environment:
Last Closed: 2020-09-29 20:03:57 UTC
Target Upstream Version:
Embargoed:


Attachments: none


Links
System                              ID               Last Updated
Red Hat Knowledge Base (Solution)   4493731          2019-10-11 05:56:53 UTC
Red Hat Product Errata              RHEA-2020:3951   2020-09-29 20:04:18 UTC

Description Reid Wahl 2019-10-11 05:29:12 UTC
Description of problem:

When a default resource-stickiness is configured, colocation constraint scores on resource groups are not considered correctly when making placement decisions.

Given a `dummy2 with dummy1` constraint, after a certain number of resources are present in each group, dummy1 moves to dummy2's location (NOT the other way around). This happens even when the colocation score is less than the default resource-stickiness.
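
For reference, the constraint as stored in the CIB can be inspected with cibadmin (a sketch; the constraint id below is the one pcs generates here, as `pcs constraint --full` shows later in this report):

cibadmin --query --scope constraints
# expected to include a line like:
# <rsc_colocation id="colocation-dummy2-dummy1-0" rsc="dummy2" with-rsc="dummy1" score="0"/>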

The issue does not seem to be reproducible without a default resource-stickiness configured, and is not reproducible with location constraints; only with colocation.

There are three test scenarios below.


### BEGIN no resource-stickiness ###

[root@node1_hb ~]# pcs resource defaults
No defaults set
[root@node1_hb ~]# for i in {1,2}; do pcs resource create dummy${i}a ocf:heartbeat:Dummy --group dummy${i}; done
[root@node1_hb ~]# pcs constraint colocation add dummy2 with dummy1 0
[root@node1_hb ~]# pcs constraint show
Location Constraints:
Ordering Constraints:
Colocation Constraints:
  dummy2 with dummy1 (score:0)
Ticket Constraints:
[root@node1_hb ~]# for i in {b..d}; do for j in {1,2}; do pcs resource create dummy${j}${i} ocf:heartbeat:Dummy --group dummy${j}; done; sleep 2; crm_simulate -Ls; done
... Skipping repetitive output ...
 Resource Group: dummy1
     dummy1a    (ocf::heartbeat:Dummy): Started node2_hb
     dummy1b    (ocf::heartbeat:Dummy): Started node2_hb
     dummy1c    (ocf::heartbeat:Dummy): Started node2_hb
     dummy1d    (ocf::heartbeat:Dummy): Started node2_hb
 Resource Group: dummy2
     dummy2a    (ocf::heartbeat:Dummy): Started node1_hb
     dummy2b    (ocf::heartbeat:Dummy): Started node1_hb
     dummy2c    (ocf::heartbeat:Dummy): Started node1_hb
     dummy2d    (ocf::heartbeat:Dummy): Started node1_hb

Allocation scores:
native_color: kdump allocation score on node1_hb: 0
native_color: kdump allocation score on node2_hb: 0
group_color: dummy1 allocation score on node1_hb: 0
group_color: dummy1 allocation score on node2_hb: 0
group_color: dummy1a allocation score on node1_hb: 0
group_color: dummy1a allocation score on node2_hb: 0
group_color: dummy1b allocation score on node1_hb: 0
group_color: dummy1b allocation score on node2_hb: 0
group_color: dummy1c allocation score on node1_hb: 0
group_color: dummy1c allocation score on node2_hb: 0
group_color: dummy1d allocation score on node1_hb: 0
group_color: dummy1d allocation score on node2_hb: 0
native_color: dummy1a allocation score on node1_hb: 0
native_color: dummy1a allocation score on node2_hb: 0
native_color: dummy1b allocation score on node1_hb: -INFINITY
native_color: dummy1b allocation score on node2_hb: 0
native_color: dummy1c allocation score on node1_hb: -INFINITY
native_color: dummy1c allocation score on node2_hb: 0
native_color: dummy1d allocation score on node1_hb: -INFINITY
native_color: dummy1d allocation score on node2_hb: 0
group_color: dummy2 allocation score on node1_hb: 0
group_color: dummy2 allocation score on node2_hb: 0
group_color: dummy2a allocation score on node1_hb: 0
group_color: dummy2a allocation score on node2_hb: 0
group_color: dummy2b allocation score on node1_hb: 0
group_color: dummy2b allocation score on node2_hb: 0
group_color: dummy2c allocation score on node1_hb: 0
group_color: dummy2c allocation score on node2_hb: 0
group_color: dummy2d allocation score on node1_hb: 0
group_color: dummy2d allocation score on node2_hb: 0
native_color: dummy2a allocation score on node1_hb: 0
native_color: dummy2a allocation score on node2_hb: 0
native_color: dummy2b allocation score on node1_hb: 0
native_color: dummy2b allocation score on node2_hb: -INFINITY
native_color: dummy2c allocation score on node1_hb: 0
native_color: dummy2c allocation score on node2_hb: -INFINITY
native_color: dummy2d allocation score on node1_hb: 0
native_color: dummy2d allocation score on node2_hb: -INFINITY

### END no resource-stickiness ###


### BEGIN resource-stickiness=1000 and colocation constraint score=0 ###

[root@node1_hb ~]# pcs resource defaults
resource-stickiness=1000
[root@node1_hb ~]# for i in {1,2}; do pcs resource create dummy${i}a ocf:heartbeat:Dummy --group dummy${i}; done
[root@node1_hb ~]# pcs constraint colocation add dummy2 with dummy1 0
[root@node1_hb ~]# pcs constraint --full
Location Constraints:
Ordering Constraints:
Colocation Constraints:
  dummy2 with dummy1 (score:0) (id:colocation-dummy2-dummy1-0)
Ticket Constraints:
[root@node1_hb ~]# for i in {b..d}; do for j in {1,2}; do pcs resource create dummy${j}${i} ocf:heartbeat:Dummy --group dummy${j}; done; sleep 2; crm_simulate -Ls; done

Current cluster status:
Online: [ node1_hb node2_hb ]

 kdump  (stonith:fence_kdump):  Started node1_hb
 Resource Group: dummy1
     dummy1a    (ocf::heartbeat:Dummy): Started node2_hb
     dummy1b    (ocf::heartbeat:Dummy): Started node2_hb
 Resource Group: dummy2
     dummy2a    (ocf::heartbeat:Dummy): Started node1_hb
     dummy2b    (ocf::heartbeat:Dummy): Started node1_hb

Allocation scores:
native_color: kdump allocation score on node1_hb: 1000
native_color: kdump allocation score on node2_hb: 0
group_color: dummy1 allocation score on node1_hb: 0
group_color: dummy1 allocation score on node2_hb: 0
group_color: dummy1a allocation score on node1_hb: 0
group_color: dummy1a allocation score on node2_hb: 1000
group_color: dummy1b allocation score on node1_hb: 0
group_color: dummy1b allocation score on node2_hb: 1000
native_color: dummy1a allocation score on node1_hb: 1000
native_color: dummy1a allocation score on node2_hb: 2000
native_color: dummy1b allocation score on node1_hb: -INFINITY
native_color: dummy1b allocation score on node2_hb: 1000
group_color: dummy2 allocation score on node1_hb: 0
group_color: dummy2 allocation score on node2_hb: 0
group_color: dummy2a allocation score on node1_hb: 1000
group_color: dummy2a allocation score on node2_hb: 0
group_color: dummy2b allocation score on node1_hb: 1000
group_color: dummy2b allocation score on node2_hb: 0
native_color: dummy2a allocation score on node1_hb: 2000
native_color: dummy2a allocation score on node2_hb: 0
native_color: dummy2b allocation score on node1_hb: 1000
native_color: dummy2b allocation score on node2_hb: -INFINITY

Transition Summary:

Current cluster status:
Online: [ node1_hb node2_hb ]

 kdump  (stonith:fence_kdump):  Started node1_hb
 Resource Group: dummy1
     dummy1a    (ocf::heartbeat:Dummy): Started node2_hb
     dummy1b    (ocf::heartbeat:Dummy): Started node2_hb
     dummy1c    (ocf::heartbeat:Dummy): Started node2_hb
 Resource Group: dummy2
     dummy2a    (ocf::heartbeat:Dummy): Started node1_hb
     dummy2b    (ocf::heartbeat:Dummy): Started node1_hb
     dummy2c    (ocf::heartbeat:Dummy): Started node1_hb

Allocation scores:
native_color: kdump allocation score on node1_hb: 1000
native_color: kdump allocation score on node2_hb: 0
group_color: dummy1 allocation score on node1_hb: 0
group_color: dummy1 allocation score on node2_hb: 0
group_color: dummy1a allocation score on node1_hb: 0
group_color: dummy1a allocation score on node2_hb: 1000
group_color: dummy1b allocation score on node1_hb: 0
group_color: dummy1b allocation score on node2_hb: 1000
group_color: dummy1c allocation score on node1_hb: 0
group_color: dummy1c allocation score on node2_hb: 1000
native_color: dummy1a allocation score on node1_hb: 3000
native_color: dummy1a allocation score on node2_hb: 3000
native_color: dummy1b allocation score on node1_hb: -INFINITY
native_color: dummy1b allocation score on node2_hb: 2000
native_color: dummy1c allocation score on node1_hb: -INFINITY
native_color: dummy1c allocation score on node2_hb: 1000
group_color: dummy2 allocation score on node1_hb: 0
group_color: dummy2 allocation score on node2_hb: 0
group_color: dummy2a allocation score on node1_hb: 1000
group_color: dummy2a allocation score on node2_hb: 0
group_color: dummy2b allocation score on node1_hb: 1000
group_color: dummy2b allocation score on node2_hb: 0
group_color: dummy2c allocation score on node1_hb: 1000
group_color: dummy2c allocation score on node2_hb: 0
native_color: dummy2a allocation score on node1_hb: 3000
native_color: dummy2a allocation score on node2_hb: 0
native_color: dummy2b allocation score on node1_hb: 2000
native_color: dummy2b allocation score on node2_hb: -INFINITY
native_color: dummy2c allocation score on node1_hb: 1000
native_color: dummy2c allocation score on node2_hb: -INFINITY

Transition Summary:

Current cluster status:
Online: [ node1_hb node2_hb ]

 kdump  (stonith:fence_kdump):  Started node1_hb
 Resource Group: dummy1
     dummy1a    (ocf::heartbeat:Dummy): Started node2_hb
     dummy1b    (ocf::heartbeat:Dummy): Started node2_hb
     dummy1c    (ocf::heartbeat:Dummy): Started node2_hb
     dummy1d    (ocf::heartbeat:Dummy): Started node2_hb
 Resource Group: dummy2
     dummy2a    (ocf::heartbeat:Dummy): Started node1_hb
     dummy2b    (ocf::heartbeat:Dummy): Started node1_hb
     dummy2c    (ocf::heartbeat:Dummy): Started node1_hb
     dummy2d    (ocf::heartbeat:Dummy): Started node1_hb

Allocation scores:
native_color: kdump allocation score on node1_hb: 1000
native_color: kdump allocation score on node2_hb: 0
group_color: dummy1 allocation score on node1_hb: 0
group_color: dummy1 allocation score on node2_hb: 0
group_color: dummy1a allocation score on node1_hb: 0
group_color: dummy1a allocation score on node2_hb: 1000
group_color: dummy1b allocation score on node1_hb: 0
group_color: dummy1b allocation score on node2_hb: 1000
group_color: dummy1c allocation score on node1_hb: 0
group_color: dummy1c allocation score on node2_hb: 1000
group_color: dummy1d allocation score on node1_hb: 0
group_color: dummy1d allocation score on node2_hb: 1000
native_color: dummy1a allocation score on node1_hb: 6000
native_color: dummy1a allocation score on node2_hb: 4000
native_color: dummy1b allocation score on node1_hb: 6000
native_color: dummy1b allocation score on node2_hb: -INFINITY
native_color: dummy1c allocation score on node1_hb: 6000
native_color: dummy1c allocation score on node2_hb: -INFINITY
native_color: dummy1d allocation score on node1_hb: 3000
native_color: dummy1d allocation score on node2_hb: -INFINITY
group_color: dummy2 allocation score on node1_hb: 0
group_color: dummy2 allocation score on node2_hb: 0
group_color: dummy2a allocation score on node1_hb: 1000
group_color: dummy2a allocation score on node2_hb: 0
group_color: dummy2b allocation score on node1_hb: 1000
group_color: dummy2b allocation score on node2_hb: 0
group_color: dummy2c allocation score on node1_hb: 1000
group_color: dummy2c allocation score on node2_hb: 0
group_color: dummy2d allocation score on node1_hb: 1000
group_color: dummy2d allocation score on node2_hb: 0
native_color: dummy2a allocation score on node1_hb: 4000
native_color: dummy2a allocation score on node2_hb: 0
native_color: dummy2b allocation score on node1_hb: 3000
native_color: dummy2b allocation score on node2_hb: -INFINITY
native_color: dummy2c allocation score on node1_hb: 2000
native_color: dummy2c allocation score on node2_hb: -INFINITY
native_color: dummy2d allocation score on node1_hb: 1000
native_color: dummy2d allocation score on node2_hb: -INFINITY

Transition Summary:
 * Move       dummy1a     ( node2_hb -> node1_hb )  
 * Move       dummy1b     ( node2_hb -> node1_hb )  
 * Move       dummy1c     ( node2_hb -> node1_hb )  
 * Move       dummy1d     ( node2_hb -> node1_hb )  

### END resource-stickiness=1000 and colocation constraint score=0 ###


### BEGIN resource-stickiness=1000 and colocation constraint score=2000 ###
## In this test, I waited until four resources were in each group before adding the colocation constraint.
## Adding the colocation constraint before each group has four resources causes the resources to be
## colocated properly according to the score instead of triggering the bug.

[root@node1_hb ~]# pcs resource defaults 
resource-stickiness=1000

[root@node1_hb ~]# crm_simulate -Ls
...
 kdump  (stonith:fence_kdump):  Started node1_hb
 Resource Group: dummy1
     dummy1a    (ocf::heartbeat:Dummy): Started node2_hb
     dummy1b    (ocf::heartbeat:Dummy): Started node2_hb
     dummy1c    (ocf::heartbeat:Dummy): Started node2_hb
     dummy1d    (ocf::heartbeat:Dummy): Started node2_hb
 Resource Group: dummy2
     dummy2a    (ocf::heartbeat:Dummy): Started node1_hb
     dummy2b    (ocf::heartbeat:Dummy): Started node1_hb
     dummy2c    (ocf::heartbeat:Dummy): Started node1_hb
     dummy2d    (ocf::heartbeat:Dummy): Started node1_hb

Allocation scores:
native_color: kdump allocation score on node1_hb: 1000
native_color: kdump allocation score on node2_hb: 0
group_color: dummy1 allocation score on node1_hb: 0
group_color: dummy1 allocation score on node2_hb: 0
group_color: dummy1a allocation score on node1_hb: 0
group_color: dummy1a allocation score on node2_hb: 1000
group_color: dummy1b allocation score on node1_hb: 0
group_color: dummy1b allocation score on node2_hb: 1000
group_color: dummy1c allocation score on node1_hb: 0
group_color: dummy1c allocation score on node2_hb: 1000
group_color: dummy1d allocation score on node1_hb: 0
group_color: dummy1d allocation score on node2_hb: 1000
native_color: dummy1a allocation score on node1_hb: 0
native_color: dummy1a allocation score on node2_hb: 4000
native_color: dummy1b allocation score on node1_hb: -INFINITY
native_color: dummy1b allocation score on node2_hb: 3000
native_color: dummy1c allocation score on node1_hb: -INFINITY
native_color: dummy1c allocation score on node2_hb: 2000
native_color: dummy1d allocation score on node1_hb: -INFINITY
native_color: dummy1d allocation score on node2_hb: 1000
group_color: dummy2 allocation score on node1_hb: 0
group_color: dummy2 allocation score on node2_hb: 0
group_color: dummy2a allocation score on node1_hb: 1000
group_color: dummy2a allocation score on node2_hb: 0
group_color: dummy2b allocation score on node1_hb: 1000
group_color: dummy2b allocation score on node2_hb: 0
group_color: dummy2c allocation score on node1_hb: 1000
group_color: dummy2c allocation score on node2_hb: 0
group_color: dummy2d allocation score on node1_hb: 1000
group_color: dummy2d allocation score on node2_hb: 0
native_color: dummy2a allocation score on node1_hb: 4000
native_color: dummy2a allocation score on node2_hb: 0
native_color: dummy2b allocation score on node1_hb: 3000
native_color: dummy2b allocation score on node2_hb: -INFINITY
native_color: dummy2c allocation score on node1_hb: 2000
native_color: dummy2c allocation score on node2_hb: -INFINITY
native_color: dummy2d allocation score on node1_hb: 1000
native_color: dummy2d allocation score on node2_hb: -INFINITY

Transition Summary:

[root@node1_hb ~]# pcs constraint colocation add dummy2 with dummy1 2000
[root@node1_hb ~]# crm_simulate -Ls
...
 Resource Group: dummy1
     dummy1a    (ocf::heartbeat:Dummy): Started node1_hb
     dummy1b    (ocf::heartbeat:Dummy): Started node1_hb
     dummy1c    (ocf::heartbeat:Dummy): Started node1_hb
     dummy1d    (ocf::heartbeat:Dummy): Started node1_hb
 Resource Group: dummy2
     dummy2a    (ocf::heartbeat:Dummy): Started node1_hb
     dummy2b    (ocf::heartbeat:Dummy): Started node1_hb
     dummy2c    (ocf::heartbeat:Dummy): Started node1_hb
     dummy2d    (ocf::heartbeat:Dummy): Started node1_hb

Allocation scores:
native_color: kdump allocation score on node1_hb: 1000
native_color: kdump allocation score on node2_hb: 0
group_color: dummy1 allocation score on node1_hb: 0
group_color: dummy1 allocation score on node2_hb: 0
group_color: dummy1a allocation score on node1_hb: 1000
group_color: dummy1a allocation score on node2_hb: 0
group_color: dummy1b allocation score on node1_hb: 1000
group_color: dummy1b allocation score on node2_hb: 0
group_color: dummy1c allocation score on node1_hb: 1000
group_color: dummy1c allocation score on node2_hb: 0
group_color: dummy1d allocation score on node1_hb: 1000
group_color: dummy1d allocation score on node2_hb: 0
native_color: dummy1a allocation score on node1_hb: 10006
native_color: dummy1a allocation score on node2_hb: 0
native_color: dummy1b allocation score on node1_hb: 9006
native_color: dummy1b allocation score on node2_hb: -INFINITY
native_color: dummy1c allocation score on node1_hb: 8006
native_color: dummy1c allocation score on node2_hb: -INFINITY
native_color: dummy1d allocation score on node1_hb: 4002
native_color: dummy1d allocation score on node2_hb: -INFINITY
group_color: dummy2 allocation score on node1_hb: 0
group_color: dummy2 allocation score on node2_hb: 0
group_color: dummy2a allocation score on node1_hb: 1000
group_color: dummy2a allocation score on node2_hb: 0
group_color: dummy2b allocation score on node1_hb: 1000
group_color: dummy2b allocation score on node2_hb: 0
group_color: dummy2c allocation score on node1_hb: 1000
group_color: dummy2c allocation score on node2_hb: 0
group_color: dummy2d allocation score on node1_hb: 1000
group_color: dummy2d allocation score on node2_hb: 0
native_color: dummy2a allocation score on node1_hb: 6000
native_color: dummy2a allocation score on node2_hb: 0
native_color: dummy2b allocation score on node1_hb: 3000
native_color: dummy2b allocation score on node2_hb: -INFINITY
native_color: dummy2c allocation score on node1_hb: 2000
native_color: dummy2c allocation score on node2_hb: -INFINITY
native_color: dummy2d allocation score on node1_hb: 1000
native_color: dummy2d allocation score on node2_hb: -INFINITY

Transition Summary:

### END resource-stickiness=1000 and colocation constraint score=2000 ###


-----

Version-Release number of selected component (if applicable):

pacemaker-1.1.20-5.el7_7.1.x86_64
pcs-0.9.167-3.el7_7.1.x86_64

-----

How reproducible:

Always

-----

Steps to Reproduce:
1. Configure a default resource-stickiness value.
2. Configure two resource groups (`dummy1` and `dummy2`) with at least four resources each. Place them so that they are initially on different nodes.
3. Configure a colocation constraint between the two groups (`dummy2 with dummy1`), with a non-INFINITY score. (A consolidated command sketch follows this list.)
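
A consolidated sketch of these steps, using the same resource and group names as the transcripts above (on a two-node cluster, pacemaker's default placement typically spreads the two groups across the nodes, as seen in the transcripts):

# 1. Default stickiness
pcs resource defaults resource-stickiness=1000
# 2. Two groups of four Dummy resources each
for i in {a..d}; do
    for j in 1 2; do
        pcs resource create dummy${j}${i} ocf:heartbeat:Dummy --group dummy${j}
    done
done
# 3. Non-INFINITY colocation score, lower than each group's cumulative stickiness (4 * 1000)
pcs constraint colocation add dummy2 with dummy1 2000
# Check placement and allocation scores:
crm_simulate -Ls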

-----

Actual results:

If the constraint is `dummy2 with dummy1`, the dummy1 group moves to the node where dummy2 is located.

-----

Expected results:

If the constraint score is lower than the resource-stickiness score, the resources should not move.

If the constraint score is higher than the resource-stickiness score, `dummy2` should move to the node where `dummy1` is located.

-----

Additional info:

This is a customer-reported issue. They have nine resources per group. So far we haven't come up with a workaround except to use INFINITY constraints, which is not what the customer wants.

Comment 4 Ken Gaillot 2019-11-04 21:36:48 UTC
Investigation for a fix is ongoing, but the basic issue is that when pacemaker places the members of the first group, it internally applies the configured colocation for each member of the secondary group, and each of those colocations incorporates the internal colocation between each member of the secondary group, and this adds up to more than the first group's stickiness.

There is a workaround by manipulating the scores. One factor to consider is that a group's stickiness is the sum of the stickiness of all its members, so in the example with resource-stickiness=1000 and four members per group, each group's stickiness is 4000, not 1000. For that example, keeping the default stickiness at 1000, if you configure the secondary group members' stickiness to be 600, and the colocation score to be above 2400, then the secondary group will be moved to the primary group's location.
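
A sketch of that workaround as pcs commands (values taken from this comment; `pcs resource meta` sets a per-resource meta attribute, and any existing colocation constraint between the groups would need to be removed or updated first):

# keep the default stickiness at 1000
pcs resource defaults resource-stickiness=1000
# lower each secondary-group member's stickiness to 600 (group total: 2400)
for m in a b c d; do pcs resource meta dummy2${m} resource-stickiness=600; done
# a colocation score above 2400 will then move dummy2 to dummy1's node
pcs constraint colocation add dummy2 with dummy1 2500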

Comment 5 Ken Gaillot 2019-11-05 21:54:16 UTC
More detail about the underlying cause:

When pacemaker places a resource, it considers not only colocation constraints that directly involve it, but also colocation constraints that indirectly involve it via a chain of resources. For example, if rsc1 is colocated with rsc2, and rsc2 is colocated with rsc3, then rsc1 will take into account the situation of both rsc2 and rsc3. Groups are implemented as implicit ordering and colocation constraints between the members.
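
As a hypothetical illustration of such a chain (rsc1/rsc2/rsc3 are the placeholder names from the sentence above):

pcs constraint colocation add rsc1 with rsc2 2000
pcs constraint colocation add rsc2 with rsc3 2000
# When placing rsc1, pacemaker also weighs rsc3's node preferences
# (attenuated, as described below) via the rsc1 -> rsc2 -> rsc3 chain.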

Pacemaker places a lesser value on the indirect relationships. For example, given a colocation score of 2000, and an indirectly related resource with a stickiness of 1000, the indirect effect will be stickiness * score / "infinity" = 1000 * 2000 / 1,000,000 = 2.
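
That attenuation can be checked with plain shell arithmetic (a sketch; 1,000,000 is pacemaker's internal value for INFINITY, per the example above):

INFINITY=1000000; STICKINESS=1000; SCORE=2000
echo $(( STICKINESS * SCORE / INFINITY ))    # prints 2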

This situation would normally not occur because of that factor, which is applied when the primary group's first member takes into account the secondary group's first member (due to the "secondary group with primary group" constraint). However, the indirect relationships from the secondary group's internal colocations then get (wrongly) applied at full strength, because the score on those colocations is infinity (thus the multiplication factor for the stickiness is infinity/infinity, or 1).

So, the fix (still in progress) will be to ensure that those indirect relationships get attenuated by the original factor, rather than their intrinsic factor.

Comment 6 Reid Wahl 2019-11-23 02:17:49 UTC
(In reply to Ken Gaillot from comment #4)
> Investigation for a fix is ongoing, but the basic issue is that when
> pacemaker places the members of the first group, it internally applies the
> configured colocation for each member of the secondary group, and each of
> those colocations incorporates the internal colocation between each member
> of the secondary group, and this adds up to more than the first group's
> stickiness.
> 
> There is a workaround by manipulating the scores. One factor to consider is
> that a group's stickiness is the sum of the stickiness of all its members,
> so in the example with resource-stickiness=1000 and four members per group,
> each group's stickiness is 4000, not 1000. For that example, keeping the
> default stickiness at 1000, if you configure the secondary group members'
> stickiness to be 600, and the colocation score to be above 2400, then the
> secondary group will be moved to the primary group's location.

I confirmed that worked with two four-resource groups. I set a default resource-stickiness=1000, set resource-stickiness=600 for each resource in group2, and set a colocation score of 2300 for group2 with group1. As desired, both groups stayed in place. When I set colocation score to 2500, group2 moved to group1's node.

I'm having a hard time finding values that work for nine-member resource groups, as in the customer's cluster. In my testing with nine members per group, group1 moves to group2's node every time no matter what, with a "group2 with group1" constraint.

Comment 7 Reid Wahl 2019-11-23 02:21:53 UTC
(In reply to Reid Wahl from comment #6)
> I'm having a hard time finding values that work for nine-member resource
> groups, as in the customer's cluster. In my testing with nine members per
> group, group1 moves to group2's node every time no matter what, with a
> "group2 with group1" constraint.

Got it. Default resource-stickiness=1000, group2 member resource-stickiness=100, colocation score=800. I don't quite understand how the math works out, but this gets me the desired behavior. I'll see if this works for the customer while a fix is in progress.
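
For the record, those values as a pcs sketch (the member suffixes {a..i} are hypothetical stand-ins for the customer's nine real resource names):

pcs resource defaults resource-stickiness=1000
for m in {a..i}; do pcs resource meta dummy2${m} resource-stickiness=100; done
pcs constraint colocation add dummy2 with dummy1 800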

Comment 10 Ken Gaillot 2020-03-23 17:58:52 UTC
I believe I have a fix, though it still needs thorough vetting for any unintended consequences.

However, it revealed something that might not have been obvious from the original request: with the correct logic, neither dummy1 nor dummy2 should move, since each group's cumulative stickiness of 4000 is higher than the colocation score of 2000. The colocation score would have to be higher than 4000 for dummy2 to move to dummy1's node (with the fix).

Comment 11 Reid Wahl 2020-03-23 18:07:21 UTC
(In reply to Ken Gaillot from comment #10)
> However, it revealed something that might not have been obvious from the
> original request: with the correct logic, neither dummy1 nor dummy2 should
> move, since each group's cumulative stickiness of 4000 is higher than the
> colocation score of 2000. The colocation score would have to be higher than
> 4000 for dummy2 to move to dummy1's node (with the fix).

The correct behavior that you describe was indeed my understanding of the user's goal.

Comment 12 Ken Gaillot 2020-03-27 18:26:46 UTC
The fix has been merged in the upstream 1.1 branch (for RHEL 7.9) via commit 15b0ad5a. It has also been merged in the upstream master branch via commit a7ac273c (which is expected in RHEL 8.3 via rebase).

Comment 15 Patrik Hagara 2020-07-08 16:54:50 UTC
testing with colocation constraint score = 2000, which is lower than the groups' stickiness (4*1000=4000 for a group of 4 resources, where 1000 is the configured default resource stickiness) -- the resources should stay where they are


before
======

> [root@virt-162 ~]# pcs resource defaults resource-stickiness=1000
> Warning: Defaults do not apply to resources which override them with their own defined values
> [root@virt-162 ~]# pcs resource defaults
> resource-stickiness=1000
> [root@virt-162 ~]# for i in {a..d}; do for j in {1,2}; do pcs resource create dummy${j}${i} ocf:heartbeat:Dummy --group dummy${j}; done; done
> [root@virt-162 ~]# crm_simulate -Ls
> 
> Current cluster status:
> Online: [ virt-162 virt-163 virt-164 ]
> 
>  fence-virt-162	(stonith:fence_xvm):	Started virt-162
>  fence-virt-163	(stonith:fence_xvm):	Started virt-163
>  fence-virt-164	(stonith:fence_xvm):	Started virt-164
>  Resource Group: dummy1
>      dummy1a	(ocf::heartbeat:Dummy):	Started virt-162
>      dummy1b	(ocf::heartbeat:Dummy):	Started virt-162
>      dummy1c	(ocf::heartbeat:Dummy):	Started virt-162
>      dummy1d	(ocf::heartbeat:Dummy):	Started virt-162
>  Resource Group: dummy2
>      dummy2a	(ocf::heartbeat:Dummy):	Started virt-163
>      dummy2b	(ocf::heartbeat:Dummy):	Started virt-163
>      dummy2c	(ocf::heartbeat:Dummy):	Started virt-163
>      dummy2d	(ocf::heartbeat:Dummy):	Started virt-163
> 
> Allocation scores:
> native_color: fence-virt-162 allocation score on virt-162: 1000
> native_color: fence-virt-162 allocation score on virt-163: 0
> native_color: fence-virt-162 allocation score on virt-164: 0
> native_color: fence-virt-163 allocation score on virt-162: 0
> native_color: fence-virt-163 allocation score on virt-163: 1000
> native_color: fence-virt-163 allocation score on virt-164: 0
> native_color: fence-virt-164 allocation score on virt-162: 0
> native_color: fence-virt-164 allocation score on virt-163: 0
> native_color: fence-virt-164 allocation score on virt-164: 1000
> group_color: dummy1 allocation score on virt-162: 0
> group_color: dummy1 allocation score on virt-163: 0
> group_color: dummy1 allocation score on virt-164: 0
> group_color: dummy1a allocation score on virt-162: 1000
> group_color: dummy1a allocation score on virt-163: 0
> group_color: dummy1a allocation score on virt-164: 0
> group_color: dummy1b allocation score on virt-162: 1000
> group_color: dummy1b allocation score on virt-163: 0
> group_color: dummy1b allocation score on virt-164: 0
> group_color: dummy1c allocation score on virt-162: 1000
> group_color: dummy1c allocation score on virt-163: 0
> group_color: dummy1c allocation score on virt-164: 0
> group_color: dummy1d allocation score on virt-162: 1000
> group_color: dummy1d allocation score on virt-163: 0
> group_color: dummy1d allocation score on virt-164: 0
> native_color: dummy1a allocation score on virt-162: 4000
> native_color: dummy1a allocation score on virt-163: 0
> native_color: dummy1a allocation score on virt-164: 0
> native_color: dummy1b allocation score on virt-162: 3000
> native_color: dummy1b allocation score on virt-163: -INFINITY
> native_color: dummy1b allocation score on virt-164: -INFINITY
> native_color: dummy1c allocation score on virt-162: 2000
> native_color: dummy1c allocation score on virt-163: -INFINITY
> native_color: dummy1c allocation score on virt-164: -INFINITY
> native_color: dummy1d allocation score on virt-162: 1000
> native_color: dummy1d allocation score on virt-163: -INFINITY
> native_color: dummy1d allocation score on virt-164: -INFINITY
> group_color: dummy2 allocation score on virt-162: 0
> group_color: dummy2 allocation score on virt-163: 0
> group_color: dummy2 allocation score on virt-164: 0
> group_color: dummy2a allocation score on virt-162: 0
> group_color: dummy2a allocation score on virt-163: 1000
> group_color: dummy2a allocation score on virt-164: 0
> group_color: dummy2b allocation score on virt-162: 0
> group_color: dummy2b allocation score on virt-163: 1000
> group_color: dummy2b allocation score on virt-164: 0
> group_color: dummy2c allocation score on virt-162: 0
> group_color: dummy2c allocation score on virt-163: 1000
> group_color: dummy2c allocation score on virt-164: 0
> group_color: dummy2d allocation score on virt-162: 0
> group_color: dummy2d allocation score on virt-163: 1000
> group_color: dummy2d allocation score on virt-164: 0
> native_color: dummy2a allocation score on virt-162: 0
> native_color: dummy2a allocation score on virt-163: 4000
> native_color: dummy2a allocation score on virt-164: 0
> native_color: dummy2b allocation score on virt-162: -INFINITY
> native_color: dummy2b allocation score on virt-163: 3000
> native_color: dummy2b allocation score on virt-164: -INFINITY
> native_color: dummy2c allocation score on virt-162: -INFINITY
> native_color: dummy2c allocation score on virt-163: 2000
> native_color: dummy2c allocation score on virt-164: -INFINITY
> native_color: dummy2d allocation score on virt-162: -INFINITY
> native_color: dummy2d allocation score on virt-163: 1000
> native_color: dummy2d allocation score on virt-164: -INFINITY
> 
> Transition Summary:
> [root@virt-162 ~]# pcs constraint colocation add dummy2 with dummy1 2000
> [root@virt-162 ~]# crm_simulate -Ls
> 
> Current cluster status:
> Online: [ virt-162 virt-163 virt-164 ]
> 
>  fence-virt-162	(stonith:fence_xvm):	Started virt-162
>  fence-virt-163	(stonith:fence_xvm):	Started virt-163
>  fence-virt-164	(stonith:fence_xvm):	Started virt-164
>  Resource Group: dummy1
>      dummy1a	(ocf::heartbeat:Dummy):	Started virt-163
>      dummy1b	(ocf::heartbeat:Dummy):	Started virt-163
>      dummy1c	(ocf::heartbeat:Dummy):	Started virt-163
>      dummy1d	(ocf::heartbeat:Dummy):	Started virt-163 (Monitoring)
>  Resource Group: dummy2
>      dummy2a	(ocf::heartbeat:Dummy):	Started virt-163
>      dummy2b	(ocf::heartbeat:Dummy):	Started virt-163
>      dummy2c	(ocf::heartbeat:Dummy):	Started virt-163
>      dummy2d	(ocf::heartbeat:Dummy):	Started virt-163
> 
> Allocation scores:
> native_color: fence-virt-162 allocation score on virt-162: 1000
> native_color: fence-virt-162 allocation score on virt-163: 0
> native_color: fence-virt-162 allocation score on virt-164: 0
> native_color: fence-virt-163 allocation score on virt-162: 0
> native_color: fence-virt-163 allocation score on virt-163: 1000
> native_color: fence-virt-163 allocation score on virt-164: 0
> native_color: fence-virt-164 allocation score on virt-162: 0
> native_color: fence-virt-164 allocation score on virt-163: 0
> native_color: fence-virt-164 allocation score on virt-164: 1000
> group_color: dummy1 allocation score on virt-162: 0
> group_color: dummy1 allocation score on virt-163: 0
> group_color: dummy1 allocation score on virt-164: 0
> group_color: dummy1a allocation score on virt-162: 0
> group_color: dummy1a allocation score on virt-163: 1000
> group_color: dummy1a allocation score on virt-164: 0
> group_color: dummy1b allocation score on virt-162: 0
> group_color: dummy1b allocation score on virt-163: 1000
> group_color: dummy1b allocation score on virt-164: 0
> group_color: dummy1c allocation score on virt-162: 0
> group_color: dummy1c allocation score on virt-163: 1000
> group_color: dummy1c allocation score on virt-164: 0
> group_color: dummy1d allocation score on virt-162: 0
> group_color: dummy1d allocation score on virt-163: 1000
> group_color: dummy1d allocation score on virt-164: 0
> native_color: dummy1a allocation score on virt-162: 0
> native_color: dummy1a allocation score on virt-163: 10006
> native_color: dummy1a allocation score on virt-164: 0
> native_color: dummy1b allocation score on virt-162: -INFINITY
> native_color: dummy1b allocation score on virt-163: 9006
> native_color: dummy1b allocation score on virt-164: -INFINITY
> native_color: dummy1c allocation score on virt-162: -INFINITY
> native_color: dummy1c allocation score on virt-163: 8006
> native_color: dummy1c allocation score on virt-164: -INFINITY
> native_color: dummy1d allocation score on virt-162: -INFINITY
> native_color: dummy1d allocation score on virt-163: 4002
> native_color: dummy1d allocation score on virt-164: -INFINITY
> group_color: dummy2 allocation score on virt-162: 0
> group_color: dummy2 allocation score on virt-163: 0
> group_color: dummy2 allocation score on virt-164: 0
> group_color: dummy2a allocation score on virt-162: 0
> group_color: dummy2a allocation score on virt-163: 1000
> group_color: dummy2a allocation score on virt-164: 0
> group_color: dummy2b allocation score on virt-162: 0
> group_color: dummy2b allocation score on virt-163: 1000
> group_color: dummy2b allocation score on virt-164: 0
> group_color: dummy2c allocation score on virt-162: 0
> group_color: dummy2c allocation score on virt-163: 1000
> group_color: dummy2c allocation score on virt-164: 0
> group_color: dummy2d allocation score on virt-162: 0
> group_color: dummy2d allocation score on virt-163: 1000
> group_color: dummy2d allocation score on virt-164: 0
> native_color: dummy2a allocation score on virt-162: 0
> native_color: dummy2a allocation score on virt-163: 6000
> native_color: dummy2a allocation score on virt-164: 0
> native_color: dummy2b allocation score on virt-162: -INFINITY
> native_color: dummy2b allocation score on virt-163: 3000
> native_color: dummy2b allocation score on virt-164: -INFINITY
> native_color: dummy2c allocation score on virt-162: -INFINITY
> native_color: dummy2c allocation score on virt-163: 2000
> native_color: dummy2c allocation score on virt-164: -INFINITY
> native_color: dummy2d allocation score on virt-162: -INFINITY
> native_color: dummy2d allocation score on virt-163: 1000
> native_color: dummy2d allocation score on virt-164: -INFINITY
> 
> Transition Summary:
> [root@virt-162 ~]# pcs status
> Cluster name: STSRHTS6104
> Stack: corosync
> Current DC: virt-163 (version 1.1.21-4.el7-f14e36fd43) - partition with quorum
> Last updated: Wed Jul  8 18:37:19 2020
> Last change: Wed Jul  8 18:36:41 2020 by root via cibadmin on virt-162
> 
> 3 nodes configured
> 11 resources configured
> 
> Online: [ virt-162 virt-163 virt-164 ]
> 
> Full list of resources:
> 
>  fence-virt-162	(stonith:fence_xvm):	Started virt-162
>  fence-virt-163	(stonith:fence_xvm):	Started virt-163
>  fence-virt-164	(stonith:fence_xvm):	Started virt-164
>  Resource Group: dummy1
>      dummy1a	(ocf::heartbeat:Dummy):	Started virt-163
>      dummy1b	(ocf::heartbeat:Dummy):	Started virt-163
>      dummy1c	(ocf::heartbeat:Dummy):	Started virt-163
>      dummy1d	(ocf::heartbeat:Dummy):	Started virt-163
>  Resource Group: dummy2
>      dummy2a	(ocf::heartbeat:Dummy):	Started virt-163
>      dummy2b	(ocf::heartbeat:Dummy):	Started virt-163
>      dummy2c	(ocf::heartbeat:Dummy):	Started virt-163
>      dummy2d	(ocf::heartbeat:Dummy):	Started virt-163
> 
> Daemon Status:
>   corosync: active/enabled
>   pacemaker: active/enabled
>   pcsd: active/enabled


result: group dummy1 moved to co-locate with dummy2 even though it shouldn't have. the allocation scores are computed incorrectly.




after
=====

> [root@virt-045 ~]# pcs resource defaults resource-stickiness=1000
> Warning: Defaults do not apply to resources which override them with their own defined values
> [root@virt-045 ~]# pcs resource defaults
> resource-stickiness=1000
> [root@virt-045 ~]# for i in {a..d}; do for j in {1,2}; do pcs resource create dummy${j}${i} ocf:heartbeat:Dummy --group dummy${j}; done; done
> [root@virt-045 ~]# crm_simulate -Ls
> 
> Current cluster status:
> Online: [ virt-045 virt-046 virt-047 ]
> 
>  fence-virt-045	(stonith:fence_xvm):	Started virt-045
>  fence-virt-046	(stonith:fence_xvm):	Started virt-046
>  fence-virt-047	(stonith:fence_xvm):	Started virt-047
>  Resource Group: dummy1
>      dummy1a	(ocf::heartbeat:Dummy):	Started virt-045
>      dummy1b	(ocf::heartbeat:Dummy):	Started virt-045
>      dummy1c	(ocf::heartbeat:Dummy):	Started virt-045
>      dummy1d	(ocf::heartbeat:Dummy):	Started virt-045
>  Resource Group: dummy2
>      dummy2a	(ocf::heartbeat:Dummy):	Started virt-046
>      dummy2b	(ocf::heartbeat:Dummy):	Started virt-046
>      dummy2c	(ocf::heartbeat:Dummy):	Started virt-046
>      dummy2d	(ocf::heartbeat:Dummy):	Started virt-046
> 
> Allocation scores:
> pcmk__native_allocate: fence-virt-045 allocation score on virt-045: 1000
> pcmk__native_allocate: fence-virt-045 allocation score on virt-046: 0
> pcmk__native_allocate: fence-virt-045 allocation score on virt-047: 0
> pcmk__native_allocate: fence-virt-046 allocation score on virt-045: 0
> pcmk__native_allocate: fence-virt-046 allocation score on virt-046: 1000
> pcmk__native_allocate: fence-virt-046 allocation score on virt-047: 0
> pcmk__native_allocate: fence-virt-047 allocation score on virt-045: 0
> pcmk__native_allocate: fence-virt-047 allocation score on virt-046: 0
> pcmk__native_allocate: fence-virt-047 allocation score on virt-047: 1000
> pcmk__group_allocate: dummy1 allocation score on virt-045: 0
> pcmk__group_allocate: dummy1 allocation score on virt-046: 0
> pcmk__group_allocate: dummy1 allocation score on virt-047: 0
> pcmk__group_allocate: dummy1a allocation score on virt-045: 1000
> pcmk__group_allocate: dummy1a allocation score on virt-046: 0
> pcmk__group_allocate: dummy1a allocation score on virt-047: 0
> pcmk__group_allocate: dummy1b allocation score on virt-045: 1000
> pcmk__group_allocate: dummy1b allocation score on virt-046: 0
> pcmk__group_allocate: dummy1b allocation score on virt-047: 0
> pcmk__group_allocate: dummy1c allocation score on virt-045: 1000
> pcmk__group_allocate: dummy1c allocation score on virt-046: 0
> pcmk__group_allocate: dummy1c allocation score on virt-047: 0
> pcmk__group_allocate: dummy1d allocation score on virt-045: 1000
> pcmk__group_allocate: dummy1d allocation score on virt-046: 0
> pcmk__group_allocate: dummy1d allocation score on virt-047: 0
> pcmk__native_allocate: dummy1a allocation score on virt-045: 4000
> pcmk__native_allocate: dummy1a allocation score on virt-046: 0
> pcmk__native_allocate: dummy1a allocation score on virt-047: 0
> pcmk__native_allocate: dummy1b allocation score on virt-045: 3000
> pcmk__native_allocate: dummy1b allocation score on virt-046: -INFINITY
> pcmk__native_allocate: dummy1b allocation score on virt-047: -INFINITY
> pcmk__native_allocate: dummy1c allocation score on virt-045: 2000
> pcmk__native_allocate: dummy1c allocation score on virt-046: -INFINITY
> pcmk__native_allocate: dummy1c allocation score on virt-047: -INFINITY
> pcmk__native_allocate: dummy1d allocation score on virt-045: 1000
> pcmk__native_allocate: dummy1d allocation score on virt-046: -INFINITY
> pcmk__native_allocate: dummy1d allocation score on virt-047: -INFINITY
> pcmk__group_allocate: dummy2 allocation score on virt-045: 0
> pcmk__group_allocate: dummy2 allocation score on virt-046: 0
> pcmk__group_allocate: dummy2 allocation score on virt-047: 0
> pcmk__group_allocate: dummy2a allocation score on virt-045: 0
> pcmk__group_allocate: dummy2a allocation score on virt-046: 1000
> pcmk__group_allocate: dummy2a allocation score on virt-047: 0
> pcmk__group_allocate: dummy2b allocation score on virt-045: 0
> pcmk__group_allocate: dummy2b allocation score on virt-046: 1000
> pcmk__group_allocate: dummy2b allocation score on virt-047: 0
> pcmk__group_allocate: dummy2c allocation score on virt-045: 0
> pcmk__group_allocate: dummy2c allocation score on virt-046: 1000
> pcmk__group_allocate: dummy2c allocation score on virt-047: 0
> pcmk__group_allocate: dummy2d allocation score on virt-045: 0
> pcmk__group_allocate: dummy2d allocation score on virt-046: 1000
> pcmk__group_allocate: dummy2d allocation score on virt-047: 0
> pcmk__native_allocate: dummy2a allocation score on virt-045: 0
> pcmk__native_allocate: dummy2a allocation score on virt-046: 4000
> pcmk__native_allocate: dummy2a allocation score on virt-047: 0
> pcmk__native_allocate: dummy2b allocation score on virt-045: -INFINITY
> pcmk__native_allocate: dummy2b allocation score on virt-046: 3000
> pcmk__native_allocate: dummy2b allocation score on virt-047: -INFINITY
> pcmk__native_allocate: dummy2c allocation score on virt-045: -INFINITY
> pcmk__native_allocate: dummy2c allocation score on virt-046: 2000
> pcmk__native_allocate: dummy2c allocation score on virt-047: -INFINITY
> pcmk__native_allocate: dummy2d allocation score on virt-045: -INFINITY
> pcmk__native_allocate: dummy2d allocation score on virt-046: 1000
> pcmk__native_allocate: dummy2d allocation score on virt-047: -INFINITY
> 
> Transition Summary:
> [root@virt-045 ~]# pcs constraint colocation add dummy2 with dummy1 2000
> [root@virt-045 ~]# crm_simulate -Ls
> 
> Current cluster status:
> Online: [ virt-045 virt-046 virt-047 ]
> 
>  fence-virt-045	(stonith:fence_xvm):	Started virt-045
>  fence-virt-046	(stonith:fence_xvm):	Started virt-046
>  fence-virt-047	(stonith:fence_xvm):	Started virt-047
>  Resource Group: dummy1
>      dummy1a	(ocf::heartbeat:Dummy):	Started virt-045
>      dummy1b	(ocf::heartbeat:Dummy):	Started virt-045
>      dummy1c	(ocf::heartbeat:Dummy):	Started virt-045
>      dummy1d	(ocf::heartbeat:Dummy):	Started virt-045
>  Resource Group: dummy2
>      dummy2a	(ocf::heartbeat:Dummy):	Started virt-046
>      dummy2b	(ocf::heartbeat:Dummy):	Started virt-046
>      dummy2c	(ocf::heartbeat:Dummy):	Started virt-046
>      dummy2d	(ocf::heartbeat:Dummy):	Started virt-046
> 
> Allocation scores:
> pcmk__native_allocate: fence-virt-045 allocation score on virt-045: 1000
> pcmk__native_allocate: fence-virt-045 allocation score on virt-046: 0
> pcmk__native_allocate: fence-virt-045 allocation score on virt-047: 0
> pcmk__native_allocate: fence-virt-046 allocation score on virt-045: 0
> pcmk__native_allocate: fence-virt-046 allocation score on virt-046: 1000
> pcmk__native_allocate: fence-virt-046 allocation score on virt-047: 0
> pcmk__native_allocate: fence-virt-047 allocation score on virt-045: 0
> pcmk__native_allocate: fence-virt-047 allocation score on virt-046: 0
> pcmk__native_allocate: fence-virt-047 allocation score on virt-047: 1000
> pcmk__group_allocate: dummy1 allocation score on virt-045: 0
> pcmk__group_allocate: dummy1 allocation score on virt-046: 0
> pcmk__group_allocate: dummy1 allocation score on virt-047: 0
> pcmk__group_allocate: dummy1a allocation score on virt-045: 1000
> pcmk__group_allocate: dummy1a allocation score on virt-046: 0
> pcmk__group_allocate: dummy1a allocation score on virt-047: 0
> pcmk__group_allocate: dummy1b allocation score on virt-045: 1000
> pcmk__group_allocate: dummy1b allocation score on virt-046: 0
> pcmk__group_allocate: dummy1b allocation score on virt-047: 0
> pcmk__group_allocate: dummy1c allocation score on virt-045: 1000
> pcmk__group_allocate: dummy1c allocation score on virt-046: 0
> pcmk__group_allocate: dummy1c allocation score on virt-047: 0
> pcmk__group_allocate: dummy1d allocation score on virt-045: 1000
> pcmk__group_allocate: dummy1d allocation score on virt-046: 0
> pcmk__group_allocate: dummy1d allocation score on virt-047: 0
> pcmk__native_allocate: dummy1a allocation score on virt-045: 4000
> pcmk__native_allocate: dummy1a allocation score on virt-046: 3002
> pcmk__native_allocate: dummy1a allocation score on virt-047: 0
> pcmk__native_allocate: dummy1b allocation score on virt-045: 3000
> pcmk__native_allocate: dummy1b allocation score on virt-046: -INFINITY
> pcmk__native_allocate: dummy1b allocation score on virt-047: -INFINITY
> pcmk__native_allocate: dummy1c allocation score on virt-045: 2000
> pcmk__native_allocate: dummy1c allocation score on virt-046: -INFINITY
> pcmk__native_allocate: dummy1c allocation score on virt-047: -INFINITY
> pcmk__native_allocate: dummy1d allocation score on virt-045: 1000
> pcmk__native_allocate: dummy1d allocation score on virt-046: -INFINITY
> pcmk__native_allocate: dummy1d allocation score on virt-047: -INFINITY
> pcmk__group_allocate: dummy2 allocation score on virt-045: 0
> pcmk__group_allocate: dummy2 allocation score on virt-046: 0
> pcmk__group_allocate: dummy2 allocation score on virt-047: 0
> pcmk__group_allocate: dummy2a allocation score on virt-045: 0
> pcmk__group_allocate: dummy2a allocation score on virt-046: 1000
> pcmk__group_allocate: dummy2a allocation score on virt-047: 0
> pcmk__group_allocate: dummy2b allocation score on virt-045: 0
> pcmk__group_allocate: dummy2b allocation score on virt-046: 1000
> pcmk__group_allocate: dummy2b allocation score on virt-047: 0
> pcmk__group_allocate: dummy2c allocation score on virt-045: 0
> pcmk__group_allocate: dummy2c allocation score on virt-046: 1000
> pcmk__group_allocate: dummy2c allocation score on virt-047: 0
> pcmk__group_allocate: dummy2d allocation score on virt-045: 0
> pcmk__group_allocate: dummy2d allocation score on virt-046: 1000
> pcmk__group_allocate: dummy2d allocation score on virt-047: 0
> pcmk__native_allocate: dummy2a allocation score on virt-045: 2000
> pcmk__native_allocate: dummy2a allocation score on virt-046: 4000
> pcmk__native_allocate: dummy2a allocation score on virt-047: 0
> pcmk__native_allocate: dummy2b allocation score on virt-045: -INFINITY
> pcmk__native_allocate: dummy2b allocation score on virt-046: 3000
> pcmk__native_allocate: dummy2b allocation score on virt-047: -INFINITY
> pcmk__native_allocate: dummy2c allocation score on virt-045: -INFINITY
> pcmk__native_allocate: dummy2c allocation score on virt-046: 2000
> pcmk__native_allocate: dummy2c allocation score on virt-047: -INFINITY
> pcmk__native_allocate: dummy2d allocation score on virt-045: -INFINITY
> pcmk__native_allocate: dummy2d allocation score on virt-046: 1000
> pcmk__native_allocate: dummy2d allocation score on virt-047: -INFINITY
> 
> Transition Summary:
> [root@virt-045 ~]# pcs status
> Cluster name: STSRHTS17112
> Stack: corosync
> Current DC: virt-046 (version 1.1.23-1.el7-9acf116022) - partition with quorum
> Last updated: Wed Jul  8 18:37:21 2020
> Last change: Wed Jul  8 18:36:40 2020 by root via cibadmin on virt-045
> 
> 3 nodes configured
> 11 resource instances configured
> 
> Online: [ virt-045 virt-046 virt-047 ]
> 
> Full list of resources:
> 
>  fence-virt-045	(stonith:fence_xvm):	Started virt-045
>  fence-virt-046	(stonith:fence_xvm):	Started virt-046
>  fence-virt-047	(stonith:fence_xvm):	Started virt-047
>  Resource Group: dummy1
>      dummy1a	(ocf::heartbeat:Dummy):	Started virt-045
>      dummy1b	(ocf::heartbeat:Dummy):	Started virt-045
>      dummy1c	(ocf::heartbeat:Dummy):	Started virt-045
>      dummy1d	(ocf::heartbeat:Dummy):	Started virt-045
>  Resource Group: dummy2
>      dummy2a	(ocf::heartbeat:Dummy):	Started virt-046
>      dummy2b	(ocf::heartbeat:Dummy):	Started virt-046
>      dummy2c	(ocf::heartbeat:Dummy):	Started virt-046
>      dummy2d	(ocf::heartbeat:Dummy):	Started virt-046
> 
> Daemon Status:
>   corosync: active/enabled
>   pacemaker: active/enabled
>   pcsd: active/enabled


result: neither group moved to co-locate with the other, as both have their cumulative stickiness = 4000, which is higher than the colocation constraint score of 2000


marking verified in pacemaker-1.1.23-1.el7

Comment 17 errata-xmlrpc 2020-09-29 20:03:57 UTC
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (pacemaker bug fix and enhancement update), and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2020:3951

