Bug 1943476

Summary: [RFE] Colocation without placement order dependency
Product: Red Hat Enterprise Linux 8
Component: pacemaker
Version: 8.3
Hardware: All
OS: Linux
Status: NEW
Severity: low
Priority: low
Target Milestone: rc
Target Release: ---
Keywords: FutureFeature, Triaged
Type: Feature Request
Reporter: Reid Wahl <nwahl>
Assignee: Ken Gaillot <kgaillot>
QA Contact: cluster-qe <cluster-qe>
CC: cluster-maint

Description Reid Wahl 2021-03-26 07:10:21 UTC
Description of problem:

It would at times be extremely convenient to be able to direct Pacemaker that either:
  (a) "This set of resources should all run on the same node" or more precisely, "For every resource in this set, all resources that are scheduled to run should run on the same node." Or:
  (b) "No two resources within this set should run on the same node."

Both currently require hacks to achieve. My understanding is that, by default, colocation constraints entail a placement order, even though there's no action order: if B is colocated with A, then A must be placed first, and likewise if B is anti-colocated with A.
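
For illustration, in CIB XML both of the following constraints (IDs hypothetical) make A's placement happen before B's, even though neither implies any ordering of start/stop actions:

    <rsc_colocation id="B-with-A" rsc="B" with-rsc="A" score="INFINITY"/>
    <rsc_colocation id="B-not-with-A" rsc="B" with-rsc="A" score="-INFINITY"/>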

This can lead to unequal treatment of the resources. For example, the scheduler may schedule actions differently when resource A's node goes offline than when resource B's node goes offline. There are valid use cases in which users want the scheduler to treat all resources in a colocation (or anti-colocation) set the same.

In my experience, (a) is much simpler to achieve with a hack than (b). (a) can generally be achieved by using a dummy resource as a "colocator", and colocating all the other resources with the dummy resource. That way, the other resources don't depend on each other -- i.e., there's no particular placement order enforced among the other resources. Each one is simply colocated with the dummy. We discussed this in an internal mailing list thread[1], and I've had to use variations of this approach in a number of situations.
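
A minimal sketch of that workaround in CIB XML (resource names hypothetical):

    <!-- "colocator" exists only to anchor the others' placement -->
    <primitive id="colocator" class="ocf" provider="pacemaker" type="Dummy"/>

    <!-- Each real resource is colocated with the dummy rather than with the
         other real resources, so no placement order exists among A, B, C -->
    <rsc_colocation id="A-with-colocator" rsc="A" with-rsc="colocator" score="INFINITY"/>
    <rsc_colocation id="B-with-colocator" rsc="B" with-rsc="colocator" score="INFINITY"/>
    <rsc_colocation id="C-with-colocator" rsc="C" with-rsc="colocator" score="INFINITY"/>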

(b) is trickier. AFAICT there's no direct "dummy anti-colocator" analog to the (a) approach for mutual anti-colocation. One anti-colocator resource can't make all the resources in the set repel each other, and multiple anti-colocators would bring us right back to where we started. The best workaround I've come up with so far is to tie one ocf:pacemaker:attribute resource to each resource in the anti-colocated set, and use location rules to prevent any attribute resource from running where another one's attribute is set to true. This is discussed in a separate internal mailing list thread[2].
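
Roughly, for two resources A and B, the wiring looks like the sketch below (names hypothetical; the thread has the full details). Each real resource follows its attribute resource, and each attribute resource is barred from nodes where another member's attribute is active:

    <!-- attr-A advertises A's presence via the node attribute "A-active" -->
    <primitive id="attr-A" class="ocf" provider="pacemaker" type="attribute">
      <instance_attributes id="attr-A-params">
        <nvpair id="attr-A-name" name="name" value="A-active"/>
      </instance_attributes>
    </primitive>
    <rsc_colocation id="A-with-attr-A" rsc="A" with-rsc="attr-A" score="INFINITY"/>

    <!-- keep attr-A (and therefore A) off nodes where B is active -->
    <rsc_location id="attr-A-avoids-B" rsc="attr-A">
      <rule id="attr-A-avoids-B-rule" score="-INFINITY">
        <expression id="attr-A-avoids-B-expr" attribute="B-active" operation="eq" value="1"/>
      </rule>
    </rsc_location>

    <!-- ...and symmetrically: attr-B sets "B-active", B follows attr-B, and
         attr-B avoids nodes where "A-active" is 1 -->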

I acknowledge that this feature is likely to be difficult to implement. Maybe extremely so; I haven't done any exploration personally. However, if there are ways to achieve the desired behavior via hacks, then I'd expect it to be at least **possible** to implement via some state tracking internal to Pacemaker. Whether it's **practical** is another matter ;)

If we can make this happen, I believe it will create a much smoother experience for users who do need this type of behavior, as well as for the support engineers who assist them.

[1] https://mailman-int.corp.redhat.com/archives/cluster-list/2020-May/msg00067.html
[2] https://mailman-int.corp.redhat.com/archives/cluster-list/2021-March/msg00071.html

-----

Version-Release number of selected component (if applicable):

All

-----

How reproducible:

Always with the right sequence of events.

-----

Steps to Reproduce:

Since the description requests two sub-functionalities and the reproducer setups are non-trivial, I'm leaving the rest of this blank pending engineering's assessment of whether we can attempt this or whether it's a CANTFIX.

-----

Actual results:

-----

Expected results:

-----

Additional info:

See also BZ1876173. It's a different issue but also deals with a quirk of placement ordering and uses a dummy colocator as a workaround.

Comment 2 Ken Gaillot 2021-03-26 22:58:13 UTC
(In reply to Reid Wahl from comment #0)
> Description of problem:
> 
> It would at times be extremely convenient to be able to direct Pacemaker
> that either:
>   (a) "This set of resources should all run on the same node" or more
> precisely, "For every resource in this set, all resources that are scheduled
> to run should run on the same node." Or:
>   (b) "No two resources within this set should run on the same node."

These are essentially upstream bzs 5052 and 5320.

The implementation of (a) would basically be your workaround implemented internally. A dummy resource would be created implicitly to colocate the others with. The resource_set require-all option (currently only meaningful for orderings) could be reused for this purpose.
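
Hypothetically -- no such syntax exists today, and the semantics of the reused option would still need to be defined -- that might look like:

    <rsc_colocation id="same-node-set" score="INFINITY">
      <!-- require-all is currently ignored for colocations; the idea would be
           for some value of it to mean "members colocate with an implicit
           dummy rather than with each other" -->
      <resource_set id="same-node-set-0" require-all="false">
        <resource_ref id="A"/>
        <resource_ref id="B"/>
        <resource_ref id="C"/>
      </resource_set>
    </rsc_colocation>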

The upstream bz for (b) gives a workaround, using utilization values to ensure no one node can run more than one of the resources (pcs does support utilization values). For syntax, we might be able to apply the desired behavior any time score="-INFINITY" is set on a colocated set. The implementation would be to modify how Pacemaker accounts for dependent resources' preferences.
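
Sketch of that workaround (attribute names hypothetical): give every node the capacity for one "slot", make each resource in the set consume one slot, and enable a placement strategy that honors utilization:

    <!-- cluster option; any strategy other than "default" uses utilization -->
    <nvpair id="opt-placement-strategy" name="placement-strategy" value="balanced"/>

    <!-- under each <node> -->
    <utilization id="node1-util">
      <nvpair id="node1-util-slot" name="anticoloc-slot" value="1"/>
    </utilization>

    <!-- under each resource in the set -->
    <utilization id="A-util">
      <nvpair id="A-util-slot" name="anticoloc-slot" value="1"/>
    </utilization>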

Basically, in a colocation chain, C with B and B with A, A will take B's location preferences into account at an attenuated value, and then recursively take C's into account as well. However, if it is C with B and B *not* with A, then A takes the *inverse* of B's preferences into account, and recursively the inverse of C's preferences. All well to this point. But if it is C *not* with B and B *not* with A, A will take B's inverse preferences and C's normal (inverse of inverse) preferences into account (aka the enemy of my enemy is my friend), which is not the intent here. Figuring out which preferences to consider normal, which to consider inverted, and which to ignore may be painful but doable.
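
A concrete example with hypothetical scores: say C prefers node1 at +100. Then C's preference reaches A as roughly:

    C with B,     B with A:      +100 for node1 (attenuated)  -- as desired
    C with B,     B not-with A:  -100 for node1 (inverted)    -- as desired
    C not-with B, B not-with A:  +100 for node1 (inverse of inverse), pulling A
                                 toward C's preferred node, the opposite of what
                                 a mutual anti-colocation set would want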

Comment 3 Reid Wahl 2021-03-26 23:35:39 UTC
(In reply to Ken Gaillot from comment #2)
> (In reply to Reid Wahl from comment #0)
> > Description of problem:
> > 
> > It would at times be extremely convenient to be able to direct Pacemaker
> > that either:
> >   (a) "This set of resources should all run on the same node" or more
> > precisely, "For every resource in this set, all resources that are scheduled
> > to run should run on the same node." Or:
> >   (b) "No two resources within this set should run on the same node."
> 
> These are essentially upstream bzs 5052 and 5320.
> 
> The implementation of (a) would basically be your workaround implemented
> internally. A dummy resource would be created implicitly to colocate the
> others with. The resource_set require-all option (currently only meaningful
> for orderings) could be reused for this purpose.

That reuse sounds good, as long as it doesn't change the behavior of require-all when another set is colocated with this set (set A). No negative interactions come to mind for me. If some set B depends on set A, then all the resources in set A should run on the same node anyway.

I think the only place it **might** change behavior is in certain scenarios where there are conflicting constraints. In those cases, a user needs to fix the constraints and shouldn't expect resources to be placed reliably.


> The upstream bz for (b) gives a workaround, using utilization values to
> ensure no one node can run more than one of the resources (pcs does support
> utilization values).

That's fine in the more common case where there's only one anti-colocation set. If there are two or more anti-colocation sets, then I believe the utilization workaround falls apart. Pacemaker wouldn't be able to ensure that at most one resource from each set is contributing to the node's utilization.


> For syntax, we might be able to apply the desired
> behavior any time score="-INFINITY" is set on a colocated set. The
> implementation would be to modify how Pacemaker accounts for dependent
> resources' preferences.
> 
> Basically, in a colocation chain, C with B and B with A, A will take
> B's location preferences into account at an attenuated value, and then
> recursively take C's into account as well. However, if it is C with B
> and B *not* with A, then A takes the *inverse* of B's preferences into
> account, and recursively the inverse of C's preferences. All well to
> this point. But if it is C *not* with B and B *not* with A, A will
> take B's inverse preferences and C's normal (inverse of inverse)
> preferences into account (aka the enemy of my enemy is my friend),
> which is not the intent here.
> 
> Figuring out which
> preferences to consider normal, which to consider inverted, and which to
> ignore may be painful but doable.

There's also the question of whether to allow a user to configure the now-current (at that point, legacy) behavior. I don't know if there's any reason a user might prefer or rely on the current behavior. I suspect most users are either indifferent or want the new proposed behavior.

Maybe there could be a "chain" mode and a "set" mode. It's unlikely we'd want to use those exact terms in the syntax, but they convey my point.
  - The chain mode would maintain the current behavior or something similar to it. Each resource directly influences the next resource in the chain.
  - The set mode would be a sort of inverse of the "dummy colocator" model -- i.e., the scenario (b) in this RFE: "No two resources in this set can run on the same node."

Comment 4 Reid Wahl 2021-03-26 23:48:52 UTC
(In reply to Reid Wahl from comment #3)
> Maybe there could be a "chain" mode and a "set" mode. It's unlikely we'd
> want to use those exact terms in the syntax, but they convey my point.
>   - The chain mode would maintain the current behavior or something similar
> to it. Each resource directly influences the next resource in the chain.
>   - The set mode would be a sort of inverse of the "dummy colocator" model
> -- i.e., the scenario (b) in this RFE: "No two resources in this set can run
> on the same node."

In the set mode, if there aren't enough nodes available to run all the anti-colocated resources (so that one or more resources must be stopped), we should ideally place any running resources first. For example, if resource A's node is shutting down, and resource C is running somewhere else, we place resource C before placing resource A. That way, we don't stop C in order to run A.