Bug 1176210
| Summary: | advanced usage of clone ordering constraints with interleave=true |
|---|---|
| Product: | Red Hat Enterprise Linux 7 |
| Reporter: | David Vossel <dvossel> |
| Component: | pacemaker |
| Assignee: | David Vossel <dvossel> |
| Status: | CLOSED ERRATA |
| QA Contact: | cluster-qe <cluster-qe> |
| Severity: | urgent |
| Priority: | urgent |
| Version: | 7.2 |
| CC: | abeekhof, cfeist, cluster-maint, dvossel, fdinitto, jkortus, mnovacek, tlavigne |
| Target Milestone: | rc |
| Fixed In Version: | pacemaker-1.1.12-22 |
| Doc Type: | Bug Fix |
| Type: | Bug |
| Last Closed: | 2015-03-05 10:00:33 UTC |
Assume this is the require-all=false feature we spoke of.

This feature (require-all) has been merged upstream: https://github.com/ClusterLabs/pacemaker/pull/624

Example: only a single instance of A-clone must be available before all instances of B-clone are eligible to run.

# pcs constraint order A-clone then B-clone require-all=false
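In the CIB, the constraint that command produces is a `rsc_order` element carrying the `require-all` attribute; a minimal sketch (the constraint id is illustrative):

```xml
<!-- B-clone may start once any one instance of A-clone is active,
     instead of waiting for all instances (require-all=false). -->
<rsc_order id="order-A-clone-B-clone"
           first="A-clone"  first-action="start"
           then="B-clone"   then-action="start"
           require-all="false"/>
```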
Having two clones (dummy1-clone, dummy2-clone) that are anti-colocated, ordered,
and have interleave=true, I have verified with pacemaker-1.1.12-22.el7.x86_64
that setting require-all=false on the ordering constraint allows dummy2-clone
to start as soon as at least one instance of dummy1-clone is available.
----
I have set up dummy1-clone and dummy2-clone as follows:
* interleave=true is set
* dummy1-clone can run on duck-02 and duck-03 only
* dummy2-clone can run on duck-01 only
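This setup can be recreated with commands along these lines (a sketch; resource and node names match the output below, operation timeouts are left at their defaults):

```shell
# Create two Dummy resources as interleaved clones
pcs resource create dummy1 ocf:heartbeat:Dummy --clone interleave=true
pcs resource create dummy2 ocf:heartbeat:Dummy --clone interleave=true

# Anti-colocate them via location constraints:
# dummy1-clone is banned from duck-01, dummy2-clone from duck-02/duck-03
pcs constraint location dummy1-clone avoids duck-01
pcs constraint location dummy2-clone avoids duck-02 duck-03
```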
[root@duck-01 ~]# pcs status
Cluster name: duck
Last updated: Fri Jan 16 12:18:44 2015
Last change: Fri Jan 16 12:18:41 2015
Stack: corosync
Current DC: duck-01 (1) - partition with quorum
Version: 1.1.12-a14efad
3 Nodes configured
15 Resources configured
Online: [ duck-01 duck-02 duck-03 ]
Full list of resources:
fencing-duck01 (stonith:fence_ipmilan): Started duck-01
fencing-duck02 (stonith:fence_ipmilan): Started duck-02
fencing-duck03 (stonith:fence_ipmilan): Started duck-03
Clone Set: dummy1-clone [dummy1]
Stopped: [ duck-01 duck-02 duck-03 ]
Clone Set: dummy2-clone [dummy2]
Stopped: [ duck-01 duck-02 duck-03 ]
PCSD Status:
duck-01: Online
duck-02: Online
duck-03: Online
Daemon Status:
corosync: active/disabled
pacemaker: active/disabled
pcsd: active/enabled
[root@duck-01 ~]# rpm -q pacemaker
pacemaker-1.1.12-22.el7.x86_64
[root@duck-01 ~]# pcs resource show dummy1-clone dummy2-clone
Clone: dummy1-clone
Meta Attrs: interleave=true
Resource: dummy1 (class=ocf provider=heartbeat type=Dummy)
Operations: start interval=0s timeout=20 (dummy1-start-timeout-20)
stop interval=0s timeout=20 (dummy1-stop-timeout-20)
monitor interval=10 timeout=20 (dummy1-monitor-interval-10)
Clone: dummy2-clone
Meta Attrs: interleave=true
Resource: dummy2 (class=ocf provider=heartbeat type=Dummy)
Operations: start interval=0s timeout=20 (dummy2-start-timeout-20)
stop interval=0s timeout=20 (dummy2-stop-timeout-20)
monitor interval=10 timeout=20 (dummy2-monitor-interval-10)
[root@duck-01 ~]# pcs constraint
Location Constraints:
Resource: dummy1-clone
Disabled on: duck-01 (score:-INFINITY)
Resource: dummy2-clone
Disabled on: duck-02 (score:-INFINITY)
Disabled on: duck-03 (score:-INFINITY)
Ordering Constraints:
Colocation Constraints:
----
-> no ordering constraint, clones run anticolocated
[root@duck-01 ~]# pcs resource
Clone Set: dummy1-clone [dummy1]
Started: [ duck-02 duck-03 ]
Stopped: [ duck-01 ]
Clone Set: dummy2-clone [dummy2]
Started: [ duck-01 ]
Stopped: [ duck-02 duck-03 ]
-> first dummy1-clone then dummy2-clone, require-all=true, dummy2-clone does NOT run
[root@duck-01 ~]# pcs constraint order dummy1-clone then dummy2-clone require-all=true
Adding dummy1-clone dummy2-clone (kind: Mandatory) (Options: require-all=true first-action=start then-action=start)
[root@duck-01 ~]# pcs resource
Clone Set: dummy1-clone [dummy1]
Started: [ duck-02 duck-03 ]
Stopped: [ duck-01 ]
Clone Set: dummy2-clone [dummy2]
Stopped: [ duck-01 duck-02 duck-03 ]
-> first dummy1-clone then dummy2-clone, require-all=false, dummy2-clone DOES run
[root@duck-01 ~]# pcs constraint order dummy1-clone then dummy2-clone require-all=false
Adding dummy1-clone dummy2-clone (kind: Mandatory) (Options: require-all=false first-action=start then-action=start)
[root@duck-01 ~]# pcs resource
Clone Set: dummy1-clone [dummy1]
Started: [ duck-02 duck-03 ]
Stopped: [ duck-01 ]
Clone Set: dummy2-clone [dummy2]
Started: [ duck-01 ]
Stopped: [ duck-02 duck-03 ]
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://rhn.redhat.com/errata/RHBA-2015-0440.html
Created attachment 971261 [details] cib example

Description of problem:
We need the ability to build an ordering constraint between cloned resources with 'interleave=true' set, where the two cloned resources are incapable of running on the same node. This sounds a bit odd. I believe the functionality we're looking for is: start cloneA then start cloneB, where cloneB starts as soon as any instance of cloneA has started. In this scenario cloneA and cloneB would be anti-colocated. I've included an example.

Version-Release number of selected component (if applicable):

How reproducible:
100%

crm_simulate -S -x interleave-advanced-example.xml

Actual results:
nova-compute-clone does not start

Expected results:
nova-compute-clone starts on mrg nodes

Additional info:
Removing the ordering constraint between nova-compute-clone and nova-conductor-clone allows nova-compute to start. The nova-conductor-clone can only start on rhos nodes; nova-compute-clone can only start on mrg nodes. We need a way to say: start nova-compute-clone once any instance of nova-conductor-clone is up.
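With the require-all feature, the behavior requested here could be expressed in the CIB roughly as follows (a sketch; the constraint id is illustrative, clone names follow the description above):

```xml
<!-- nova-compute-clone may start as soon as any one instance of
     nova-conductor-clone is active, even though the two clones
     never share a node (conductor on rhos nodes, compute on mrg nodes). -->
<rsc_order id="order-conductor-compute"
           first="nova-conductor-clone" first-action="start"
           then="nova-compute-clone"    then-action="start"
           require-all="false"/>
```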