Description of problem:
There is a requirement to run 2-3 VMs between only two hosts in HA mode, so that if one host goes down the VM moves to the other, but if both hosts go down the VM should not move to any other host in the cluster. The customer wants to achieve a setup like the following:
- There are 5 hosts in a cluster.
- They want to create a VM affinity group for 2-3 VMs and want those VMs to run only on host3 and host4.
- If host3 and host4 are both down, it is OK to have downtime for the VMs, but they should not start on any host other than host3 or host4.
Version-Release number of selected component (if applicable):
Currently there is no option in the VM affinity group to create such a policy.
We need an option in VM affinity to restrict a group of VMs to run only among a few specific hosts, with HA features.
It sounds like a better resolution would be to pin the VMs to specific hosts.
We should have an RFE for that already. Do you think it'll resolve this RFE?
Pinning the VM to a host is a good point, but it will not allow the VM to migrate to the other (specific) host. I guess the pinning concept is partially the same, but the requirement goes a little further: the pinning is needed across only 2 or 3 hosts out of the many hosts in the same cluster. And when all of the mentioned hosts are down, it's OK to keep the VM down.
Am I clear enough regarding the requirement? Please let me know; I will be happy to explain what the customer is actually looking for.
I'm suggesting a different RFE, which others have already asked for and which may resolve
this issue as well: pin to hosts (not host). Basically it means that a VM can
only run on a subset of hosts in the cluster. The idea is to extend the existing
pin-to-host functionality to multiple hosts, and this should resolve your request if I'm not
mistaken.
I got your point. This is a good idea indeed, and it will help fulfil the customer's requirement.
2 RHEV-H in Rack A, 2 RHEV-H in Rack B
Guests A, B and C should only run on any RHEV-H in Rack A
Guests D, E and F should only run on any RHEV-H in Rack B
Should the RHEV-H hosts in Rack A become unavailable, these guests should run on any host in Rack B, but they should automatically migrate back when the RHEV-H hosts in Rack A become stable again.
I see only limited use in the current affinity capabilities.
Really hoping Red Hat addresses this in 3.4.1.
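The rack-preference-with-failback policy described in the scenario above could, in principle, be sketched as a soft placement preference rather than a hard pin. Here is a minimal illustrative sketch; the rack layout, host names, and guest assignments are made up, and this is not an actual oVirt/RHEV scheduler API:

```python
# Sketch of the rack-preference policy from the scenario above:
# guests prefer their home rack, may fail over to the other rack,
# and should migrate back once a home-rack host is available again.
# Rack layout and guest names are illustrative only.

RACKS = {
    "rack_a": ["hypervisor1", "hypervisor2"],
    "rack_b": ["hypervisor3", "hypervisor4"],
}
HOME_RACK = {g: "rack_a" for g in ("A", "B", "C")}
HOME_RACK.update({g: "rack_b" for g in ("D", "E", "F")})

def place(guest, available_hosts):
    """Pick a host for the guest: home rack first, any other host as fallback."""
    home = [h for h in RACKS[HOME_RACK[guest]] if h in available_hosts]
    if home:
        return home[0]
    # Home rack is down: fail over to whatever is up.
    return available_hosts[0] if available_hosts else None

def needs_migration_back(guest, current_host, available_hosts):
    """True if the guest runs outside its home rack while home is back up."""
    home_hosts = RACKS[HOME_RACK[guest]]
    home_up = any(h in available_hosts for h in home_hosts)
    return home_up and current_host not in home_hosts
```

For example, guest A lands on a Rack B hypervisor only while both Rack A hosts are down, and `needs_migration_back` flags it for migration as soon as a Rack A host reappears.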
For 3.6.0, this will be implemented as 'pin to hosts', which extends the
existing functionality of pin to host.
Please find the RFE feature design here.
What would happen if both hosts a VM is pinned to go down?
Would the VM go down as well?
How is the scenario in comment 5 handled?
(In reply to Colin Coe from comment #12)
> What would happen if both hosts a VM is pinned to go down?
> Would the VM go down as well?
> How is the scenario in comment 5 handled?
You're asking for something different from this BZ.
This BZ resolves the following case (taken from the description):
... if one host will be down VM should move to other but if both hosts will be down, VM should not move to other hosts in the cluster.
For your scenario you can write your own scheduling filter in Python.
Multiple select host UI changes have been merged.
(In reply to Doron Fediuck from comment #13)
> (In reply to Colin Coe from comment #12)
> > What would happen if both hosts a VM is pinned to go down?
> > Would the VM go down as well?
> > How is the scenario in comment 5 handled?
> > Thanks
> you're asking for something different than this BZ.
> This BZ is resolving the following case (taken from description):
> ... if one host will be down VM should move to other but if both hosts will
> be down, VM should not move to other hosts in the cluster.
> For your scenario you can write your own scheduling filter in Python.
Could you point me at an example of this? Please note that it needs to run on RHEV 3.5.
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.
For information on the advisory, and where to find the updated
files, follow the link below.
If the solution does not work for you, open a new bug report.