Today the machine type (e.g., -M 6.3.0) is taken from the cluster level.
We need the ability for a user to specify the machine type per VM.
We need to research whether we just use the QEMU machine type, or whether we have to extend this and maintain a compatibility schedule that encompasses more than just the -M flag.
Let's start with the -M setting only.
It is currently stored in vdc_options at the cluster level.
We need a default selection in the cluster dialog and an override in the VM dialog.
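The default/override behavior proposed above could be sketched as follows. This is a hypothetical illustration, not the actual engine code; the function name and the use of None to mean "cluster default" are assumptions:

```python
# Hypothetical sketch (not actual engine code): resolving the
# effective -M value for a VM. None means "use the cluster default".
from typing import Optional

def effective_machine_type(cluster_machine_type: str,
                           vm_machine_type: Optional[str]) -> str:
    """Return the -M value qemu-kvm should be started with."""
    if vm_machine_type is None:
        return cluster_machine_type
    return vm_machine_type

effective_machine_type("6.3.0", None)     # cluster default applies
effective_machine_type("6.3.0", "6.1.0")  # per-VM override wins
```

The point of the sketch is simply that the VM dialog only needs to store an optional value; everything else keeps reading the cluster-level setting.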
Since we can't tell what's tested per -M level, I think the approach should be to specify the VM's cluster compatibility level.
Then, for example, the list of allowed CPUs (there is another RFE to select this one) would be based on the VM's compatibility level.
The same goes for validation of operations (e.g., maybe we don't allow hot-plugging a NIC to a VM that is at an older cluster level).
In other words, I'm not sure we should handle -M directly, rather than the compatibility level.
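The compat-level-based validation suggested above could look something like this. The level numbers and the NIC-hotplug threshold here are illustrative examples only, not actual oVirt policy:

```python
# Hypothetical sketch: gating an operation on the VM's cluster
# compatibility level instead of on the raw qemu -M value.

# Example-only threshold: pretend NIC hotplug is supported from 3.5 on.
NIC_HOTPLUG_MIN_LEVEL = (3, 5)

def parse_level(level: str) -> tuple:
    """'3.5' -> (3, 5), so levels compare numerically, not lexically."""
    return tuple(int(part) for part in level.split("."))

def can_hotplug_nic(vm_compat_level: str) -> bool:
    return parse_level(vm_compat_level) >= NIC_HOTPLUG_MIN_LEVEL

can_hotplug_nic("3.4")  # older level: hotplug refused
can_hotplug_nic("4.0")  # newer level: hotplug allowed
```

Parsing into integer tuples matters because a plain string comparison would order "3.10" before "3.5".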
Andrew, please elaborate on the reason for this feature and the expected behavior.
There is quite a lot of complexity/risk around managing this with respect to testing, since we have no way to distinguish which features get tested at a specific cluster compatibility level based on the -M flag of qemu-kvm.
Andrew, the default for VMs would be "cluster default", right?
So on a change of the cluster to 4.0, all VMs will boot with -M rhel7, unless the admin specifically changes it to, say, "3.5"?
Would we do this based on RHEL versions, or RHEV cluster compatibility versions?
(In reply to Itamar Heim from comment #5)
> andrew, the default for VMs would be "cluster default", right?
> so on change of cluster to 4.0, all VMs will boot with -M rhel7, unless
> admin specifically changes it to say "3.5"?
> would we do this based on rhel versions, or rhev cluster compatibility
> versions?
IMHO, RHEV cluster compatibility levels.
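Following this consensus, the engine would map the per-VM compatibility level to a machine type rather than exposing -M directly. A hypothetical sketch; only the 4.0 → rhel7 pairing is taken from the discussion above, and the fallback default is a placeholder:

```python
# Hypothetical sketch: choosing the -M argument from the VM's
# effective compatibility level. Only the 4.0 -> rhel7 pairing comes
# from the discussion above; treat everything else as a placeholder.

COMPAT_TO_MACHINE = {
    "4.0": "rhel7",  # per the discussion: cluster 4.0 boots -M rhel7
}

def machine_flag(compat_level: str, default: str) -> list:
    """Build the '-M <type>' argument pair for the qemu-kvm command line."""
    return ["-M", COMPAT_TO_MACHINE.get(compat_level, default)]

machine_flag("4.0", "pc")  # ['-M', 'rhel7']
machine_flag("3.5", "pc")  # unknown level falls back to the default
```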
Verified with rhevm-3.6.2-0.1.el6.noarch according to the attached test plan.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.
For information on the advisory, and where to find the updated files, follow the link below.
If the solution does not work for you, open a new bug report.