We disable high availability for VMs configured with dedicated hosts (i.e., pinned to specific host(s) with Migration mode set to 'Do not allow migration' or 'Allow manual migration only').
The idea is that we cannot guarantee high availability when a VM is restricted to a single host or just a few hosts (because if those hosts die, the VM cannot run anywhere).
An exception to the above is high performance VMs, for which we do allow enabling high availability even when they are configured with dedicated host(s).
The request here is to allow configuring a VM (that is not of type 'high performance') with automatic restart even when it is configured with dedicated host(s). It should be reflected that this is not really high availability but a subset of it (automatic restart).
For example, when fencing a host such a VM is running on, it would be acceptable for this VM not to get precedence when migrating off the host.
It was possible to create an HA VM pinned to one host with migration disabled using the REST API. The attached patch removed the relevant checks that prevented creating such a VM in the UI. An HA VM pinned to one host is not truly HA, but it supports automatic restart as stated in the bug summary.
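For reference, a minimal sketch of a REST API request body that creates such a VM (the VM, cluster, template, and host names here are illustrative placeholders, not taken from this bug). POSTed to /ovirt-engine/api/vms, a payload along these lines pins the VM to a single host with migration disabled while enabling HA:

```xml
<!-- Illustrative request body; names are assumptions -->
<vm>
  <name>ha_pinned_vm</name>
  <cluster>
    <name>Default</name>
  </cluster>
  <template>
    <name>Blank</name>
  </template>
  <high_availability>
    <enabled>true</enabled>
    <priority>1</priority>
  </high_availability>
  <placement_policy>
    <hosts>
      <host>
        <name>host1</name>
      </host>
    </hosts>
    <!-- 'pinned' corresponds to 'Do not allow migration';
         'user_migratable' would correspond to 'Allow manual migration only' -->
    <affinity>pinned</affinity>
  </placement_policy>
</vm>
```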
The documentation text flag should only be set after the 'Doc Text' field is provided. Please provide the documentation text and set the flag to '?' again.
Verified on ovirt-engine-22.214.171.124-0.11.el8ev.noarch
Created the following types of VMs in the UI and also via the REST API (on a setup with three hosts):
1. HA VM pinned to one host, migration not allowed.
2. HA VM pinned to two hosts, migration not allowed.
3. HA VM pinned to one host, user-migratable.
4. HA VM pinned to two hosts, user-migratable.
Checked that the HA option is always enabled in the UI for VMs created via REST and via the UI.
Ran all the VMs.
Used host2 as a proxy to run the following commands, powering off the two other hosts (host1 and host3) the VMs are pinned to:
vdsm-client Host fenceNode username=ADMIN password=ADMIN addr=ocelot01-bmc.mgmt.lab3.tlv.redhat.com agent=ipmilan action=off options=lanplus=1 port=
vdsm-client Host fenceNode username=ADMIN password=ADMIN addr=ocelot03-bmc.mgmt.lab3.tlv.redhat.com agent=ipmilan action=off options=lanplus=1 port=
While the two hosts were powered off, one VM restarted on host2; the others remained in an unknown state (marked in the UI with a question mark).
Then, when the hosts were Up again, all the VMs were running.
This bugzilla is included in the oVirt 4.4.7 release, published on July 6th 2021.
Since the problem described in this bug report should be resolved in oVirt 4.4.7 release, it has been closed with a resolution of CURRENT RELEASE.
If the solution does not work for you, please open a new bug report.