This is mainly for the use case of OCP on RHV. This bug will allow auto-setting of the CPU pinning and NUMA pinning to get the best performance possible for a specific VM. That means:
- The VM should be pinned to a host.
- CPU passthrough is enabled.
- The VM gets the same number of vCPUs as the host has pCPUs.
- The SAP HANA CPU and NUMA configuration will be set automatically.
Ideally, for best performance, only this VM should run on the host.
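For context, here is a minimal sketch of how the described configuration (VM pinned to a single host, with CPU passthrough) can already be expressed manually with the oVirt Python SDK (ovirtsdk4). The engine URL, credentials, host, cluster, template and VM names are placeholders; this is only the manual equivalent, not the automatic mechanism requested in this RFE.

```python
import ovirtsdk4 as sdk
import ovirtsdk4.types as types

# Placeholder connection details; use a CA file instead of insecure=True in production.
connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='password',
    insecure=True,
)

vms_service = connection.system_service().vms_service()
vm = vms_service.add(
    types.Vm(
        name='hana-node-0',
        cluster=types.Cluster(name='Default'),
        template=types.Template(name='rhel8-template'),
        # Pin the VM to a single host, as described above.
        placement_policy=types.VmPlacementPolicy(
            affinity=types.VmAffinity.PINNED,
            hosts=[types.Host(name='host1.example.com')],
        ),
        # Expose the host CPU directly to the guest (CPU passthrough).
        cpu=types.Cpu(mode=types.CpuMode.HOST_PASSTHROUGH),
    ),
)
connection.close()
```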
> - The VM should be pinned to a host.

I would skip pinning. We should support migration to the same hosts; the scheduler already takes that into account. Manual migration mode should be OK.

> - CPU passthrough is enabled.
> - The VM gets the same number of vCPUs as the host has pCPUs.

Why would the VM be the same size? What matters is to use the same topology, but that doesn't mean you have to make the VM the same size, just the same "shape". I'd say we should keep the requested number of vCPUs and validate it against the socket size, or round it to it, or, depending on API changes, maybe just use sockets as an input parameter.

> - The SAP HANA CPU and NUMA configuration will be set automatically.
(In reply to Michal Skrivanek from comment #1)
> Why would the VM be the same size? What matters is to use the same topology,
> but that doesn't mean you have to make the VM the same size, just the same
> "shape". I'd say we should keep the requested number of vCPUs and validate
> it against the socket size, or round it to it, or, depending on API changes,
> maybe just use sockets as an input parameter.

IIRC Kubernetes/OpenShift defines taints and tolerations to prevent running workloads on control-plane nodes. Is there a requirement to run both control-plane nodes and non-control-plane nodes on the same physical server in RHV?
And we have affinity for that. Regardless:
1) Yes, in demo environments you can expect to have multiple workers, or combined control-plane and worker nodes, on the same physical host.
2) You can have other VMs on that host, not just OCP nodes.
That's my point: if we set affinity to imitate how OpenShift on bare metal is deployed, I don't see a reason not to take all the CPU resources on the node. Sure, we can enable adjustments, but if we aim for the simplest solution that "fixes them all", it sounds like an overspec. However, if we take into account the use cases you've mentioned, we probably shouldn't take OpenShift on bare metal as a reference.
Verified on the basis of the added Polarion plan, on ovirt-engine-4.4.3.8-0.1.el8ev.noarch. Automation must be added (https://issues.redhat.com/browse/RHV-39430).
This bugzilla is included in the oVirt 4.4.3 release, published on November 10th, 2020. Since the problem described in this bug report should be resolved in the oVirt 4.4.3 release, it has been closed with a resolution of CURRENT RELEASE. If the solution does not work for you, please open a new bug report.
Hi Liran, please review this doc text for the errata and release notes:

This enhancement introduces a new option for automatically setting the CPU and NUMA pinning of a virtual machine through a new configuration parameter, `auto_pinning_policy`. The option can be set to `existing`, which keeps the virtual machine's current CPU topology and computes the pinning from it, or to `adjust`, which uses the dedicated host's CPU topology and changes the virtual machine's topology to match it.
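For illustration, here is a minimal sketch of requesting the new policy when adding a VM through the oVirt Python SDK (ovirtsdk4). It assumes the SDK exposes the `auto_pinning_policy` REST parameter as a keyword argument with an `AutoPinningPolicy` enum (names may differ slightly between SDK versions), and the connection details, host, cluster, template and VM names are placeholders.

```python
import ovirtsdk4 as sdk
import ovirtsdk4.types as types

connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='password',
    insecure=True,  # demo only
)

vms_service = connection.system_service().vms_service()
vm = vms_service.add(
    types.Vm(
        name='hana-node-0',
        cluster=types.Cluster(name='Default'),
        template=types.Template(name='rhel8-template'),
        # The VM is pinned to a host so the engine has a concrete CPU/NUMA
        # topology to derive the pinning from.
        placement_policy=types.VmPlacementPolicy(
            affinity=types.VmAffinity.PINNED,
            hosts=[types.Host(name='host1.example.com')],
        ),
    ),
    # 'existing' keeps the VM's current CPU topology and only computes the
    # pinning; 'adjust' changes the VM topology to match the pinned host.
    auto_pinning_policy=types.AutoPinningPolicy.ADJUST,
)
connection.close()
```

The same parameter should also be usable directly against the REST API (for example `POST /ovirt-engine/api/vms?auto_pinning_policy=adjust`), though the exact request form shown here is an assumption rather than taken from this report.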
Doc looks good. Thanks Eli!
Due to QE capacity, we are not going to cover this issue in our automation.