Description of problem:
It is not possible to disable IRQ load balancing for KubeVirt pods on a per-pod basis (via the "irq-load-balancing.crio.io: disabled" annotation, as documented in the Performance Addon Operator documentation). Working around this may require setting globallyDisableIrqLoadBalancing=true, which disables IRQ load balancing on all isolated CPUs of a worker node, and therefore affects any non-infrastructure pods scheduled on that node. This can reduce the performance of other pods deployed on such worker nodes.

Version-Release number of selected component (if applicable):

Steps to Reproduce:
1. Start a KubeVirt pod with the "irq-load-balancing.crio.io: disabled" annotation.

Actual results:
The annotation is not functional: the CPUs assigned to such a KubeVirt pod are not protected from IRQs of the host system.

Expected results:
For KubeVirt pods with the "irq-load-balancing.crio.io: disabled" annotation, no IRQs can be fired on the CPUs assigned to the pod.

Additional info:
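For context, a sketch of the two mechanisms involved, assuming the performance.openshift.io/v2 API of the Performance Addon Operator; the profile name, pod name, image, and CPU sets are illustrative, and the exact annotation value ("disable" vs. "disabled") should be checked against the PAO documentation:

# Global workaround: disables IRQ load balancing on ALL isolated CPUs of the
# matching worker nodes, affecting every pod scheduled there.
apiVersion: performance.openshift.io/v2
kind: PerformanceProfile
metadata:
  name: example-profile
spec:
  cpu:
    isolated: "2-7"
    reserved: "0-1"
  globallyDisableIrqLoadBalancing: true
---
# Desired per-pod opt-out: the annotation that is currently not honored for
# KubeVirt pods. It only takes effect for Guaranteed-QoS pods running under
# the runtime class generated from the profile ("performance-<profile-name>").
apiVersion: v1
kind: Pod
metadata:
  name: example-latency-sensitive-pod
  annotations:
    irq-load-balancing.crio.io: "disable"
spec:
  runtimeClassName: performance-example-profile
  containers:
  - name: app
    image: registry.example.com/app:latest
    resources:
      requests:
        cpu: "2"
        memory: "1Gi"
      limits:
        cpu: "2"
        memory: "1Gi"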
As previously discussed in [1], we agreed to pursue a policy-based solution: the cluster admin, who knows which runtime class maps to which functionality, provides a policy, similar to a migration policy. The VM owner adds a label or an annotation that hints at the application's requirements (e.g. that it needs CPU/IRQ load balancing disabled). virt-controller then matches the correct policy and sets the runtime class, as sketched below.

[1] https://github.com/kubevirt/kubevirt/pull/9402
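To make the proposal concrete, a rough sketch of what such a policy could look like; no such CRD exists yet, so the kind, API version, and field names below are hypothetical, loosely modeled on the existing MigrationPolicy API:

# Hypothetical policy object; kind and fields are illustrative only.
apiVersion: kubevirt.io/v1alpha1
kind: RuntimeClassPolicy
metadata:
  name: low-latency
spec:
  # VMs carrying this label are matched by the policy.
  selectors:
    virtualMachineInstanceSelector:
      matchLabels:
        kubevirt.io/workload-profile: low-latency
  # Runtime class the cluster admin maps to the desired functionality,
  # e.g. one whose handler disables IRQ load balancing for the pod.
  runtimeClassName: performance-low-latency
---
# VM owner's side: a label that hints at the application's requirements.
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: dpdk-vm
  labels:
    kubevirt.io/workload-profile: low-latency
spec:
  # ... rest of the VM spec (template, etc.) omitted ...

With something like this in place, virt-controller would resolve the matching policy and set the runtime class on the virt-launcher pod it creates, so the VM owner never has to know which runtime class implements the behavior.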
Moving this to the network component, as it relates to SR-IOV and the networking epic: https://issues.redhat.com/browse/CNV-24676