Description of problem:
Currently the kubevirt-dpdk-checkup has two optional node label selector parameters, one for each entity (DPDK VMI and traffic generator Pod):
1. DPDKNodeLabelSelector
2. trafficGeneratorNodeLabelSelector
If these parameters are not set, the desired behavior is that the objects prefer to be scheduled on different nodes. This behavior is not yet implemented.

Version-Release number of selected component (if applicable):

How reproducible:
100%

Steps to Reproduce:
1. Run kubevirt-dpdk-checkup without the DPDKNodeLabelSelector and trafficGeneratorNodeLabelSelector optional parameters.

Actual results:
There is no pod anti-affinity that prefers to schedule the objects on different nodes.

Expected results:
There is a pod anti-affinity that prefers to schedule the objects on different nodes.

Additional info:
https://github.com/kiagnose/kubevirt-dpdk-checkup/pull/64
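For illustration, the expected behavior corresponds to a "preferred" (soft) pod anti-affinity in the Kubernetes API, which asks the scheduler to avoid co-locating the two workloads on the same node without failing scheduling if only one node is available. The sketch below uses the standard k8s.io/api types; the function name and the label key/value are hypothetical and not taken from the checkup's code or from PR #64.

```go
package example

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// preferredAntiAffinity builds a soft pod anti-affinity: the scheduler will
// prefer, but not require, placing this pod on a different node from pods
// matching the given label. The label key/value are illustrative only.
func preferredAntiAffinity(labelKey, labelValue string) *corev1.Affinity {
	return &corev1.Affinity{
		PodAntiAffinity: &corev1.PodAntiAffinity{
			PreferredDuringSchedulingIgnoredDuringExecution: []corev1.WeightedPodAffinityTerm{
				{
					// Weight 100 is the strongest preference, but it remains
					// a soft constraint: scheduling still succeeds on a
					// single-node cluster.
					Weight: 100,
					PodAffinityTerm: corev1.PodAffinityTerm{
						LabelSelector: &metav1.LabelSelector{
							MatchLabels: map[string]string{labelKey: labelValue},
						},
						// Spread across nodes (per-hostname topology) rather
						// than across zones or other topology domains.
						TopologyKey: "kubernetes.io/hostname",
					},
				},
			},
		},
	}
}
```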
Verified by running the reproduction scenario of https://bugzilla.redhat.com/show_bug.cgi?id=2196459 on CNV 4.14.0, container-native-virtualization/kubevirt-dpdk-checkup-rhel9:v4.14.0-116. I ran several retries in which I did not set any of the node-selector fields in the job's ConfigMap. In all cases, the traffic-generator Pod and the target VMI were scheduled on separate nodes.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Important: OpenShift Virtualization 4.14.0 Images security and bug fix update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHSA-2023:6817