Description of problem:
Nodes are drained in parallel if 2 policies are applied at the same time.

Version-Release number of selected component (if applicable):
4.5

How reproducible:

Steps to Reproduce:
1. Deploy the SR-IOV network operator on a cluster with at least 2 SR-IOV capable worker nodes.
2. Apply the following policies together:

```
---
apiVersion: sriovnetwork.openshift.io/v1
kind: SriovNetworkNodePolicy
metadata:
  name: policy-net-2
spec:
  resourceName: nic2
  nodeSelector:
    kubernetes.io/hostname: worker-0
    feature.node.kubernetes.io/network-sriov.capable: "true"
  priority: 99
  mtu: 9000
  numVfs: 4
  nicSelector:
    pfNames: ['ens803f0#0-0']
  isRdma: false
---
apiVersion: sriovnetwork.openshift.io/v1
kind: SriovNetworkNodePolicy
metadata:
  name: policy-net-2-vfio
spec:
  resourceName: nic2vfio
  nodeSelector:
    feature.node.kubernetes.io/network-sriov.capable: "true"
  priority: 99
  mtu: 9000
  numVfs: 4
  nicSelector:
    vendor: "8086"
    pfNames: ['ens803f0#0-0']
  deviceType: vfio-pci
  isRdma: false
```

Actual results:
More than one worker node was drained and set to 'unschedulable' in parallel.

Expected results:
Worker nodes should be drained one by one, in sequence.

Additional info:
The workaround is to apply the second policy only after the first one has fully synced on all the nodes; a sketch follows below.
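A minimal sketch of that workaround, assuming the operator is installed in the default openshift-sriov-network-operator namespace and that policy-net-2.yaml / policy-net-2-vfio.yaml are hypothetical filenames holding the two manifests above:

```
# Apply the first policy (hypothetical filename).
oc apply -f policy-net-2.yaml

# Poll until every SriovNetworkNodeState reports syncStatus "Succeeded",
# i.e. no node is still being configured or drained.
until [ -z "$(oc get sriovnetworknodestates -n openshift-sriov-network-operator \
    -o jsonpath='{.items[?(@.status.syncStatus!="Succeeded")].metadata.name}')" ]; do
  echo "waiting for all nodes to sync..."
  sleep 10
done

# Only then apply the second policy.
oc apply -f policy-net-2-vfio.yaml
```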
Fixed in PRs https://github.com/openshift/sriov-network-operator/pull/249 and https://github.com/openshift/sriov-network-operator/pull/260.
The manifest in step 2 should be:

```
---
apiVersion: sriovnetwork.openshift.io/v1
kind: SriovNetworkNodePolicy
metadata:
  name: policy-net-2
spec:
  resourceName: nic2
  nodeSelector:
    feature.node.kubernetes.io/network-sriov.capable: "true"
  priority: 99
  mtu: 9000
  numVfs: 4
  nicSelector:
    pfNames: ['ens803f0#0-0']
  isRdma: false
---
apiVersion: sriovnetwork.openshift.io/v1
kind: SriovNetworkNodePolicy
metadata:
  name: policy-net-2-vfio
spec:
  resourceName: nic2vfio
  nodeSelector:
    feature.node.kubernetes.io/network-sriov.capable: "true"
  priority: 99
  mtu: 9000
  numVfs: 4
  nicSelector:
    vendor: "8086"
    pfNames: ['ens803f0#1-1']
  deviceType: vfio-pci
  isRdma: false
```
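With the fix in place, one quick way to confirm serialized draining is to apply both policies together and watch the nodes; at most one worker should show SchedulingDisabled at any time. A minimal check, assuming both manifests are saved in a hypothetical policies.yaml:

```
# Apply both policies at once (hypothetical filename).
oc apply -f policies.yaml

# Watch node status; nodes should go SchedulingDisabled one at a time.
oc get nodes -w
```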
Verified this bug on 4.6 with the following images:

quay.io/openshift-release-dev/ocp-v4.0-art-dev:v4.6.0-202007020026.p0-ose-sriov-network-operator-20200702.024121
quay.io/openshift-release-dev/ocp-v4.0-art-dev:v4.6.0-202007020026.p0-ose-sriov-network-config-daemon-20200702.024121

Following the steps above, the two worker nodes were set unschedulable and drained one by one.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (OpenShift Container Platform 4.6 GA Images), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2020:4196