
Bug 1852647

Summary: [sriov] Nodes are drained simultaneously if 2 policies are applied at the same time
Product: OpenShift Container Platform
Component: Networking
Networking sub component: SR-IOV
Reporter: Peng Liu <pliu>
Assignee: Peng Liu <pliu>
QA Contact: zhaozhanqi <zzhao>
Docs Contact:
Status: CLOSED ERRATA
Severity: high
Priority: unspecified
Version: 4.6
Target Milestone: ---
Target Release: 4.6.0
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Clones: 1852648 (view as bug list)
Environment:
Last Closed: 2020-10-27 16:10:31 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On:
Bug Blocks: 1852648

Description Peng Liu 2020-07-01 02:55:58 UTC
Description of problem:
Nodes are drained in parallel if 2 policies are applied at the same time

Version-Release number of selected component (if applicable):
4.5

How reproducible:

Steps to Reproduce:
1. Deploy the SR-IOV Network Operator on a cluster with at least 2 SR-IOV capable worker nodes

2. Apply the following policies together

```
---
apiVersion: sriovnetwork.openshift.io/v1
kind: SriovNetworkNodePolicy
metadata:
  name: policy-net-2
spec:
  resourceName: nic2
  nodeSelector:
    kubernetes.io/hostname: worker-0
    feature.node.kubernetes.io/network-sriov.capable: "true"
  priority: 99
  mtu: 9000
  numVfs: 4
  nicSelector:
    pfNames: ['ens803f0#0-0']
  isRdma: false
---
apiVersion: sriovnetwork.openshift.io/v1
kind: SriovNetworkNodePolicy
metadata:
  name: policy-net-2-vfio
spec:
  resourceName: nic2vfio
  nodeSelector:
    feature.node.kubernetes.io/network-sriov.capable: "true"
  priority: 99
  mtu: 9000
  numVfs: 4
  nicSelector:
    vendor: "8086"
    pfNames: ['ens803f0#0-0']
  deviceType: vfio-pci
  isRdma: false
```
3. Watch the worker nodes while the policies are being rolled out (see the command sketch below)
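
A minimal way to drive steps 2 and 3 from the command line (a sketch, assuming the two manifests above are saved as policies.yaml and that the operator is installed in its default openshift-sriov-network-operator namespace):

```
# Apply both policies in one shot so they are reconciled at the same time
oc apply -n openshift-sriov-network-operator -f policies.yaml

# Watch the worker nodes; with the bug present, more than one node
# flips to SchedulingDisabled at the same time
oc get nodes -w
```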

Actual results:
More than one worker node was drained and set to 'unschedulable' at the same time.

Expected results:
Worker nodes should be drained one at a time, in sequence.

Additional info:
The workaround is to apply the second policy only after the first one has been fully synced on all nodes.
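
One way to confirm that sync before applying the second policy (a sketch; it assumes the default openshift-sriov-network-operator namespace and the syncStatus field exposed by the operator's SriovNetworkNodeState objects):

```
# Wait until every node reports Succeeded before applying the next policy
oc get sriovnetworknodestates -n openshift-sriov-network-operator \
  -o custom-columns=NODE:.metadata.name,SYNC:.status.syncStatus
```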

Comment 4 Peng Liu 2020-07-01 13:36:40 UTC
The manifests in step 2 should be:

```
---
apiVersion: sriovnetwork.openshift.io/v1
kind: SriovNetworkNodePolicy
metadata:
  name: policy-net-2
spec:
  resourceName: nic2
  nodeSelector:
    feature.node.kubernetes.io/network-sriov.capable: "true"
  priority: 99
  mtu: 9000
  numVfs: 4
  nicSelector:
    pfNames: ['ens803f0#0-0']
  isRdma: false
---
apiVersion: sriovnetwork.openshift.io/v1
kind: SriovNetworkNodePolicy
metadata:
  name: policy-net-2-vfio
spec:
  resourceName: nic2vfio
  nodeSelector:
    feature.node.kubernetes.io/network-sriov.capable: "true"
  priority: 99
  mtu: 9000
  numVfs: 4
  nicSelector:
    vendor: "8086"
    pfNames: ['ens803f0#1-1']
  deviceType: vfio-pci
  isRdma: false
```
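
Once both policies have been synced, the VFs they request should show up as allocatable extended resources on the matching nodes. A quick check (a sketch; it assumes the operator advertises resources under the default openshift.io/ prefix, with worker-0 as an example node name):

```
# The resourceName values above should appear as openshift.io/nic2 and openshift.io/nic2vfio
oc get node worker-0 -o jsonpath='{.status.allocatable}'
```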

Comment 5 zhaozhanqi 2020-07-02 03:51:52 UTC
Verified this bug on 4.6 with the following images:
quay.io/openshift-release-dev/ocp-v4.0-art-dev:v4.6.0-202007020026.p0-ose-sriov-network-operator-20200702.024121
quay.io/openshift-release-dev/ocp-v4.0-art-dev:v4.6.0-202007020026.p0-ose-sriov-network-config-daemon-20200702.024121

Following the above steps, the two nodes were made unschedulable one at a time.

Comment 7 errata-xmlrpc 2020-10-27 16:10:31 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (OpenShift Container Platform 4.6 GA Images), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:4196