+++ This bug was initially created as a clone of Bug #1927895 +++

Description of problem:
When mergeRuntimeConfig is called, the global RuntimeConfig gets overwritten with the result of the merge, which then affects the subsequent delegates.

How reproducible:
Always.

Steps to Reproduce:
(could someone provide these so we can have QE take a look, thanks!)

Upstream PR @ https://github.com/intel/multus-cni/pull/607
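The failure mode described above is a shared-state aliasing bug: the merge writes into the global RuntimeConfig rather than a copy, so the first delegate's runtime data leaks into every later delegate. A minimal Go sketch of the pattern and of the copy-before-merge fix (the RuntimeConfig struct and function names here are simplified stand-ins, not the actual Multus code; see the upstream PR for the real change):

```go
package main

import "fmt"

// RuntimeConfig is a simplified stand-in for the CNI runtime configuration
// (hypothetical type; the real Multus types live in the upstream repo).
type RuntimeConfig struct {
	DeviceID string
}

// mergeRuntimeConfigBuggy mimics the pre-fix behavior: it merges a
// delegate's override directly into the shared global config and returns
// the same pointer, so every delegate sees earlier delegates' values.
func mergeRuntimeConfigBuggy(global *RuntimeConfig, deviceID string) *RuntimeConfig {
	if deviceID != "" {
		global.DeviceID = deviceID // mutates shared state
	}
	return global
}

// mergeRuntimeConfigFixed copies the global config first, so each delegate
// gets an isolated merge result and the global stays untouched.
func mergeRuntimeConfigFixed(global *RuntimeConfig, deviceID string) *RuntimeConfig {
	merged := *global // a value copy suffices for this simplified struct
	if deviceID != "" {
		merged.DeviceID = deviceID
	}
	return &merged
}

func main() {
	global := &RuntimeConfig{}
	a := mergeRuntimeConfigBuggy(global, "0000:01:02.4")
	b := mergeRuntimeConfigBuggy(global, "") // second delegate has no override...
	fmt.Println(a.DeviceID, b.DeviceID)      // ...yet it inherits the first delegate's device

	global = &RuntimeConfig{}
	c := mergeRuntimeConfigFixed(global, "0000:01:02.4")
	d := mergeRuntimeConfigFixed(global, "")
	fmt.Println(c.DeviceID, d.DeviceID) // second delegate stays clean
}
```

This matches the symptom in the steps below: with the buggy merge, the second delegate reports the first delegate's device, which is why net1 and net2 ended up with the same pci-address.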
Steps to Reproduce:

Used the yaml files from https://github.com/openshift/app-netutil, specifically from https://github.com/openshift/app-netutil/tree/master/samples/dpdk_app/sriov

In an OCP or Kubernetes cluster with Multus as the default CNI, deploy a pod with two SR-IOV VFs, each VF from its own network. Replace kubectl with oc below if on OCP.

1) Download the yaml files:
git clone https://github.com/openshift/app-netutil.git
cd app-netutil/samples/dpdk_app/sriov
-- or --
go get github.com/openshift/app-netutil
cd $GOPATH/src/github.com/openshift/app-netutil/samples/dpdk_app/sriov

2) If the SR-IOV Device Plugin is not deployed, modify the config map to match the system, create the configMap, then deploy the SR-IOV DP:
vi configMap.yaml
kubectl create -f configMap.yaml
kubectl create -f sriovdp-daemonset.yaml

3) Create the Network Attachment Definition for each VF (if the configMap already existed, update each Network Attachment Definition to use a resourceName from the existing configMap):
kubectl create -f netAttach-sriov-dpdk-a.yaml
kubectl create -f netAttach-sriov-dpdk-b.yaml

4) Create a pod that uses the SR-IOV VFs. I was using the dpdk-app-centos image from the app-netutil repo; however, any container will work. Once the pod comes up, view the annotations associated with the pod.
kubectl create -f sriov-pod-1.yaml
kubectl describe pod sriov-pod-1
:
Annotations:  k8s.v1.cni.cncf.io/network-status:
                [{ "name": "", "interface": "eth0", "ips": [ "10.244.0.5" ], "mac": "4e:cd:27:8a:38:e5", "default": true, "dns": {} },
                 { "name": "default/sriov-net-a", "interface": "net1", "dns": {}, "device-info": { "type": "pci", "version": "1.0.0", "pci": { "pci-address": "0000:01:02.4" } } },
                 { "name": "default/sriov-net-b", "interface": "net2", "dns": {}, "device-info": { "type": "pci", "version": "1.0.0", "pci": { "pci-address": "0000:01:02.4" } } }]
              k8s.v1.cni.cncf.io/networks: sriov-net-a, sriov-net-b

From this output, the "device-info" for "net1" and "net2" is the same ("pci-address": "0000:01:02.4" for both). They should be different.
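The duplicate check at the end of the steps above can be made mechanical by parsing the network-status annotation and comparing the per-interface PCI addresses. A sketch in Go (the struct below models only the annotation fields needed here, and the helper name is illustrative, not from the Multus codebase):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// networkStatus models the subset of the k8s.v1.cni.cncf.io/network-status
// annotation needed for the duplicate-device check.
type networkStatus struct {
	Name       string `json:"name"`
	Interface  string `json:"interface"`
	DeviceInfo *struct {
		PCI *struct {
			PCIAddress string `json:"pci-address"`
		} `json:"pci"`
	} `json:"device-info,omitempty"`
}

// duplicatePCIAddresses lists every PCI address claimed by more than one
// interface in the annotation, which is exactly the bug's symptom.
func duplicatePCIAddresses(annotation string) ([]string, error) {
	var statuses []networkStatus
	if err := json.Unmarshal([]byte(annotation), &statuses); err != nil {
		return nil, err
	}
	seen := map[string][]string{} // pci-address -> interfaces using it
	for _, s := range statuses {
		if s.DeviceInfo == nil || s.DeviceInfo.PCI == nil {
			continue // e.g. the default eth0 entry has no device-info
		}
		addr := s.DeviceInfo.PCI.PCIAddress
		seen[addr] = append(seen[addr], s.Interface)
	}
	var dups []string
	for addr, ifaces := range seen {
		if len(ifaces) > 1 {
			dups = append(dups, fmt.Sprintf("%s shared by %v", addr, ifaces))
		}
	}
	return dups, nil
}

func main() {
	// Trimmed version of the buggy annotation from the steps above.
	buggy := `[
	  {"name":"default/sriov-net-a","interface":"net1","device-info":{"pci":{"pci-address":"0000:01:02.4"}}},
	  {"name":"default/sriov-net-b","interface":"net2","device-info":{"pci":{"pci-address":"0000:01:02.4"}}}
	]`
	dups, err := duplicatePCIAddresses(buggy)
	if err != nil {
		panic(err)
	}
	fmt.Println(dups) // reports 0000:01:02.4 shared by net1 and net2
}
```

The annotation string can be pulled from a live pod with something like `kubectl get pod sriov-pod-1 -o jsonpath='{.metadata.annotations.k8s\.v1\.cni\.cncf\.io/network-status}'` (dots in the annotation key escaped for jsonpath). An empty result from the helper means each VF got its own device, as expected after the fix.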
@zzhao It seems verifying this requires deploying a pod with two SR-IOV VFs, and you verified the same bug in https://bugzilla.redhat.com/show_bug.cgi?id=1927895. Could you help verify this one? Thanks!
Verified this bug on 4.7.0-0.nightly-2021-04-13-144216

# oc describe pod sriov-pod-1 -n z1
Name:         sriov-pod-1
Namespace:    z1
Priority:     0
Node:         dell-per740-14.rhts.eng.pek2.redhat.com/10.73.116.62
Start Time:   Wed, 14 Apr 2021 02:39:44 -0400
Labels:       <none>
Annotations:  k8s.ovn.org/pod-networks:
                {"default":{"ip_addresses":["10.131.0.63/23"],"mac_address":"0a:58:0a:83:00:3f","gateway_ips":["10.131.0.1"],"ip_address":"10.131.0.63/23"...
              k8s.v1.cni.cncf.io/network-status:
                [{ "name": "", "interface": "eth0", "ips": [ "10.131.0.63" ], "mac": "0a:58:0a:83:00:3f", "default": true, "dns": {} },
                 { "name": "z1/intel-dpdk-rhcos", "interface": "net1", "dns": {}, "device-info": { "type": "pci", "version": "1.0.0", "pci": { "pci-address": "0000:3b:0a.1" } } },
                 { "name": "z1/intel-dpdk-rhcos2", "interface": "net2", "dns": {}, "device-info": { "type": "pci", "version": "1.0.0", "pci": { "pci-address": "0000:3b:0a.0" } } }]
              k8s.v1.cni.cncf.io/networks: intel-dpdk-rhcos, intel-dpdk-rhcos2
              k8s.v1.cni.cncf.io/networks-status:
                [{ "name": "", "interface": "eth0", "ips": [ "10.131.0.63" ], "mac": "0a:58:0a:83:00:3f", "default": true, "dns": {} },
                 { "name": "z1/intel-dpdk-rhcos", "interface": "net1", "dns": {}, "device-info": { "type": "pci", "version": "1.0.0", "pci": { "pci-address": "0000:3b:0a.1" } } },
                 { "name": "z1/intel-dpdk-rhcos2", "interface": "net2", "dns": {}, "device-info": { "type": "pci", "version": "1.0.0", "pci": { "pci-address": "0000:3b:0a.0" } } }]
              openshift.io/scc: privileged
Status:       Running
IP:           10.131.0.63

In this output, "net1" and "net2" report distinct PCI addresses (0000:3b:0a.1 and 0000:3b:0a.0), as expected.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (OpenShift Container Platform 4.7.7 bug fix update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2021:1149