Description of problem: When a vfio device is requested, the SR-IOV operator configures the VFs on the node, but the device plugin does not expose them in the node resources. This is caused by a bug in the SR-IOV operator that selects deviceType ETH for vfio.
Bug fixed. Tested with the latest upstream 4.8 version: quay.io/openshift/origin-sriov-network-operator:latest

OCP:
Server Version: 4.7.0-rc.3
Kubernetes Version: v1.20.0+bd9e442

Test SriovNetworkNodePolicy:

apiVersion: sriovnetwork.openshift.io/v1
kind: SriovNetworkNodePolicy
metadata:
  name: my-policy
  namespace: openshift-sriov-network-operator
spec:
  resourceName: sriovnic
  nodeSelector:
    node-role.kubernetes.io/worker-cnf: ""
  priority: 10
  numVfs: 5
  nicSelector:
    vendor: "15b3"
    pfNames: ["ens8f0"]
  deviceType: "vfio-pci"
  isRdma: false

Interface ens8f0 - Intel

Test SriovNetwork:

apiVersion: sriovnetwork.openshift.io/v1
kind: SriovNetwork
metadata:
  name: mynetwork
  namespace: openshift-sriov-network-operator
spec:
  networkNamespace: bugvalidation
  ipam: |-
    {
      "type": "static"
    }
  resourceName: sriovnic

After applying the SriovNetworkNodePolicy, the relevant resource appears in the node description:

oc describe node helix11.lab.eng.tlv2.redhat.com
...
Capacity:
  cpu:                   80
  ephemeral-storage:     457275Mi
  hugepages-1Gi:         0
  hugepages-2Mi:         0
  memory:                263596864Ki
  openshift.io/sriovnic: 5

After creating pods, the SR-IOV NICs are allocated correctly:

apiVersion: v1
kind: Pod
metadata:
  name: pod-a
  namespace: bugvalidation
  annotations:
    k8s.v1.cni.cncf.io/networks: |-
      [
        {
          "name": "mynetwork",
          "ips": ["192.168.1.1/24"]
        }
      ]
spec:
  containers:
  - name: samplepod
    command: ["/bin/bash", "-c", "sleep INF"]
    image: centos:7

Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource               Requests     Limits
  --------               --------     ------
  cpu                    1039m (1%)   0 (0%)
  memory                 3308Mi (1%)  0 (0%)
  ephemeral-storage      0 (0%)       0 (0%)
  hugepages-1Gi          0 (0%)       0 (0%)
  hugepages-2Mi          0 (0%)       0 (0%)
  openshift.io/sriovnic  2            2

The relevant environment variable is present in the pod:

[root@pod-a /]# env
PCIDEVICE_OPENSHIFT_IO_SRIOVNIC=0000:d8:02.1

and the device is available in the pod under /dev/vfio:

[root@pod-a /]# ll /dev/vfio/
total 0
crw-rw-rw-. 1 root  801 234,   1 Feb 22 12:56 78
crw-rw-rw-. 1 root root  10, 196 Feb 22 12:56 vfio
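As a side note, the PCIDEVICE_* variable name shown above appears to be derived mechanically from the resource name. The snippet below is purely illustrative: it sketches the apparent convention (uppercase, with "." and "/" replaced by "_") inferred from the output above, and is not taken from the device plugin's code.

```shell
# Derive the device-plugin env var name from the resource name
# (illustrative; convention inferred from the pod output above).
resource="openshift.io/sriovnic"
env_name="PCIDEVICE_$(echo "$resource" | tr 'a-z' 'A-Z' | tr './' '__')"
echo "$env_name"   # PCIDEVICE_OPENSHIFT_IO_SRIOVNIC
```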
Thanks @nkononov for helping verify this issue. I also ran a test with the image registry.svc.ci.openshift.org/ocp/4.8@sha256:0209b24d347f012d6e7fe04e3fc6e7c25f76ee24e3c5afda60805861c6ebc7e8 and it works well. Moving this bug to 'verified'.
We're asking the following questions to evaluate whether or not this bug warrants blocking an upgrade edge from either the previous X.Y or X.Y.Z. The ultimate goal is to avoid delivering an update which introduces new risk or reduces cluster functionality in any way. Sample answers are provided to give more context, and the UpgradeBlocker keyword has been added to this bug. If the impact statement indicates blocking edges is not warranted, please remove the UpgradeBlocker keyword. The expectation is that the assignee answers these questions.

Who is impacted? If we have to block upgrade edges based on this issue, which edges would need blocking?
* example: Customers upgrading from 4.y.z to 4.y+1.z running on GCP with thousands of namespaces, approximately 5% of the subscribed fleet
* example: All customers upgrading from 4.y.z to 4.y+1.z fail approximately 10% of the time

What is the impact? Is it serious enough to warrant blocking edges?
* example: Up to 2 minute disruption in edge routing
* example: Up to 90 seconds of API downtime
* example: etcd loses quorum and you have to restore from backup

How involved is remediation (even moderately serious impacts might be acceptable if they are easy to mitigate)?
* example: Issue resolves itself after five minutes
* example: Admin uses oc to fix things
* example: Admin must SSH to hosts, restore from backups, or other non-standard admin activities

Is this a regression (if all previous versions were also vulnerable, updating to the new, vulnerable version does not increase exposure)?
* example: No, it's always been like this; we just never noticed
* example: Yes, from 4.y.z to 4.y+1.z, or from 4.y.z to 4.y.z+1
The SR-IOV operator is an OLM-installed operator, not part of the OpenShift core release image, so I'm dropping UpgradeBlocker. I still think it is useful to work up an impact statement responding to the above template, in case that informs what the SR-IOV maintainers need to do to feed a skip or equivalent blocker into the OLM catalog pipeline.
(In reply to W. Trevor King from comment #4)
> We're asking the following questions to evaluate whether or not this bug
> warrants blocking an upgrade edge from either the previous X.Y or X.Y.Z.
> The ultimate goal is to avoid delivering an update which introduces new risk
> or reduces cluster functionality in any way. Sample answers are provided to
> give more context and the UpgradeBlocker keyword has been added to this bug.
> If the impact statement indicated blocking edges is not warranted, please
> remove the UpgradeBlocker keyword. The expectation is that the assignee
> answers these questions.
>
> Who is impacted? If we have to block upgrade edges based on this issue,
> which edges would need blocking?

Customers upgrading from 4.y.z to 4.7.0 running on bare metal with a SriovNetworkNodePolicy CR defined using the vfio-pci deviceType.

> What is the impact? Is it serious enough to warrant blocking edges?

All SR-IOV pods that use a vfio-pci device as an additional pod network will fail to be created after a node reboot.

> How involved is remediation (even moderately serious impacts might be
> acceptable if they are easy to mitigate)?

Admin uses oc to fix:
1) disable the network-resources-injector by patching the default SriovOperatorConfig CR
2) edit existing SriovNetworkNodePolicy CRs that use the vfio-pci deviceType so they do not specify "linkType: eth" explicitly

> Is this a regression (if all previous versions were also vulnerable,
> updating to the new, vulnerable version does not increase exposure)?

Yes, from 4.y.z to 4.7.0.
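The two remediation steps above can be sketched with oc as follows. This is a sketch, not a verified procedure: "my-policy" is the example policy name from the test in comment #2, "enableInjector" is the SriovOperatorConfig field controlling the network-resources-injector, and step 2 assumes the policy actually carries a spec.linkType field to remove.

```shell
# 1) Disable the network-resources-injector on the default SriovOperatorConfig:
oc patch sriovoperatorconfig default \
  -n openshift-sriov-network-operator \
  --type merge -p '{"spec": {"enableInjector": false}}'

# 2) Drop the explicit "linkType: eth" from a vfio-pci policy
#    ("my-policy" is the example policy name from comment #2):
oc patch sriovnetworknodepolicy my-policy \
  -n openshift-sriov-network-operator \
  --type json -p '[{"op": "remove", "path": "/spec/linkType"}]'
```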
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Moderate: OpenShift Container Platform 4.8.2 bug fix and security update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHSA-2021:2438