Bug 1930268 - Intel vfio devices are not exposed as resources
Summary: Intel vfio devices are not exposed as resources
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Networking
Version: 4.7
Hardware: Unspecified
OS: Unspecified
Priority: urgent
Severity: urgent
Target Milestone: ---
Target Release: 4.8.0
Assignee: zenghui.shi
QA Contact: zhaozhanqi
URL:
Whiteboard:
Depends On:
Blocks: 1930469
 
Reported: 2021-02-18 15:51 UTC by Sebastian Scheinkman
Modified: 2021-11-16 06:45 UTC
CC List: 6 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2021-07-27 22:45:22 UTC
Target Upstream Version:
Embargoed:




Links
GitHub: openshift/sriov-network-operator pull 471 (open) - Bug 1930268: Sync 18 2 20 - last updated 2021-02-18 15:53:23 UTC
Red Hat Product Errata: RHSA-2021:2438 - last updated 2021-07-27 22:47:27 UTC

Description Sebastian Scheinkman 2021-02-18 15:51:42 UTC
Description of problem:

When vfio devices are requested, the SR-IOV operator configures the VFs on the node, but the device plugin doesn't expose them as node resources.

This is caused by a bug in the SR-IOV operator that selects the deviceType as ETH for vfio devices.
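
For context, the operator renders the sriov-network-device-plugin configuration from the policy, and a resource is only advertised on the node when its selector matches the driver the VFs are actually bound to. A rough sketch of the resourceList entry one would expect for a vfio-pci policy (resource and interface names are taken from the test in comment 2 below; the exact config layout may differ by operator version):

{
  "resourceList": [
    {
      "resourceName": "sriovnic",
      "selectors": {
        "pfNames": ["ens8f0"],
        "drivers": ["vfio-pci"]
      }
    }
  ]
}

Per the description above, the operator instead generated the ETH/netdevice form for vfio policies, so the vfio-bound VFs were never matched and the resource was not exposed.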

Comment 2 Nikita 2021-02-22 13:13:59 UTC
Bug fixed.

Tested with the latest upstream 4.8 image: quay.io/openshift/origin-sriov-network-operator:latest
OCP:
Server Version: 4.7.0-rc.3
Kubernetes Version: v1.20.0+bd9e442

Test SriovNetworkNodePolicy:

apiVersion: sriovnetwork.openshift.io/v1
kind: SriovNetworkNodePolicy
metadata:
  name: my-policy
  namespace: openshift-sriov-network-operator
spec:
  resourceName: sriovnic
  nodeSelector:
    node-role.kubernetes.io/worker-cnf: ""
  priority: 10
  numVfs: 5
  nicSelector:
    vendor: "15b3"
    pfNames: ["ens8f0"]
  deviceType: "vfio-pci"
  isRdma: false


Interface ens8f0 - Intel 
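
For reference, applying the policy and inspecting the per-node state can be done along these lines (a sketch; the file name is a placeholder, the node name is the one used later in this comment):

oc apply -f sriovnetworknodepolicy.yaml
oc -n openshift-sriov-network-operator get sriovnetworknodestate helix11.lab.eng.tlv2.redhat.com -o yaml

Once the config daemon finishes syncing, the node state should list the VFs of ens8f0 bound to the vfio-pci driver.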

Test SriovNetwork:
apiVersion: sriovnetwork.openshift.io/v1
kind: SriovNetwork
metadata:
  name: mynetwork
  namespace: openshift-sriov-network-operator
spec:
  networkNamespace: bugvalidation
  ipam: |-
    { "type": "static" }
  resourceName: sriovnic
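
The operator should also render a NetworkAttachmentDefinition named after the SriovNetwork in the target namespace; a quick way to confirm (sketch):

oc -n bugvalidation get network-attachment-definitions mynetwork -o yaml

Its k8s.v1.cni.cncf.io/resourceName annotation should point at openshift.io/sriovnic, which is what ties the secondary network to the device plugin resource.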


After applying the SriovNetworkNodePolicy, the relevant resource appears in the node description:
oc describe node helix11.lab.eng.tlv2.redhat.com
Capacity:
  cpu:                                  80
  ephemeral-storage:                    457275Mi
  hugepages-1Gi:                        0
  hugepages-2Mi:                        0
  memory:                               263596864Ki
  openshift.io/sriovnic:                5
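
The same value can also be read directly from the node's allocatable field, e.g. (sketch):

oc get node helix11.lab.eng.tlv2.redhat.com -o jsonpath='{.status.allocatable.openshift\.io/sriovnic}'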

After creating pods, the SR-IOV NICs are allocated correctly:

apiVersion: v1
kind: Pod
metadata:
  name: pod-a
  namespace: bugvalidation
  annotations:
     k8s.v1.cni.cncf.io/networks: |-
       [
         {
           "name": "mynetwork",
           "ips": ["192.168.1.1/24"]
         }
       ]
spec:
  containers:
  - name: samplepod
    command: ["/bin/bash", "-c", "sleep INF"]
    image: centos:7
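
Note that the pod spec above relies on the network-resources-injector to add the SR-IOV resource request automatically based on the network annotation. If the injector is disabled, the request and limit would need to be set explicitly on the container, roughly like this:

    resources:
      requests:
        openshift.io/sriovnic: "1"
      limits:
        openshift.io/sriovnic: "1"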


Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource                             Requests     Limits
  --------                             --------     ------
  cpu                                  1039m (1%)   0 (0%)
  memory                               3308Mi (1%)  0 (0%)
  ephemeral-storage                    0 (0%)       0 (0%)
  hugepages-1Gi                        0 (0%)       0 (0%)
  hugepages-2Mi                        0 (0%)       0 (0%)
  openshift.io/sriovnic                2            2


The relevant env var is present in the pod:

[root@pod-a /]# env
PCIDEVICE_OPENSHIFT_IO_SRIOVNIC=0000:d8:02.1

and the device is available in the pod under /dev/vfio:
[root@pod-a /]# ll /dev/vfio/
total 0
crw-rw-rw-. 1 root  801 234,   1 Feb 22 12:56 78
crw-rw-rw-. 1 root root  10, 196 Feb 22 12:56 vfio
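
The group number can be cross-checked against the device's IOMMU group from inside the pod (a sketch; the PCI address comes from the env var above):

[root@pod-a /]# readlink /sys/bus/pci/devices/$PCIDEVICE_OPENSHIFT_IO_SRIOVNIC/iommu_group

The link should end in the same group number as the /dev/vfio entry (78 here).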

Comment 3 zhaozhanqi 2021-02-23 10:53:31 UTC
Thanks @nkononov for helping verify this issue.

I also tested with the image registry.svc.ci.openshift.org/ocp/4.8@sha256:0209b24d347f012d6e7fe04e3fc6e7c25f76ee24e3c5afda60805861c6ebc7e8 and it works well.

Moving this bug to 'verified'.

Comment 4 W. Trevor King 2021-03-05 21:39:08 UTC
We're asking the following questions to evaluate whether or not this bug warrants blocking an upgrade edge from either the previous X.Y or X.Y.Z.  The ultimate goal is to avoid delivering an update which introduces new risk or reduces cluster functionality in any way.  Sample answers are provided to give more context and the UpgradeBlocker keyword has been added to this bug.  If the impact statement indicates blocking edges is not warranted, please remove the UpgradeBlocker keyword.  The expectation is that the assignee answers these questions.

Who is impacted?  If we have to block upgrade edges based on this issue, which edges would need blocking?
* example: Customers upgrading from 4.y.Z to 4.y+1.z running on GCP with thousands of namespaces, approximately 5% of the subscribed fleet
* example: All customers upgrading from 4.y.z to 4.y+1.z fail approximately 10% of the time

What is the impact?  Is it serious enough to warrant blocking edges?
* example: Up to 2 minute disruption in edge routing
* example: Up to 90 seconds of API downtime
* example: etcd loses quorum and you have to restore from backup

How involved is remediation (even moderately serious impacts might be acceptable if they are easy to mitigate)?
* example: Issue resolves itself after five minutes
* example: Admin uses oc to fix things
* example: Admin must SSH to hosts, restore from backups, or perform other non-standard admin activities

Is this a regression (if all previous versions were also vulnerable, updating to the new, vulnerable version does not increase exposure)?
* example: No, it’s always been like this; we just never noticed
* example: Yes, from 4.y.z to 4.y+1.z Or 4.y.z to 4.y.z+1

Comment 5 W. Trevor King 2021-03-05 23:38:41 UTC
The SR-IOV operator is an OLM-installed operator, not part of the OpenShift core release image, so dropping UpgradeBlocker.  I still think it is useful to work up an impact statement responding to the above template, in case it informs what the SR-IOV maintainers need to do to feed a skip or equivalent blocker into the OLM catalog pipeline.

Comment 6 zenghui.shi 2021-03-06 01:49:20 UTC
(In reply to W. Trevor King from comment #4)
> We're asking the following questions to evaluate whether or not this bug
> warrants blocking an upgrade edge from either the previous X.Y or X.Y.Z. 
> The ultimate goal is to avoid delivering an update which introduces new risk
> or reduces cluster functionality in any way.  Sample answers are provided to
> give more context and the UpgradeBlocker keyword has been added to this bug.
> If the impact statement indicates blocking edges is not warranted, please
> remove the UpgradeBlocker keyword.  The expectation is that the assignee
> answers these questions.
> 
> Who is impacted?  If we have to block upgrade edges based on this issue,
> which edges would need blocking?

Customers upgrading from 4.y.z to 4.7.0 running on bare metal with a SriovNetworkNodePolicy CR defined using the vfio-pci deviceType.

> 
> What is the impact?  Is it serious enough to warrant blocking edges?

All SR-IOV pods that use a vfio-pci device as an additional pod network will fail to be created after a node reboot.


> How involved is remediation (even moderately serious impacts might be
> acceptable if they are easy to mitigate)?

Admin uses oc to fix (example commands are sketched below):
1) disable the network-resources-injector by patching the default SriovOperatorConfig CR
2) edit existing SriovNetworkNodePolicy CRs that use the vfio-pci deviceType so they do not specify "linkType: eth" explicitly
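
A rough sketch of the two steps (the policy name is a placeholder):

oc -n openshift-sriov-network-operator patch sriovoperatorconfig default \
  --type=merge -p '{"spec": {"enableInjector": false}}'
oc -n openshift-sriov-network-operator edit sriovnetworknodepolicy <policy-name>
# in the editor, drop the "linkType: eth" line from spec and keep deviceType: vfio-pci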


> 
> Is this a regression (if all previous versions were also vulnerable,
> updating to the new, vulnerable version does not increase exposure)?

Yes, from 4.y.z to 4.7.0

Comment 9 errata-xmlrpc 2021-07-27 22:45:22 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Moderate: OpenShift Container Platform 4.8.2 bug fix and security update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2021:2438

