Bug 2077506 - [backport-4.10] SR-IOV Network Device Plugin should handle offloaded VF instead of supporting only PF
Summary: [backport-4.10] SR-IOV Network Device Plugin should handle offloaded VF instead of supporting only PF
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Networking
Version: 4.10
Hardware: All
OS: All
Priority: medium
Severity: high
Target Milestone: ---
Target Release: 4.10.z
Assignee: Emilien Macchi
QA Contact: Ziv Greenberg
Docs Contact: Tomas 'Sheldon' Radej
URL:
Whiteboard:
Depends On: 2036948
Blocks:
 
Reported: 2022-04-21 13:22 UTC by Emilien Macchi
Modified: 2022-07-11 15:28 UTC
CC: 8 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of: 2036948
Environment:
Last Closed: 2022-07-11 15:27:56 UTC
Target Upstream Version:
Embargoed:




Links:
GitHub: openshift/sriov-network-operator pull 656 (open): Bug 2077506: backport fixes for OVS HW offload & dependencies (last updated 2022-05-19 14:01:54 UTC)
Red Hat Product Errata: RHBA-2022:5513 (last updated 2022-07-11 15:28:11 UTC)

Comment 16 Ziv Greenberg 2022-07-06 10:07:01 UTC
Hello,

I have verified that the "Net Filter" functionality is now working in the 4.10 release.
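
As a quick spot check, the discovered netFilter values can be pulled straight from the node state with a jsonpath query (the node name matches the one inspected later in this comment; adjust for your environment):

[cloud-user@installer-host ~]$ oc get sriovnetworknodestate ostest-wrvtd-worker-1 -n openshift-sriov-network-operator -o jsonpath='{.status.interfaces[*].netFilter}'

The full verification transcript follows.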

[cloud-user@installer-host ~]$ oc get clusterversion
NAME      VERSION                              AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.10.0-0.nightly-2022-06-08-150219   True        False         47h     Cluster version is 4.10.0-0.nightly-2022-06-08-150219
[cloud-user@installer-host ~]$
[cloud-user@installer-host ~]$
[cloud-user@installer-host ~]$ oc get csv -n openshift-sriov-network-operator
NAME                                         DISPLAY                      VERSION               REPLACES   PHASE
performance-addon-operator.v4.10.4           Performance Addon Operator   4.10.4                           Succeeded
sriov-network-operator.4.10.0-202206212036   SR-IOV Network Operator      4.10.0-202206212036              Succeeded
[cloud-user@installer-host ~]$
[cloud-user@installer-host ~]$
[cloud-user@installer-host ~]$ oc get sriovnetworknodepolicy -n openshift-sriov-network-operator -o yaml hwoffload9
apiVersion: sriovnetwork.openshift.io/v1
kind: SriovNetworkNodePolicy
metadata:
  creationTimestamp: "2022-07-04T08:59:48Z"
  generation: 1
  name: hwoffload9
  namespace: openshift-sriov-network-operator
  resourceVersion: "884407"
  uid: 2857d708-bb8a-4fa5-b958-ad57a238cfbc
spec:
  deviceType: netdevice
  isRdma: true
  nicSelector:
    netFilter: openstack/NetworkID:5f98e264-9922-46cc-b2aa-8c50e6d00c9e
  nodeSelector:
    feature.node.kubernetes.io/network-sriov.capable: "true"
  numVfs: 1
  priority: 99
  resourceName: hwoffload9
[cloud-user@installer-host ~]$
[cloud-user@installer-host ~]$
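
For context, the netFilter value uses the format openstack/NetworkID:<Neutron network UUID>, which lets the operator match a device by the OpenStack network it is attached to. If you need to look up that UUID yourself, a standard OpenStack CLI query does it (the network name below is a placeholder):

$ openstack network show <network-name> -f value -c id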
[cloud-user@installer-host ~]$ oc get net-attach-def -o yaml hwoffload9
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: '{"apiVersion":"k8s.cni.cncf.io/v1","kind":"NetworkAttachmentDefinition","metadata":{"annotations":null,"k8s.v1.cni.cncf.io/resourceName":"openshift.io/hwoffload9","name":"hwoffload9","namespace":"default"},"spec":{"config":"{\"cniVersion\":\"0.3.1\",
      \"name\":\"hwoffload9\",\"type\":\"host-device\",\"pciBusId\":\"0000:00:05.0\",\"ipam\":{}}"}}'
  creationTimestamp: "2022-07-04T09:01:51Z"
  generation: 1
  name: hwoffload9
  namespace: default
  resourceVersion: "885297"
  uid: a3b15305-5d8c-4083-9f81-6d0291fd34ae
spec:
  config: '{"cniVersion":"0.3.1", "name":"hwoffload9","type":"host-device","pciBusId":"0000:00:05.0","ipam":{}}'
[cloud-user@installer-host ~]$
[cloud-user@installer-host ~]$
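
For completeness, here is a minimal sketch of a pod that would consume this network. The annotation and resource name come from the objects above; the pod name and image are placeholders:

apiVersion: v1
kind: Pod
metadata:
  name: hwoffload9-test        # placeholder name
  namespace: default           # same namespace as the net-attach-def
  annotations:
    k8s.v1.cni.cncf.io/networks: hwoffload9
spec:
  containers:
  - name: test
    image: <test-image>        # placeholder image
    resources:
      requests:
        openshift.io/hwoffload9: "1"
      limits:
        openshift.io/hwoffload9: "1"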
[cloud-user@installer-host ~]$ oc describe SriovNetworkNodeState -n openshift-sriov-network-operator ostest-wrvtd-worker-1
Name:         ostest-wrvtd-worker-1
Namespace:    openshift-sriov-network-operator
Labels:       <none>
Annotations:  <none>
API Version:  sriovnetwork.openshift.io/v1
Kind:         SriovNetworkNodeState
Metadata:
  Creation Timestamp:  2022-07-04T08:58:20Z
  Generation:          3
  Managed Fields:
    API Version:  sriovnetwork.openshift.io/v1
    Fields Type:  FieldsV1
    fieldsV1:
      f:metadata:
        f:ownerReferences:
          .:
          k:{"uid":"857f7944-a356-4eed-a8b8-8810d221d2a1"}:
      f:spec:
        .:
        f:dpConfigVersion:
        f:interfaces:
    Manager:      sriov-network-operator
    Operation:    Update
    Time:         2022-07-04T08:59:48Z
    API Version:  sriovnetwork.openshift.io/v1
    Fields Type:  FieldsV1
    fieldsV1:
      f:status:
        f:interfaces:
        f:syncStatus:
    Manager:      sriov-network-config-daemon
    Operation:    Update
    Subresource:  status
    Time:         2022-07-04T09:55:01Z
  Owner References:
    API Version:           sriovnetwork.openshift.io/v1
    Block Owner Deletion:  true
    Controller:            true
    Kind:                  SriovNetworkNodePolicy
    Name:                  default
    UID:                   857f7944-a356-4eed-a8b8-8810d221d2a1
  Resource Version:        928311
  UID:                     f592fd89-c1e6-481a-8dcc-935023058afc
Spec:
  Dp Config Version:  884444
  Interfaces:
    Name:         ens6
    Num Vfs:      1
    Pci Address:  0000:00:06.0
    Vf Groups:
      Device Type:    netdevice
      Is Rdma:        true
      Policy Name:    hwoffload10
      Resource Name:  hwoffload10
      Vf Range:       0-0
    Name:             ens5
    Num Vfs:          1
    Pci Address:      0000:00:05.0
    Vf Groups:
      Device Type:    netdevice
      Is Rdma:        true
      Policy Name:    hwoffload9
      Resource Name:  hwoffload9
      Vf Range:       0-0
Status:
  Interfaces:
    Vfs:
      Device ID:    1000
      Driver:       virtio-pci
      Mac:          fa:16:3e:8e:9c:c4
      Pci Address:  0000:00:03.0
      Vendor:       1af4
      Vf ID:        0
    Device ID:      1000
    Driver:         virtio-pci
    Link Speed:     -1 Mb/s
    Link Type:      ETH
    Mac:            fa:16:3e:8e:9c:c4
    Name:           ens3
    Net Filter:     openstack/NetworkID:d8ee82bf-5455-4946-a83b-7dae18a93156
    Num Vfs:        1
    Pci Address:    0000:00:03.0
    Totalvfs:       1
    Vendor:         1af4
    Vfs:
      Device ID:    1018
      Driver:       mlx5_core
      Pci Address:  0000:00:05.0
      Vendor:       15b3
      Vf ID:        0
    Device ID:      1018
    Driver:         mlx5_core
    Net Filter:     openstack/NetworkID:5f98e264-9922-46cc-b2aa-8c50e6d00c9e
    Num Vfs:        1
    Pci Address:    0000:00:05.0
    Totalvfs:       1
    Vendor:         15b3
    Vfs:
      Device ID:    1018
      Driver:       mlx5_core
      Pci Address:  0000:00:06.0
      Vendor:       15b3
      Vf ID:        0
    Device ID:      1018
    Driver:         mlx5_core
    Net Filter:     openstack/NetworkID:6932dfd2-ed64-4dc7-89ad-5ace9445844a
    Num Vfs:        1
    Pci Address:    0000:00:06.0
    Totalvfs:       1
    Vendor:         15b3
  Sync Status:      Succeeded
Events:             <none>
[cloud-user@installer-host ~]$
[cloud-user@installer-host ~]$
[cloud-user@installer-host ~]$
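The same result can be confirmed non-interactively; the syncStatus field below is the one shown as "Sync Status: Succeeded" in the describe output above:

[cloud-user@installer-host ~]$ oc get sriovnetworknodestate ostest-wrvtd-worker-1 -n openshift-sriov-network-operator -o jsonpath='{.status.syncStatus}'
Succeeded
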

[cloud-user@installer-host ~]$ oc rsh hw-offload-testpmd
sh-4.4# testpmd -l 4,5,6 -w 0000:00:05.0 -w 0000:00:06.0 --socket-mem 1024 -n 4 -- -i --nb-cores=2 --auto-start --rxd=1024 --txd=1024
EAL: Detected 21 lcore(s)
EAL: Detected 1 NUMA nodes
Option -w, --pci-whitelist is deprecated, use -a, --allow option instead
Option -w, --pci-whitelist is deprecated, use -a, --allow option instead
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'PA'
EAL: No available hugepages reported in hugepages-2048kB
EAL: Probing VFIO support...
EAL:   Invalid NUMA socket, default to 0
EAL: Probe PCI driver: mlx5_pci (15b3:1018) device: 0000:00:05.0 (socket 0)
mlx5_pci: No available register for Sampler.
mlx5_pci: Size 0xFFFF is not power of 2, will be aligned to 0x10000.
EAL:   Invalid NUMA socket, default to 0
EAL: Probe PCI driver: mlx5_pci (15b3:1018) device: 0000:00:06.0 (socket 0)
mlx5_pci: No available register for Sampler.
mlx5_pci: Size 0xFFFF is not power of 2, will be aligned to 0x10000.
EAL: No legacy callbacks, legacy socket not created
Interactive-mode selected
Auto-start selected
testpmd: create a new mbuf pool <mb_pool_0>: n=163456, size=2176, socket=0
testpmd: preferred mempool ops selected: ring_mp_mc
Configuring Port 0 (socket 0)
Port 0: FA:16:3E:38:53:FD
Configuring Port 1 (socket 0)
Port 1: FA:16:3E:35:4F:28
Checking link statuses...
Done
Start automatic packet forwarding
io packet forwarding - ports=2 - cores=2 - streams=2 - NUMA support enabled, MP allocation mode: native
Logical Core 5 (socket 0) forwards packets on 1 streams:
  RX P=0/Q=0 (socket 0) -> TX P=1/Q=0 (socket 0) peer=02:00:00:00:00:01
Logical Core 6 (socket 0) forwards packets on 1 streams:
  RX P=1/Q=0 (socket 0) -> TX P=0/Q=0 (socket 0) peer=02:00:00:00:00:00

  io packet forwarding packets/burst=32
  nb forwarding cores=2 - nb forwarding ports=2
  port 0: RX queue number: 1 Tx queue number: 1
    Rx offloads=0x0 Tx offloads=0x0
    RX queue: 0
      RX desc=1024 - RX free threshold=64
      RX threshold registers: pthresh=0 hthresh=0  wthresh=0
      RX Offloads=0x0
    TX queue: 0
      TX desc=1024 - TX free threshold=0
      TX threshold registers: pthresh=0 hthresh=0  wthresh=0
      TX offloads=0x0 - TX RS bit threshold=0
  port 1: RX queue number: 1 Tx queue number: 1
    Rx offloads=0x0 Tx offloads=0x0
    RX queue: 0
      RX desc=1024 - RX free threshold=64
      RX threshold registers: pthresh=0 hthresh=0  wthresh=0
      RX Offloads=0x0
    TX queue: 0
      TX desc=1024 - TX free threshold=0
      TX threshold registers: pthresh=0 hthresh=0  wthresh=0
      TX offloads=0x0 - TX RS bit threshold=0
testpmd>
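
Once forwarding is running, the built-in counters give a quick sanity check from the same prompt, for example:

testpmd> show port stats all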


Thanks,
Ziv

Comment 18 errata-xmlrpc 2022-07-11 15:27:56 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (OpenShift Container Platform 4.10.22 bug fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2022:5513

