Bug 2075200 - VLAN filtering cannot be configured with Intel X710
Summary: VLAN filtering cannot be configured with Intel X710
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Container Native Virtualization (CNV)
Classification: Red Hat
Component: Networking
Version: 4.11.0
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: ---
Target Release: 4.11.0
Assignee: Petr Horáček
QA Contact: Yossi Segev
URL:
Whiteboard:
Depends On: 2026621
Blocks: 2040316 2040317
 
Reported: 2022-04-13 19:36 UTC by Ruth Netser
Modified: 2023-11-13 08:11 UTC (History)
14 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of: 2026621
Environment:
Last Closed: 2022-09-14 19:30:30 UTC
Target Upstream Version:
Embargoed:


Attachments


Links
System ID Private Priority Status Summary Last Updated
Red Hat Issue Tracker CNV-17584 0 None None None 2023-11-13 08:11:54 UTC
Red Hat Product Errata RHSA-2022:6526 0 None None None 2022-09-14 19:30:59 UTC

Description Ruth Netser 2022-04-13 19:36:48 UTC
+++ This bug was initially created as a clone of Bug #2026621 +++

Once CNV consumes RHEL 8.6, this bug should be moved to ON_QE

Created attachment 1843563 [details]
Diff of the failed configuration

Description of problem:
When any VLAN filtering is configured on Intel X710, the configuration fails to be applied, and dmesg contains many messages like the following:
[79627.840049] i40e 0000:19:00.1: Error I40E_AQ_RC_ENOSPC adding RX filters on PF, promiscuous mode forced on

The odd part is that it happens even when opening a single VLAN trunk, so it **should** not be a problem of lack of memory.


Version-Release number of selected component (if applicable):
RHEL 8.4
nmstate 1.0.2-14
NetworkManager in container, version 1.30.0-13.el8_4
NetworkManager on the host, version 1.30.0-10.el8_4


How reproducible:
Consistently on some PFs of the NIC: it always happens on the second PF but never on the third. The second PF had a wire connected; the third was disconnected.


Steps to Reproduce:
1. Get a host with Intel X710 NIC
2. Apply the following config:
    interfaces:
    - bridge:
        options:
          stp:
            enabled: false
        port:
        - name: eno2
          vlan:
            mode: trunk
            trunk-tags:
            - id: 1000
      ipv4:
        auto-dns: true
        dhcp: false
        enabled: false
      ipv6:
        enabled: false
      name: br1test
      state: up
      type: linux-bridge
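For reference, the steps above could be scripted roughly as follows. The file name and the dmesg grep are assumptions, not from the report; the report only says the config was applied:

```shell
# Sketch of the reproduction steps above (assumed file name br1test.yml;
# wrapped in a function so the commands stay together).
reproduce_vlan_trunk() {
  nmstatectl apply br1test.yml            # the YAML config from step 2
  bridge vlan show                        # eno2 should show trunk VLAN 1000
  dmesg | grep I40E_AQ_RC_ENOSPC || true  # i40e filter-exhaustion errors, if any
}
# On the affected host: reproduce_vlan_trunk
```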

Actual results:
Configuration fails; nmstate notices that the requested VLANs were not applied. dmesg contains a number of messages complaining about I40E_AQ_RC_ENOSPC; the count depends on how many VLAN IDs we attempted to open.


Expected results:
We should be able to apply at least a limited number of trunk IDs on hardware that has offloading capability.


Additional info:

This reproduces even if VLAN offloading is disabled through ethtool:
rx-vlan-offload: off
tx-vlan-offload: off
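Presumably the offloads were disabled with ethtool's -K option; a sketch (the interface name is from the report, the exact invocation is an assumption):

```shell
# Assumed commands for disabling VLAN offload; "rxvlan"/"txvlan" are ethtool's
# short names for rx-vlan-offload/tx-vlan-offload.
disable_vlan_offload() {
  local dev="$1"
  ethtool -K "$dev" rxvlan off txvlan off
  ethtool -k "$dev" | grep 'vlan-offload'   # both lines should report "off"
}
# On the affected host: disable_vlan_offload eno2
```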

Used NICs:
[root@cnv-qe-infra-32 /]# lspci  | grep 710
19:00.0 Ethernet controller: Intel Corporation Ethernet Controller X710 for 10GbE SFP+ (rev 02)
19:00.1 Ethernet controller: Intel Corporation Ethernet Controller X710 for 10GbE SFP+ (rev 02)
19:00.2 Ethernet controller: Intel Corporation Ethernet Controller X710 for 10GbE SFP+ (rev 02)
19:00.3 Ethernet controller: Intel Corporation Ethernet Controller X710 for 10GbE SFP+ (rev 02)

--- Additional comment from Petr Horáček on 2021-11-25 11:01:30 UTC ---



--- Additional comment from Fernando F. Mancera on 2021-11-25 11:09:19 UTC ---

Debugging with nmcli and iproute2. I think this is probably a NM or kernel bug.

--- Additional comment from Petr Horáček on 2021-11-25 11:25:07 UTC ---

After rebooting the host and trying again, the dmesg messages stopped appearing (is a restart needed to clear the memory?). However, configuring the VLAN trunk still fails.

--- Additional comment from Petr Horáček on 2021-11-25 11:27:35 UTC ---



--- Additional comment from Petr Horáček on 2021-12-07 16:20:34 UTC ---

This may be related to a very similar issue that happened with Pensando NICs: https://bugzilla.redhat.com/show_bug.cgi?id=1959512#c22

--- Additional comment from Gris Ge on 2021-12-13 08:51:36 UTC ---

Hi Petr,

This sounds like a kernel bug to me.
Can I close this as a duplicate of bug 1959512?

--- Additional comment from Petr Horáček on 2021-12-13 09:07:59 UTC ---

It's not clear to me that these two bugs are the same. Despite being in the same area, they are different: the one with Pensando caused the host to freeze, while this Intel one just silently ignores the configuration. Marking it as a duplicate would be dangerous, as we would skip the investigation and just assume it is already fixed. Could we move the BZ to kernel and let them evaluate it instead?

--- Additional comment from Petr Horáček on 2021-12-16 12:03:58 UTC ---

The same issue seems to affect the customer in the attached case: after we attempt to apply VLANs, they are not reflected on the NIC.

--- Additional comment from nijin ashok on 2021-12-17 08:55:10 UTC ---

[170980.608957] i40e 0000:1a:00.0: Error I40E_AQ_RC_ENOSPC adding RX filters on PF, promiscuous mode forced on

sosreport-node05-03102868-2021-12-16-nnpyfja]$ grep Ethernet sos_commands/pci/lspci_-tv
 +-[0000:5d]-+-00.0-[5e]--+-00.0  Intel Corporation Ethernet Controller 10G X550T
 |           |            \-00.1  Intel Corporation Ethernet Controller 10G X550T


head sos_commands/networking/ethtool_-i_eno1
driver: i40e
version: 4.18.0-305.28.1.rt7.100.el8_4.x
firmware-version: 3.31 0x80000c92 1.1747.0
expansion-rom-version: 
bus-info: 0000:1a:00.0
supports-statistics: yes
supports-test: yes
supports-eeprom-access: yes
supports-register-dump: yes
supports-priv-flags: yes

--- Additional comment from Gris Ge on 2021-12-20 05:07:30 UTC ---

Hi Petr and Nijin,

The bug 1959512 has updated the driver of i40e/ionic, could you use the newest kernel of 8.6 and try again?

Thank you!

--- Additional comment from nijin ashok on 2021-12-20 11:54:41 UTC ---

(In reply to nijin ashok from comment #9)
> [170980.608957] i40e 0000:1a:00.0: Error I40E_AQ_RC_ENOSPC adding RX filters
> on PF, promiscuous mode forced on
> 
> sosreport-node05-03102868-2021-12-16-nnpyfja]$ grep Ethernet
> sos_commands/pci/lspci_-tv
>  +-[0000:5d]-+-00.0-[5e]--+-00.0  Intel Corporation Ethernet Controller 10G
> X550T
>  |           |            \-00.1  Intel Corporation Ethernet Controller 10G
> X550T
> 
> 
> head sos_commands/networking/ethtool_-i_eno1
> driver: i40e
> version: 4.18.0-305.28.1.rt7.100.el8_4.x
> firmware-version: 3.31 0x80000c92 1.1747.0
> expansion-rom-version: 
> bus-info: 0000:1a:00.0
> supports-statistics: yes
> supports-test: yes
> supports-eeprom-access: yes
> supports-register-dump: yes
> supports-priv-flags: yes

Sorry, I don't think I included all the info. We have another customer (case 03102868) hitting the same issue. The logs above are from that customer's environment.

I cannot reproduce this on my test server (kernel 4.18.0-305.25.1.el8_4.x86_64), which has X710/X557-AT NICs. Although I get the error "I40E_AQ_RC_ENOSPC", the VLANs do get configured.

--- Additional comment from Fernando F. Mancera on 2021-12-20 12:03:57 UTC ---

(In reply to nijin ashok from comment #11)
> (In reply to nijin ashok from comment #9)
> > [170980.608957] i40e 0000:1a:00.0: Error I40E_AQ_RC_ENOSPC adding RX filters
> > on PF, promiscuous mode forced on
> > 
> > sosreport-node05-03102868-2021-12-16-nnpyfja]$ grep Ethernet
> > sos_commands/pci/lspci_-tv
> >  +-[0000:5d]-+-00.0-[5e]--+-00.0  Intel Corporation Ethernet Controller 10G
> > X550T
> >  |           |            \-00.1  Intel Corporation Ethernet Controller 10G
> > X550T
> > 
> > 
> > head sos_commands/networking/ethtool_-i_eno1
> > driver: i40e
> > version: 4.18.0-305.28.1.rt7.100.el8_4.x
> > firmware-version: 3.31 0x80000c92 1.1747.0
> > expansion-rom-version: 
> > bus-info: 0000:1a:00.0
> > supports-statistics: yes
> > supports-test: yes
> > supports-eeprom-access: yes
> > supports-register-dump: yes
> > supports-priv-flags: yes
> 
> Sorry, I think I didn't put all info. We have another customer (case
> 03102868) having the same issue. The logs above are from the customer's
> environment. 
> 
> I cannot reproduce this in my test server (kernel
> 4.18.0-305.25.1.el8_4.x86_64) which is having X710/X557-AT NICs. Although I
> get the error "I40E_AQ_RC_ENOSPC", the VLANs do get configured.

This seems to be a driver issue, assigning it to kernel component.

--- Additional comment from nijin ashok on 2021-12-21 09:55:00 UTC ---


> I cannot reproduce this in my test server (kernel
> 4.18.0-305.25.1.el8_4.x86_64) which is having X710/X557-AT NICs. Although I
> get the error "I40E_AQ_RC_ENOSPC", the VLANs do get configured.

Earlier I did this using nmcli; today I tried with nmstatectl and was able to reproduce the issue.

It works fine with nmcli.

~~~
# nmcli connection add type bridge ifname br0 con-name br0 bridge.vlan-filtering yes

# nmcli connection add type ethernet ifname ens1f0 con-name ens1f0 master br0 slave-type bridge bridge-port.vlans 2-3

# bridge vlan show
port              vlan-id  
ens1f0            1 PVID Egress Untagged
                  2
                  3
~~~

However, trying the same with nmstatectl fails while configuring the VLAN.

~~~
---
interfaces:
  - name: br0
    type: linux-bridge
    state: up
    bridge:
      options:
        stp:
          enabled: false
      port:
        - name: ens1f0
          vlan:
            mode: trunk
            trunk-tags:
              - id-range:
                  max: 3
                  min: 2


difference
==========
--- desired
+++ current
.........
.........
-    vlan:
-      enable-native: false
-      mode: trunk
-      trunk-tags:
-      - id: 2
-      - id: 3
+    stp-hairpin-mode: false
+    stp-path-cost: 100
+    stp-priority: 32
+    vlan: {}
~~~

However, when I ran "bridge vlan show" in a for loop while nmstatectl was applying the config, it showed the VLANs on the bridge ports even though nmstatectl reported an error. So it looks like the configuration is getting applied, but nmstatectl fails to read the current state correctly?
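A minimal sketch of the polling described here (the exact loop is an assumption), run in a second shell while nmstatectl applies the config:

```shell
# Poll the kernel's bridge VLAN table once per second while nmstatectl runs.
watch_bridge_vlans() {
  local seconds="${1:-30}"
  for _ in $(seq "$seconds"); do
    bridge vlan show   # VLANs appear here even when nmstatectl reports failure
    sleep 1
  done
}
# In another terminal: watch_bridge_vlans 60
```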

IIUC, nmstatectl retrieves the port state and config using nispor.

Even when I configure the bridge VLANs using nmcli, nispor does not show these VLANs on the bridge port.

~~~
# bridge vlan show
port              vlan-id  
ens1f0            1 PVID Egress Untagged
                  2
                  3

# npc iface ens1f0 |grep -A4 vlans
Unhandled AF_SPEC_BRIDGE_INFO: 0 [2, 0]
Unhandled AF_SPEC_BRIDGE_INFO: 1 [1, 0]
~~~


And this happens only with the i40e driver. I have another NIC in the _same_ server that uses the igb driver, and nmstatectl works fine on that interface. Also, nispor shows the correct state.

~~~
# nmstatectl apply bridge.yml
.....
.....
Desired state applied: 
---
interfaces:
- name: br1
  type: linux-bridge
  state: up
  bridge:
    options:
      stp:
        enabled: false
    port:
    - name: eno2
      vlan:
        mode: trunk
        trunk-tags:
        - id-range:
            max: 3
            min: 2


# npc  iface eno2|grep -A 10 vlans
Unhandled AF_SPEC_BRIDGE_INFO: 0 [2, 0]
Unhandled AF_SPEC_BRIDGE_INFO: 1 [1, 0]
    vlans:
      - vid: 1
        is_pvid: true
        is_egress_untagged: true
      - vid_range:
          - 2
          - 3
        is_pvid: false
        is_egress_untagged: false
~~~

Versions.

~~~
# ethtool -i ens1f0 
driver: i40e
version: 4.18.0-305.25.1.el8_4.x86_64
firmware-version: 7.10 0x800075fe 19.5.12
expansion-rom-version: 
bus-info: 0000:3b:00.0
supports-statistics: yes
supports-test: yes
supports-eeprom-access: yes
supports-register-dump: yes
supports-priv-flags: yes

# uname -r
4.18.0-305.25.1.el8_4.x86_64

# ethtool -i eno2
driver: igb
version: 4.18.0-305.25.1.el8_4.x86_64
firmware-version: 1.67, 0x80000faa, 19.5.12
expansion-rom-version: 
bus-info: 0000:19:00.1
supports-statistics: yes
supports-test: yes
supports-eeprom-access: yes
supports-register-dump: yes
supports-priv-flags: yes

# rpm -qa|egrep -i "nmstate|nispor"
python3-nispor-1.1.1-1.el8.noarch
python3-libnmstate-1.1.0-3.el8.noarch
nispor-1.1.1-1.el8.x86_64
nmstate-1.1.0-3.el8.noarch
~~~

--- Additional comment from Fernando F. Mancera on 2021-12-21 10:06:35 UTC ---

This is very useful! Then it seems the information is being exposed over netlink in a different way than on the other NICs. Is this possible? I thought all NICs should expose the information using the same format.

--- Additional comment from Petr Horáček on 2022-01-06 14:18:42 UTC ---

(In reply to Gris Ge from comment #10)
> Hi Petr and Nijin,
> 
> The bug 1959512 has updated the driver of i40e/ionic, could you use the
> newest kernel of 8.6 and try again?

Unfortunately we cannot. OpenShift is still on 8.4. It would be a high priority to get a fix for this backported all the way there as multiple customers are suffering from this issue. Would that be possible?

@ferferna do we have to get a fix for this in RHEL or can nispor/nmstate work around it too? I wonder what is the way forward here and who should I pursue to resolve this.

--- Additional comment from Gris Ge on 2022-01-07 02:56:27 UTC ---

If nmcli works, then nmstate should fix this, or at least work around it.
I will take a look before 14 Jan 2022.

--- Additional comment from Gris Ge on 2022-01-07 03:00:39 UTC ---

Hi Petr,

Is there a server in the RH lab I could use to reproduce this problem?

--- Additional comment from Gris Ge on 2022-01-07 03:04:41 UTC ---

Acceptance criteria: nmstate should support VLAN filtering on Intel X710 (i40e driver).

--- Additional comment from Gris Ge on 2022-01-07 04:43:23 UTC ---

Patch posted to https://github.com/nispor/nispor/pull/166

Backport scratch build:

RHEL 8.6: https://brewweb.engineering.redhat.com/brew/taskinfo?taskID=42249637
RHEL 8.5: https://brewweb.engineering.redhat.com/brew/taskinfo?taskID=42249613
RHEL 8.4: https://brewweb.engineering.redhat.com/brew/taskinfo?taskID=42249571


I have reproduced the problem and tested the 8.5 RPM on an IBM s390x(ppc64le) server with an Intel Corporation Ethernet Controller X710/X557-AT 10GBASE-T.

--- Additional comment from Gris Ge on 2022-01-07 04:57:29 UTC ---

Hi nijin ashok,

Could you use the scratch RPMs above to test in your environment?

Thank you!

--- Additional comment from Petr Horáček on 2022-01-07 08:07:03 UTC ---

(In reply to Gris Ge from comment #17)
> Hi Petr,
> 
> Is there server in RH lab I could use to reproduce this problem?

Our QE should have one, but it takes a while to get access to it. Let me know in case you still need it.

Thanks a lot for jumping at this problem and proposing those fixes.

--- Additional comment from Gris Ge on 2022-01-07 12:21:33 UTC ---

Hi Petr,

I don't need the server any more. Thanks!

--- Additional comment from nijin ashok on 2022-01-13 02:27:25 UTC ---

(In reply to Gris Ge from comment #20)
> Hi nijin ashok,
> 
> Could you use above scratch rpm to test in your environment?
> 
> Thank you!

Tested with the new build and nmstatectl was able to apply the VLANs successfully.

~~~
npc iface ens1f0 |grep -A8 vlans
Unhandled AF_SPEC_BRIDGE_INFO: 0 [2, 0]
Unhandled AF_SPEC_BRIDGE_INFO: 1 [1, 0]
Unhandled AF_SPEC_BRIDGE_INFO: 0 [2, 0]
Unhandled AF_SPEC_BRIDGE_INFO: 1 [1, 0]
    vlans:
      - vid: 1
        is_pvid: true
        is_egress_untagged: true
      - vid_range:
          - 100
          - 2412
        is_pvid: false
        is_egress_untagged: false
~~~

--- Additional comment from Gris Ge on 2022-01-13 12:40:30 UTC ---

Approving zstream for RHEL 8.4.0 and RHEL 8.5.0:
 * Patch tested by reporter.
 * The patch is trivial.
 * Big business impact on CNV for intel i40e card.
 * Has customer case attached.

--- Additional comment from RHEL Program Management on 2022-01-13 12:40:37 UTC ---

This BZ has been approved for cloning.

The BZ can now be cloned by anyone using the self-service cloning tool http://watson.int.open.paas.redhat.com/rules

For more information regarding ZStream and cloning please visit https://docs.google.com/document/d/1yL8iTHjxyQ7sI-fC4PcPjpOOyfF5ECGnK-B7r_QRZm4/edit#

--- Additional comment from RHEL Program Management Team on 2022-01-13 12:58:20 UTC ---

This bug has been copied as 8.4.0 stream bug#2040316 and now must be resolved in the current update release, blocker flag set.

This bug has been copied as 8.5.0 stream bug#2040317 and now must be resolved in the current update release, blocker flag set.

--- Additional comment from RHEL Program Management Team on 2022-01-17 09:57:48 UTC ---

This bug has been copied as 8.5.0 stream 2040317 and now must be resolved in the current update release, blocker flag set.

--- Additional comment from errata-xmlrpc on 2022-01-17 14:38:28 UTC ---

Bug report changed to ON_QA status by Errata System.
A QE request has been submitted for advisory RHEA-2021:84890-01
https://errata.devel.redhat.com/advisory/84890

--- Additional comment from errata-xmlrpc on 2022-01-17 14:38:38 UTC ---

This bug has been added to advisory RHEA-2021:84890 by auto/ptp-jenkins (auto/ptp-jenkins)

--- Additional comment from Gris Ge on 2022-01-20 11:34:26 UTC ---



--- Additional comment from Mingyu Shi on 2022-02-07 10:13:47 UTC ---

Verified with versions:
nmstate-1.2.1-0.2.alpha2.el8.x86_64
nispor-1.2.3-1.el8.x86_64
NetworkManager-1.36.0-0.7.el8.x86_64

[17:17:30@dell-per740-80 ~]0# cat i40e.yaml 
---
interfaces:
- name: ens1f0
  type: ethernet
  state: up
- name: br1
  type: linux-bridge
  state: up
  bridge:
    options:
      stp:
        enabled: false
    port:
    - name: ens1f0
      vlan:
        mode: trunk
        trunk-tags:
        - id-range:
            max: 3
            min: 2
[17:17:40@dell-per740-80 ~]0# nmstatectl set i40e.yaml 
/tmp/nmstatelog/2022-02-07-17:17:51-739689322.log
Desired state applied: 
---
interfaces:
- name: ens1f0
  type: ethernet
  state: up
- name: br1
  type: linux-bridge
  state: up
  bridge:
    options:
      stp:
        enabled: false
    port:
    - name: ens1f0
      vlan:
        mode: trunk
        trunk-tags:
        - id-range:
            max: 3
            min: 2
/tmp/nmstatelog/2022-02-07-17:17:51-739689322.0.log nmstatectl set i40e.yaml return 0
- name: ens1f0                                                (
  state: down                                                 |   state: up
[17:17:52@dell-per740-80 ~]0# nmstatectl show br1
---
dns-resolver:
  config: {}
  running:
    search:
    - rhts.eng.pek2.redhat.com
    server:
    - 10.73.2.107
    - 10.73.2.108
    - 10.66.127.10
route-rules:
  config: []
routes:
  config: []
  running: []
interfaces:
- name: br1
  type: linux-bridge
  state: up
  accept-all-mac-addresses: false
  bridge:
    options:
      gc-timer: 27243
      group-addr: 01:80:C2:00:00:00
      group-forward-mask: 0
      hash-max: 4096
      hello-timer: 0
      mac-ageing-time: 300
      multicast-last-member-count: 2
      multicast-last-member-interval: 100
      multicast-querier: false
      multicast-querier-interval: 25500
      multicast-query-interval: 12500
      multicast-query-response-interval: 1000
      multicast-query-use-ifaddr: false
      multicast-router: 1
      multicast-snooping: true
      multicast-startup-query-count: 2
      multicast-startup-query-interval: 3125
      stp:
        enabled: false
        forward-delay: 15
        hello-time: 2
        max-age: 20
        priority: 32768
    port:
    - name: ens1f0
      stp-hairpin-mode: false
      stp-path-cost: 100
      stp-priority: 32
      vlan:
        enable-native: false
        mode: trunk
        trunk-tags:
        - id-range:
            max: 3
            min: 2
  ethtool:
    feature:
      highdma: true
      rx-gro: true
      rx-gro-list: false
      rx-udp-gro-forwarding: false
      tx-checksum-ip-generic: true
      tx-esp-segmentation: true
      tx-fcoe-segmentation: false
      tx-generic-segmentation: true
      tx-gre-csum-segmentation: true
      tx-gre-segmentation: true
      tx-gso-list: false
      tx-gso-partial: true
      tx-gso-robust: false
      tx-ipxip4-segmentation: true
      tx-ipxip6-segmentation: true
      tx-nocache-copy: false
      tx-scatter-gather-fraglist: false
      tx-sctp-segmentation: false
      tx-tcp-ecn-segmentation: true
      tx-tcp-mangleid-segmentation: true
      tx-tcp-segmentation: true
      tx-tcp6-segmentation: true
      tx-tunnel-remcsum-segmentation: true
      tx-udp-segmentation: true
      tx-udp_tnl-csum-segmentation: true
      tx-udp_tnl-segmentation: true
      tx-vlan-hw-insert: true
      tx-vlan-stag-hw-insert: true
  ipv4:
    enabled: false
    address: []
    dhcp: false
  ipv6:
    enabled: false
    address: []
    autoconf: false
    dhcp: false
  lldp:
    enabled: false
  mac-address: F8:F2:1E:C2:45:30
  mtu: 1500
ovs-db:
  external_ids:
    hostname: dell-per740-80.rhts.eng.pek2.redhat.com
    rundir: /var/run/openvswitch
    system-id: a771c79e-d929-4cc9-b6cd-1a87c0597d79
  other_config: {}
[17:18:19@dell-per740-80 ~]0# npc iface ens1f0
[2022-02-07T09:18:40Z WARN  nispor::netlink::bridge_vlan] Unhandled AF_SPEC_BRIDGE_INFO: 0 [2, 0]
[2022-02-07T09:18:40Z WARN  nispor::netlink::bridge_vlan] Unhandled AF_SPEC_BRIDGE_INFO: 1 [1, 0]
---
- name: ens1f0
  iface_type: ethernet
  state: up
  mtu: 1500
  flags:
    - broadcast
    - lower_up
    - multicast
    - running
    - up
  mac_address: "f8:f2:1e:c2:45:30"
  permanent_mac_address: "f8:f2:1e:c2:45:30"
  controller: br1
  controller_type: bridge
  ethtool:
    pause:
      rx: false
      tx: false
      auto_negotiate: false
    features:
      fixed:
        esp-hw-offload: false
        esp-tx-csum-hw-offload: false
        fcoe-mtu: false
        loopback: false
        netns-local: false
        rx-all: false
        rx-fcs: false
        rx-gro-hw: false
        rx-lro: false
        rx-vlan-filter: true
        rx-vlan-stag-filter: false
        rx-vlan-stag-hw-parse: false
        tls-hw-record: false
        tls-hw-rx-offload: false
        tls-hw-tx-offload: false
        tx-checksum-fcoe-crc: false
        tx-checksum-ip-generic: false
        tx-esp-segmentation: false
        tx-fcoe-segmentation: false
        tx-gso-list: false
        tx-gso-robust: false
        tx-lockless: false
        tx-scatter-gather-fraglist: false
        tx-sctp-segmentation: false
        tx-tunnel-remcsum-segmentation: false
        tx-vlan-stag-hw-insert: false
        vlan-challenged: false
      changeable:
        highdma: true
        hw-tc-offload: true
        l2-fwd-offload: false
        rx-checksum: true
        rx-gro: true
        rx-gro-list: false
        rx-hashing: true
        rx-ntuple-filter: true
        rx-udp-gro-forwarding: false
        rx-udp_tunnel-port-offload: true
        rx-vlan-hw-parse: true
        tx-checksum-ipv4: true
        tx-checksum-ipv6: true
        tx-checksum-sctp: true
        tx-generic-segmentation: true
        tx-gre-csum-segmentation: true
        tx-gre-segmentation: true
        tx-gso-partial: true
        tx-ipxip4-segmentation: true
        tx-ipxip6-segmentation: true
        tx-nocache-copy: false
        tx-tcp-ecn-segmentation: true
        tx-tcp-mangleid-segmentation: false
        tx-tcp-segmentation: true
        tx-tcp6-segmentation: true
        tx-udp-segmentation: true
        tx-udp_tnl-csum-segmentation: true
        tx-udp_tnl-segmentation: true
        tx-vlan-hw-insert: true
    coalesce:
      rx_max_frames_irq: 256
      rx_usecs: 50
      rx_usecs_high: 0
      tx_max_frames_irq: 256
      tx_usecs: 50
      tx_usecs_high: 0
      use_adaptive_rx: true
      use_adaptive_tx: true
    ring:
      rx: 512
      rx_max: 4096
      tx: 512
      tx_max: 4096
    link_mode:
      auto_negotiate: false
      ours:
        - Autoneg
        - FIBRE
        - Pause
        - Asym_Pause
        - 1000baseX/Full
        - 10000baseSR/Full
      speed: 10000
      duplex: full
  bridge_port:
    stp_state: forwarding
    stp_priority: 32
    stp_path_cost: 100
    hairpin_mode: false
    bpdu_guard: false
    root_block: false
    multicast_fast_leave: false
    learning: true
    unicast_flood: true
    proxyarp: false
    proxyarp_wifi: false
    designated_root: 8000.f8f21ec24530
    designated_bridge: 8000.f8f21ec24530
    designated_port: 32769
    designated_cost: 0
    port_id: "0x8001"
    port_no: "0x1"
    change_ack: false
    config_pending: false
    message_age_timer: 0
    forward_delay_timer: 0
    hold_timer: 0
    multicast_router: temp_query
    multicast_flood: true
    multicast_to_unicast: false
    vlan_tunnel: false
    broadcast_flood: true
    group_fwd_mask: 0
    neigh_suppress: false
    isolated: false
    mrp_ring_open: false
    mrp_in_open: false
    mcast_eht_hosts_limit: 512
    mcast_eht_hosts_cnt: 0
    vlans:
      - vid: 1
        is_pvid: true
        is_egress_untagged: true
      - vid_range:
          - 2
          - 3
        is_pvid: false
        is_egress_untagged: false
  sriov:
    vfs: []

Comment 2 Yossi Segev 2022-05-18 12:54:55 UTC
Verified on a bare-metal cluster with
OCP 4.11 (4.11.0-0.nightly-2022-05-11-054135)
CNV 4.11.0
NetworkManager 1.32.10-5.el8_5
nmstate 1.2.1-1.el8.x86_64

1. On one of the cluster worker nodes - I searched for an Intel X710 device:

sh-4.4# lspci | grep 710
3b:00.0 Ethernet controller: Intel Corporation Ethernet Controller X710 for 10GbE SFP+ (rev 02)
3b:00.1 Ethernet controller: Intel Corporation Ethernet Controller X710 for 10GbE SFP+ (rev 02)
sh-4.4# 
sh-4.4# ethtool -i ens2f0
driver: i40e
...
bus-info: 0000:3b:00.0
...

(Once I found an X710 device, I searched for the interface that has that PCI address: 3b:00.0 in my case.)
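The PCI-address-to-interface lookup can also be done directly through sysfs; a sketch (the sysfs layout is standard, the helper name is mine):

```shell
# Print the kernel interface name(s) for a given PCI address, using the
# /sys/bus/pci/devices/<addr>/net/ directory that network devices expose.
pci_to_iface() {
  ls "/sys/bus/pci/devices/$1/net" 2>/dev/null
}
# On the node: pci_to_iface 0000:3b:00.0   (should print ens2f0)
```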

2. I applied the following policy:

apiVersion: nmstate.io/v1
kind: NodeNetworkConfigurationPolicy
metadata:
  name: br1test-nncp
spec:
  desiredState:
    interfaces:
    - bridge:
        options:
          stp:
            enabled: false
        port:
        - name: ens2f0
          vlan:
            mode: trunk
            trunk-tags:
            - id: 1000
      ipv4:
        auto-dns: true
        dhcp: false
        enabled: false
      ipv6:
        enabled: false
      name: br1test
      state: up
      type: linux-bridge
  nodeSelector:
    kubernetes.io/hostname: cnvqe-10.lab.eng.tlv2.redhat.com

3. After applying, I waited for the policy to complete configuration successfully.

[cnv-qe-jenkins@cnvqe-01 yossi]$ oc apply -f vlan-nncp.yaml 
nodenetworkconfigurationpolicy.nmstate.io/br1test-nncp created
[cnv-qe-jenkins@cnvqe-01 yossi]$ oc get nncp -w
NAME           STATUS        REASON
br1test-nncp   Progressing   ConfigurationProgressing
br1test-nncp   Progressing   ConfigurationProgressing
br1test-nncp   Progressing   ConfigurationProgressing
br1test-nncp   Available     SuccessfullyConfigured

4. On the node on which I applied the policy (using nodeSelector), I verified that the bridge was created successfully.

[cnv-qe-jenkins@cnvqe-01 yossi]$ oc debug node/cnvqe-10.lab.eng.tlv2.redhat.com
Starting pod/cnvqe-10labengtlv2redhatcom-debug ...
To use host binaries, run `chroot /host`
Pod IP: 10.46.41.13
If you don't see a command prompt, try pressing enter.
sh-4.4# chroot /host
sh-4.4# 
sh-4.4# nmcli c show
NAME              UUID                                  TYPE           DEVICE  
Wired Connection  47ca2063-95a4-4946-816e-6dfce656b73e  ethernet       eno1    
...
br1test           0ea60ba8-e48a-4ad2-8078-0822f1e58126  bridge         br1test 
ens2f0            5447ef1c-f832-476e-9d52-19b3e1619dcb  ethernet       ens2f0  
...

Comment 5 errata-xmlrpc 2022-09-14 19:30:30 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Important: OpenShift Virtualization 4.11.0 Images security and bug fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2022:6526

