Bug 2156342 - [nmstate] Unable to set MTU for DPDK ports
Summary: [nmstate] Unable to set MTU for DPDK ports
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 9
Classification: Red Hat
Component: nmstate
Version: 9.1
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: rc
Target Release: ---
Assignee: Gris Ge
QA Contact: Mingyu Shi
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2022-12-26 14:30 UTC by Karthik Sundaravel
Modified: 2023-05-09 08:22 UTC
CC List: 5 users

Fixed In Version: nmstate-2.2.3-2.el9
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2023-05-09 07:31:53 UTC
Type: Bug
Target Upstream Version:
Embargoed:


Attachments: none


Links
- GitHub nmstate/nmstate pull 2167 (Merged): "ovsdb: Switch to ovsdb plugin for querying all OVS related info", last updated 2023-01-04 07:00:07 UTC
- Red Hat Issue Tracker NMT-131, last updated 2023-01-22 15:29:12 UTC
- Red Hat Issue Tracker RHELPLAN-143189, last updated 2022-12-26 14:40:36 UTC
- Red Hat Product Errata RHBA-2023:2190, last updated 2023-05-09 07:32:04 UTC

Description Karthik Sundaravel 2022-12-26 14:30:00 UTC
Description of problem:
I used the template below to create a DPDK port and a user-space bridge, but the MTU could not be applied.

{
    "interfaces": [
        {
            "ipv4": {
                "dhcp": false,
                "enabled": false
            },
            "ipv6": {
                "autoconf": false,
                "dhcp": false,
                "enabled": false
            },
            "mtu": 2000,
            "name": "br-link-p",
            "state": "up",
            "type": "ovs-interface"
        },
        {
            "bridge": {
                "options": {
                    "datapath": "netdev",
                    "fail-mode": "standalone",
                    "mcast-snooping-enable": false,
                    "rstp": false,
                    "stp": false
                },
                "port": [
                    {
                        "name": "dpdk0"
                    },
                    {
                        "name": "br-link-p"
                    }
                ]
            },
            "mtu": 2000,
            "name": "br-link",
            "ovs-db": {
                "external_ids": {}
            },
            "state": "up",
            "type": "ovs-bridge"
        },
        {
            "dpdk": {
                "devargs": "0000:19:00.2",
                "rx-queue": 1
            },
            "ipv4": {
                "dhcp": false,
                "enabled": false
            },
            "ipv6": {
                "autoconf": false,
                "dhcp": false,
                "enabled": false
            },
            "mtu": 2000,
            "name": "dpdk0",
            "state": "up",
            "type": "ovs-interface"
        }
    ]
}
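
The traceback under "Actual results" below comes from feeding this template to os-net-config, which translates it into libnmstate calls. A typical invocation looks like the following (the config path is assumed for illustration, not taken from this report):

# Apply the network template; os-net-config hands the resulting state to nmstate.
os-net-config -c /etc/os-net-config/config.yaml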


Version-Release number of selected component (if applicable):
nmstatectl 2.2.2
nmcli tool, version 1.39.10-1.el9


Actual results:

Traceback (most recent call last):
  File "/usr/local/lib/python3.9/site-packages/os_net_config/impl_nmstate.py", line 1429, in apply
    self.set_ifaces(interfaces)
  File "/usr/local/lib/python3.9/site-packages/os_net_config/impl_nmstate.py", line 426, in set_ifaces
    netapplier.apply(state, verify_change=verify)
  File "/usr/lib/python3.9/site-packages/libnmstate/netapplier.py", line 29, in apply
    return apply_net_state(
  File "/usr/lib/python3.9/site-packages/libnmstate/clib_wrapper.py", line 138, in apply_net_state
    raise map_error(err_kind, err_msg)
libnmstate.error.NmstateVerificationError: Verification failure: br-link.interface.mtu desire '2000', current 'null'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/bin/os-net-config", line 8, in <module>
    sys.exit(main())
  File "/usr/local/lib/python3.9/site-packages/os_net_config/cli.py", line 343, in main
    files_changed = provider.apply(cleanup=opts.cleanup,
  File "/usr/local/lib/python3.9/site-packages/os_net_config/impl_nmstate.py", line 1432, in apply
    raise os_net_config.ConfigurationError(msg)
os_net_config.ConfigurationError: Error setting interfaces state: Verification failure: br-link.interface.mtu desire '2000', current 'null'


Expected results:

The DPDK ports and the user-space bridge should be created with the requested MTU applied.

Comment 1 Gris Ge 2022-12-27 02:19:50 UTC
The `br-link` interface is an `ovs-bridge`, which is not allowed to have MTU settings because it is a user-space-only interface.
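
For reference, a minimal sketch of the same state in nmstate YAML, with the MTU kept on the ovs-interface entries and removed from the bridge. Interface names follow the reporter's template; the file name and invocation are illustrative, and the DPDK interface MTU still needs the nmstate fix discussed in the next comment:

# Sketch only: MTU belongs on the ovs-interface entries, never on the ovs-bridge.
cat > br-link.yml <<'EOF'
interfaces:
- name: br-link-p
  type: ovs-interface
  state: up
  mtu: 2000                 # MTU goes on the ovs-interface...
- name: dpdk0
  type: ovs-interface
  state: up
  mtu: 2000
  dpdk:
    devargs: "0000:19:00.2"
    rx-queue: 1
- name: br-link
  type: ovs-bridge          # ...never on the user-space bridge itself
  state: up
  bridge:
    options:
      datapath: netdev
      fail-mode: standalone
      mcast-snooping-enable: false
      rstp: false
      stp: false
    port:
    - name: dpdk0
    - name: br-link-p
EOF
nmstatectl apply br-link.yml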

Comment 2 Gris Ge 2022-12-27 05:24:57 UTC
Failing to set the MTU on the DPDK OVS internal interface, however, is a bug in nmstate.

The DPDK OVS interface has no kernel representation; it exists only in OVSDB:

# ovs-vsctl list interface

_uuid               : 09e87e99-f6fd-4105-a06d-c568fe557b79
admin_state         : up
bfd                 : {}
bfd_status          : {}
cfm_fault           : []
cfm_fault_status    : []
cfm_flap_count      : []
cfm_health          : []
cfm_mpid            : []
cfm_remote_mpids    : []
cfm_remote_opstate  : []
duplex              : full
error               : []
external_ids        : {NM.connection.uuid="47c80658-4ffa-4995-9baf-2b6cf2d83e05"}
ifindex             : 898261
ingress_policing_burst: 0
ingress_policing_kpkts_burst: 0
ingress_policing_kpkts_rate: 0
ingress_policing_rate: 0
lacp_current        : []
link_resets         : 0
link_speed          : 10000000000
link_state          : up
lldp                : {}
mac                 : []
mac_in_use          : "e4:43:4b:5c:96:82"
mtu                 : 9000
mtu_request         : 9000
name                : iface0
ofport              : 2
ofport_request      : []
options             : {dpdk-devargs="0000:19:00.2"}
other_config        : {}
statistics          : {mac_local_errors=24, mac_remote_errors=15, ovs_rx_qos_drops=0, ovs_tx_failure_drops=0, ovs_tx_invalid_hwol_drops=0, ovs_tx_mtu_exceeded_drops=0, ovs_tx_qos_drops=0, rx_1024_to_1522_packets=0, rx_128_to_255_packets=413, rx_1523_to_max_packets=0, rx_1_to_64_packets=0, rx_256_to_511_packets=1753, rx_512_to_1023_packets=0, rx_65_to_127_packets=0, rx_broadcast_packets=0, rx_bytes=820798, rx_crc_errors=0, rx_dropped=0, rx_errors=0, rx_fragmented_errors=0, rx_illegal_byte_errors=0, rx_jabber_errors=0, rx_length_errors=0, rx_mac_short_dropped=0, rx_mbuf_allocation_errors=0, rx_missed_errors=0, rx_oversize_errors=0, rx_packets=2157, rx_undersized_errors=0, tx_1024_to_1522_packets=0, tx_128_to_255_packets=11, tx_1523_to_max_packets=0, tx_1_to_64_packets=0, tx_256_to_511_packets=28, tx_512_to_1023_packets=0, tx_65_to_127_packets=47, tx_broadcast_packets=28, tx_bytes=15712, tx_dropped=0, tx_errors=0, tx_link_down_dropped=0, tx_multicast_packets=58, tx_packets=86}
status              : {driver_name=net_i40e, if_descr="DPDK 21.11.2 net_i40e", if_type="6", link_speed="10Gbps", max_hash_mac_addrs="0", max_mac_addrs="64", max_rx_pktlen="9018", max_rx_queues="192", max_tx_queues="192", max_vfs="0", max_vmdq_pools="32", min_rx_bufsize="1024", numa_id="0", pci-device_id="0x1572", pci-vendor_id="0x8086", port_no="0"}
type                : dpdk



Will fix it up.
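
Since these interfaces exist only in OVSDB, the effective MTU has to be checked there rather than with kernel tools such as `ip link`. A quick check with standard ovs-vsctl usage (the interface name is taken from the listing above):

# Print the "mtu" and "mtu_request" columns straight from OVSDB.
ovs-vsctl get Interface iface0 mtu mtu_request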

Comment 3 Gris Ge 2023-01-03 15:30:42 UTC
Hi Karthik,

Could you try `sudo dnf copr enable packit/nmstate-nmstate-2167 -y; sudo dnf upgrade nmstate -y`?

Thank you!

Comment 4 Karthik Sundaravel 2023-01-04 05:20:43 UTC
Hi Gris

It works fine with this.

Thanks!

Comment 5 Gris Ge 2023-01-04 07:00:07 UTC
Upstream patch merged: https://github.com/nmstate/nmstate/pull/2167

Comment 9 Mingyu Shi 2023-02-27 09:10:38 UTC
Verified with:
nmstate-2.2.7-1.el9.x86_64
nispor-1.2.10-1.el9.x86_64
NetworkManager-1.42.0-1.el9.x86_64
openvswitch2.15-2.15.0-79.el9fdp.x86_64
Linux dell-per740-68.rhts.eng.pek2.redhat.com 5.14.0-277.el9.x86_64 #1 SMP PREEMPT_DYNAMIC Fri Feb 17 09:45:09 EST 2023 x86_64 x86_64 x86_64 GNU/Linux
DISTRO=RHEL-9.2.0-20230223.23

[17:06:39@dell-per740-68 ~]0# nmstatectl set dpdk.yaml 
Using 'set' is deprecated, use 'apply' instead.
[2023-02-27T09:08:17Z INFO  nmstate::nispor::base_iface] Got unsupported interface type Tun: ovs-netdev, ignoring
[2023-02-27T09:08:17Z INFO  nmstate::nispor::show] Got unsupported interface ovs-netdev type Tun
[2023-02-27T09:08:17Z INFO  nmstate::query_apply::net_state] Created checkpoint /org/freedesktop/NetworkManager/Checkpoint/17
[2023-02-27T09:08:17Z INFO  nmstate::ifaces::inter_ifaces] Ignoring interface eno2 type ethernet
[2023-02-27T09:08:17Z INFO  nmstate::ifaces::inter_ifaces] Ignoring interface ens3f1np1 type ethernet
[2023-02-27T09:08:17Z INFO  nmstate::ifaces::inter_ifaces] Ignoring interface eno4 type ethernet
[2023-02-27T09:08:17Z INFO  nmstate::ifaces::inter_ifaces] Ignoring interface eno3 type ethernet
[2023-02-27T09:08:17Z INFO  nmstate::ifaces::inter_ifaces] Ignoring interface enp59s0 type ethernet
[2023-02-27T09:08:17Z INFO  nmstate::nm::query_apply::profile] Modifying connection UUID Some("f08b3dda-f4e1-49db-aa26-4015b499f196"), ID Some("dpdk0-if"), type Some("ovs-interface") name Some("dpdk0")
[2023-02-27T09:08:17Z INFO  nmstate::nm::query_apply::profile] Modifying connection UUID Some("70ce7fe2-add2-4a68-8923-c9f26df0e673"), ID Some("ovsbr0-br"), type Some("ovs-bridge") name Some("ovsbr0")
[2023-02-27T09:08:17Z INFO  nmstate::nm::query_apply::profile] Modifying connection UUID Some("bc21f088-2eef-4b16-b65d-4e11651d740e"), ID Some("dpdk0-port"), type Some("ovs-port") name Some("dpdk0")
[2023-02-27T09:08:18Z INFO  nmstate::nm::query_apply::profile] Activating connection 70ce7fe2-add2-4a68-8923-c9f26df0e673: ovsbr0/ovs-bridge
[2023-02-27T09:08:18Z INFO  nmstate::nm::query_apply::profile] Activating connection bc21f088-2eef-4b16-b65d-4e11651d740e: dpdk0/ovs-port
[2023-02-27T09:08:18Z INFO  nmstate::nm::query_apply::profile] Reapplying connection f08b3dda-f4e1-49db-aa26-4015b499f196: dpdk0/ovs-interface
[2023-02-27T09:08:18Z INFO  nmstate::nispor::base_iface] Got unsupported interface type Tun: ovs-netdev, ignoring
[2023-02-27T09:08:18Z INFO  nmstate::nispor::show] Got unsupported interface ovs-netdev type Tun
[2023-02-27T09:08:18Z INFO  nmstate::query_apply::net_state] Retrying on: VerificationError: Verification failure: ovsbr0.interface.bridge desire '{"options":{"datapath":"netdev"},"port":[{"name":"dpdk0"}]}', current 'null'
[2023-02-27T09:08:19Z INFO  nmstate::nispor::base_iface] Got unsupported interface type Tun: ovs-netdev, ignoring
[2023-02-27T09:08:19Z INFO  nmstate::nispor::show] Got unsupported interface ovs-netdev type Tun
[2023-02-27T09:08:19Z INFO  nmstate::query_apply::net_state] Retrying on: VerificationError: Verification failure: dpdk0.interface.mtu desire '3000', current 'null'
[2023-02-27T09:08:20Z INFO  nmstate::nispor::base_iface] Got unsupported interface type Tun: ovs-netdev, ignoring
[2023-02-27T09:08:20Z INFO  nmstate::nispor::show] Got unsupported interface ovs-netdev type Tun
[2023-02-27T09:08:21Z INFO  nmstate::query_apply::net_state] Destroyed checkpoint /org/freedesktop/NetworkManager/Checkpoint/17
dns-resolver: {}
route-rules: {}
routes: {}
interfaces:
- name: dpdk0
  type: ovs-interface
  state: up
  mtu: 3000
  dpdk:
    devargs: 0000:5e:00.0
    rx-queue: 2
- name: ovsbr0
  type: ovs-bridge
  state: up
  bridge:
    options:
      datapath: netdev
    port:
    - name: dpdk0
ovs-db: {}

[17:08:21@dell-per740-68 ~]0# ip l
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000
    link/ether 34:48:ed:f8:ad:e4 brd ff:ff:ff:ff:ff:ff
    altname enp24s0f0
3: eno2: <BROADCAST,MULTICAST> mtu 1500 qdisc mq state DOWN mode DEFAULT group default qlen 1000
    link/ether 34:48:ed:f8:ad:e5 brd ff:ff:ff:ff:ff:ff
    altname enp24s0f1
4: eno3: <BROADCAST,MULTICAST> mtu 1500 qdisc mq state DOWN mode DEFAULT group default qlen 1000
    link/ether 34:48:ed:f8:ad:e6 brd ff:ff:ff:ff:ff:ff
    altname enp25s0f0
5: eno4: <BROADCAST,MULTICAST> mtu 1500 qdisc mq state DOWN mode DEFAULT group default qlen 1000
    link/ether 34:48:ed:f8:ad:e7 brd ff:ff:ff:ff:ff:ff
    altname enp25s0f1
6: enp59s0: <BROADCAST,MULTICAST> mtu 1500 qdisc fq_codel state DOWN mode DEFAULT group default qlen 1000
    link/ether 00:1b:21:a0:94:b6 brd ff:ff:ff:ff:ff:ff
7: ens3f0np0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 3000 qdisc mq state UP mode DEFAULT group default qlen 1000
    link/ether 0c:42:a1:5f:5c:58 brd ff:ff:ff:ff:ff:ff
    altname enp94s0f0np0
8: ens3f1np1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000
    link/ether 0c:42:a1:5f:5c:59 brd ff:ff:ff:ff:ff:ff
    altname enp94s0f1np1
32: ovs-netdev: <BROADCAST,MULTICAST,PROMISC> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
    link/ether 6a:c6:86:92:95:f7 brd ff:ff:ff:ff:ff:ff
[17:08:23@dell-per740-68 ~]0# ovs-vsctl show
d0c6a0e8-6223-4e42-8bd9-fad3f518147d
    Bridge ovsbr0
        datapath_type: netdev
        Port dpdk0
            Interface dpdk0
                type: dpdk
                options: {dpdk-devargs="0000:5e:00.0", n_rxq="2"}
    ovs_version: "2.15.8"
[17:08:30@dell-per740-68 ~]0# ovs-vsctl list interface
_uuid               : e77bffa5-aa68-418a-95ad-5ca42abe2e6b
admin_state         : up
bfd                 : {}
bfd_status          : {}
cfm_fault           : []
cfm_fault_status    : []
cfm_flap_count      : []
cfm_health          : []
cfm_mpid            : []
cfm_remote_mpids    : []
cfm_remote_opstate  : []
duplex              : full
error               : []
external_ids        : {NM.connection.uuid="f08b3dda-f4e1-49db-aa26-4015b499f196"}
ifindex             : 10668788
ingress_policing_burst: 0
ingress_policing_rate: 0
lacp_current        : []
link_resets         : 0
link_speed          : 10000000000
link_state          : up
lldp                : {}
mac                 : []
mac_in_use          : "0c:42:a1:5f:5c:58"
mtu                 : 3000
mtu_request         : 3000
name                : dpdk0
ofport              : 1
ofport_request      : []
options             : {dpdk-devargs="0000:5e:00.0", n_rxq="2"}
other_config        : {}
statistics          : {ovs_rx_qos_drops=0, ovs_tx_failure_drops=0, ovs_tx_invalid_hwol_drops=0, ovs_tx_mtu_exceeded_drops=0, ovs_tx_qos_drops=0, rx_broadcast_packets=1, rx_bytes=11078, rx_dropped=0, rx_errors=0, rx_mbuf_allocation_errors=0, rx_missed_errors=0, rx_packets=83, rx_phy_crc_errors=0, rx_phy_in_range_len_errors=0, rx_phy_symbol_errors=0, rx_q0_errors=0, rx_q1_errors=0, rx_wqe_errors=0, tx_broadcast_packets=0, tx_bytes=0, tx_dropped=0, tx_errors=0, tx_multicast_packets=0, tx_packets=0, tx_phy_errors=0, tx_pp_clock_queue_errors=0, tx_pp_missed_interrupt_errors=0, tx_pp_rearm_queue_errors=0, tx_pp_timestamp_future_errors=0, tx_pp_timestamp_past_errors=0}
status              : {driver_name=mlx5_pci, if_descr="DPDK 20.11.1 mlx5_pci", if_type="6", link_speed="10Gbps", max_hash_mac_addrs="0", max_mac_addrs="128", max_rx_pktlen="3018", max_rx_queues="1024", max_tx_queues="1024", max_vfs="0", max_vmdq_pools="0", min_rx_bufsize="32", numa_id="0", pci-device_id="0x1015", pci-vendor_id="0x15b3", port_no="0"}
type                : dpdk

Comment 11 errata-xmlrpc 2023-05-09 07:31:53 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (nmstate bug fix and enhancement update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2023:2190

