Bug 2003976
| Summary: | [RFE] Support OVS-DPDK in nmstate | ||
|---|---|---|---|
| Product: | Red Hat Enterprise Linux 8 | Reporter: | Karthik Sundaravel <ksundara> |
| Component: | nmstate | Assignee: | Fernando F. Mancera <ferferna> |
| Status: | CLOSED ERRATA | QA Contact: | Mingyu Shi <mshi> |
| Severity: | unspecified | Docs Contact: | Marc Muehlfeld <mmuehlfe> |
| Priority: | urgent | ||
| Version: | 8.4 | CC: | ferferna, fge, jhsiao, jiji, jishi, network-qe, till, vcandapp |
| Target Milestone: | rc | Keywords: | FutureFeature, Triaged |
| Target Release: | --- | Flags: | pm-rhel: mirror+ |
| Hardware: | Unspecified | ||
| OS: | Unspecified | ||
| Whiteboard: | |||
| Fixed In Version: | nmstate-1.2.1-0.1.alpha1.el8 | Doc Type: | Enhancement |
| Doc Text: | .The `nmstate` API now supports OVS-DPDK This enhancement adds the schema for the Open vSwitch (OVS) Data Plane Development Kit (DPDK) to the `nmstate` API. As a result, you can use `nmstate` to configure OVS devices with DPDK ports. | Story Points: | --- |
| Clone Of: | Environment: | ||
| Last Closed: | 2022-05-10 13:34:46 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | Category: | --- | |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | |||
| Bug Depends On: | |||
| Bug Blocks: | 2006093 | ||
Description
Karthik Sundaravel
2021-09-14 09:06:45 UTC
Hi, we need more than "Support for OVS-DPDK shall be available in nmstate." to move forward. Please provide a link or a detailed use case describing how it is used.

We use the commands below with nmcli to create OVS user bridges and attach them to DPDK ports:

nmcli con add type ovs-bridge conn.interface ovsbridge0 con-name ovs-bridge0 ovs-bridge.datapath-type netdev
nmcli con add type ovs-port conn.interface dpdkbond0 conn.master ovsbridge0 con-name ovs-dpdkbond0 ovs-port.bond-mode balance-slb
nmcli con add type ovs-interface conn.interface iface0 conn.master dpdkbond0 con-name ovs-iface0 ovs-dpdk.devargs 000:18:00.2 ovs-interface.type dpdk
nmcli con add type ovs-interface conn.interface iface1 conn.master dpdkbond0 con-name ovs-iface1 ovs-dpdk.devargs 000:18:00.3 ovs-interface.type dpdk

nmstate shall have provisions to configure:

1. ovs-bridge.datapath-type as netdev
2. ovs-dpdk.devargs <PCI address of the interface>
3. ovs-interface.type as dpdk

I have raised a BZ against nmcli to enable RX queue configuration for DPDK: https://bugzilla.redhat.com/show_bug.cgi?id=2001563. Whenever this configuration is supported in nmcli, similar support will be required in nmstate as well. I can raise another BZ if required.

Verified with versions:
nmstate-1.2.1-0.2.alpha2.el8.x86_64
nispor-1.2.3-1.el8.x86_64
NetworkManager-1.36.0-0.7.el8.x86_64
openvswitch2.15-2.15.0-72.el8fdp.x86_64
echo "
---
interfaces:
- name: ovsbr0
  type: ovs-bridge
  state: up
  bridge:
    options:
      datapath: netdev
    port:
    - name: ovs0
    - name: ens4f0
- name: ovs0
  type: ovs-interface
  state: up
  dpdk:
    devargs: 0000:af:00.0
" | nmstatectl apply -
# ovs-vsctl show
489e7257-158e-42ec-9b18-eccf9845b2ea
    Bridge ovsbr0
        datapath_type: netdev
        Port ens4f0
            Interface ens4f0
                type: system
    ovs_version: "2.15.4"
[02:32:10@dell-per740-10 ~]0# nmstatectl show ovsbr0
---
dns-resolver:
  config: {}
  running:
    search:
    - knqe.lab.eng.bos.redhat.com
    server:
    - 10.19.42.41
    - 10.11.5.19
    - 10.5.30.160
route-rules:
  config: []
routes:
  config: []
  running: []
interfaces:
- name: ovsbr0
  type: ovs-bridge
  state: up
  bridge:
    options:
      datapath: netdev
      fail-mode: ''
      mcast-snooping-enable: false
      rstp: false
      stp: false
    port:
    - name: ens4f0
  lldp:
    enabled: false
  ovs-db:
    external_ids: {}
ovs-db:
  external_ids:
    hostname: dell-per740-10.knqe.lab.eng.bos.redhat.com
    rundir: /var/run/openvswitch
    system-id: d8d1ce08-a6b7-45d1-b659-544eff49fcb7
  other_config: {}
Not really sure if nmstatectl works properly.
Here I have a known OVS-DPDK config:
[root@netqe29 jhsiao]# ovs-vsctl show
2d6456f4-3ddd-4338-a379-ae2e5e5e4404
    Bridge ovsbr0
        datapath_type: netdev
        Port ovsbr0
            Interface ovsbr0
                type: internal
        Port dpdk-11
            Interface dpdk-11
                type: dpdk
                options: {dpdk-devargs="0000:5e:00.1", n_rxq="1"}
        Port dpdk-10
            Interface dpdk-10
                type: dpdk
                options: {dpdk-devargs="0000:5e:00.0", n_rxq="1"}
    ovs_version: "2.15.4"
[root@netqe29 jhsiao]#
And here is "nmstatectl show | grep name":
[root@netqe29 jhsiao]# nmstatectl show | grep name
2022-03-25 13:27:31,364 root DEBUG NetworkManager version 1.36.0
2022-03-25 13:27:31,365 root DEBUG Async action: Retrieve applied config: ethernet eno1 started
2022-03-25 13:27:31,365 root DEBUG Async action: Retrieve applied config: bridge virbr0 started
2022-03-25 13:27:31,366 root DEBUG Async action: Retrieve applied config: ethernet eno1 finished
2022-03-25 13:27:31,366 root DEBUG Async action: Retrieve applied config: bridge virbr0 finished
2022-03-25 13:27:31,368 root DEBUG Interface ethernet.eno1 found. Merging the interface information.
- name: eno1
- name: eno2
- name: eno3
- name: eno4
- name: ens4
- name: lo
- name: ovs-netdev
- name: ovsbr0
- name: virbr0
So, where are the two DPDK ports?
Is this expected?
Thanks!
Jean
(In reply to Jean-Tsung Hsiao from comment #11)

Hi Jean-Tsung,

Nmstate manages OVS and OVS-DPDK interfaces through NetworkManager, which means the interfaces are visible only when NetworkManager manages them. If you created these interfaces with "ovs-vsctl" or a similar tool, NetworkManager will not manage them, and therefore Nmstate will not find them. You can run "nmcli d" to see whether the ports are managed by NetworkManager. Please let me know if that is the case, or whether something is wrong in Nmstate. Thank you!
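The "nmcli d" check described above can also be scripted. The sketch below is illustrative only (the helper name `unmanaged_devices` is mine, not part of nmstate or NetworkManager); it parses the colon-separated terse output of `nmcli -t -f DEVICE,STATE device` and reports devices that NetworkManager does not manage:

```python
def unmanaged_devices(terse_output: str) -> list[str]:
    """Return device names whose STATE field is 'unmanaged'.

    Expects the colon-separated terse format produced by:
        nmcli -t -f DEVICE,STATE device
    e.g. "eno1:connected\ndpdk-10:unmanaged\n"
    """
    names = []
    for line in terse_output.splitlines():
        if not line.strip():
            continue  # skip blank lines
        device, _, state = line.partition(":")
        if state == "unmanaged":
            names.append(device)
    return names
```

If the two DPDK ports show up here, that explains why "nmstatectl show" omits them: nmstate only reports interfaces NetworkManager manages.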
(In reply to Mingyu Shi from comment #8)

Today, OpenStack NFV deployment uses os-net-config, which takes a YAML file like [1] or [2] as input. Internally, os-net-config binds the interface to the required driver using "driverctl set-override", and it also obtains the PCI address of the interface. Once the driver is bound to DPDK, the interface may not be visible to the kernel IP stack, so the interface name can no longer serve as a means to identify the interface. The PCI address is later used to attach the interface to the OVS user bridge via dpdk-devargs.
A sample "ovs-vsctl show" for dpdkbond:

$] ovs-vsctl show
    Bridge br-link0
        fail_mode: standalone
        datapath_type: netdev
        Port br-link0
            tag: 131
            Interface br-link0
                type: internal
        Port dpdkbond0
            Interface dpdk0
                type: dpdk
                options: {dpdk-devargs="0000:06:00.0", n_rxq="1"}
            Interface dpdk1
                type: dpdk
                options: {dpdk-devargs="0000:06:00.1", n_rxq="1"}
    ovs_version: "2.13.0"

[1] https://github.com/openstack/os-net-config/blob/master/etc/os-net-config/samples/ovs_dpdk_bond.yaml
[2] https://github.com/openstack/os-net-config/blob/master/etc/os-net-config/samples/ovs_dpdk.yaml

There are a couple of steps that need to be done before proceeding with the nmcli commands for DPDK:

1. Prior to binding the drivers, the kernel arguments must be modified to "default_hugepagesz=1GB hugepagesz=1G hugepages=32 intel_iommu=on iommu=pt".

2. Before attaching the interface to the OVS user bridge, we need to set up OVS [1]:

ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-init=true
ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-socket-mem="1024,1024"

I am not sure whether nmstate or nmcli will also handle this DPDK initialization.

[1] https://docs.openvswitch.org/en/latest/intro/install/dpdk/#setup-ovs

(In reply to Karthik Sundaravel from comment #14)

Your step 2 explains why I failed with the step in Comment 8. I am not a tester for this bug.
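For reference, the dpdkbond0 layout above might map to an nmstate desired state along these lines. This is only a sketch, written on the assumption that the OVS bond (link-aggregation) schema combines with the `dpdk.devargs` property shown earlier in this bug; it is not taken from nmstate documentation and has not been applied to a system:

```yaml
---
interfaces:
- name: br-link0
  type: ovs-bridge
  state: up
  bridge:
    options:
      datapath: netdev
    port:
    - name: dpdkbond0
      link-aggregation:
        mode: balance-slb
        port:
        - name: dpdk0
        - name: dpdk1
- name: dpdk0
  type: ovs-interface
  state: up
  dpdk:
    devargs: 0000:06:00.0
- name: dpdk1
  type: ovs-interface
  state: up
  dpdk:
    devargs: 0000:06:00.1
```

Note that this covers only the interface topology; the hugepage kernel arguments and the dpdk-init/dpdk-socket-mem OVS settings from the steps above would still need to be handled separately.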
The reason I got involved is that I need to review the Networking/OVS-DPDK section of the 8.6 release doc, and this bug shows up there. Thanks for your explanation!
Jean

Hi Jean,
Does nmstate support setting rx_q for DPDK ports as well? The associated nmcli BZ [1] is given here for reference.
[1] https://bugzilla.redhat.com/show_bug.cgi?id=2001563

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (nmstate bug fix and enhancement update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.
https://access.redhat.com/errata/RHEA-2022:1772

All set.