Bug 1551508 - Request for doc update: dpdkvhostuser port is not supported with vIOMMU.
Summary: Request for doc update: dpdkvhostuser port is not supported with vIOMMU.
Keywords:
Status: CLOSED NEXTRELEASE
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: openvswitch
Version: 7.5
Hardware: Unspecified
OS: Linux
Priority: low
Severity: low
Target Milestone: rc
Target Release: ---
Assignee: Kevin Traynor
QA Contact: ovs-qe
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2018-03-05 10:16 UTC by Pei Zhang
Modified: 2023-09-18 00:13 UTC
10 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2018-04-18 17:17:38 UTC
Target Upstream Version:
Embargoed:


Attachments
XML of VM (5.29 KB, text/html)
2018-03-05 10:16 UTC, Pei Zhang

Description Pei Zhang 2018-03-05 10:16:30 UTC
Created attachment 1404249 [details]
XML of VM

Description of problem:
This was tested with vIOMMU. When OVS acts as the vhost-user server and the VM acts as the vhost-user client, DPDK's testpmd in the guest cannot receive packets.


Version-Release number of selected component (if applicable):
kernel-3.10.0-855.el7.x86_64
qemu-kvm-rhev-2.10.0-21.el7.x86_64
libvirt-3.9.0-13.el7.x86_64
dpdk-17.11-7.el7.x86_64
openvswitch-2.9.0-1.el7fdb.x86_64
microcode-20180108.tgz

How reproducible:
100%

Steps to Reproduce:
1. Boot OVS with vhost-iommu-support=true, acting as the vhost-user server. For the full script, refer to [1].

ovs-vsctl --no-wait set Open_vSwitch . other_config:vhost-iommu-support=true

2. Boot the VM acting as the vhost-user client; the full XML is attached to this description.
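For orientation, below is a minimal sketch of what the vhost-user client side could look like on a QEMU command line. This is an illustrative assumption, not the actual invocation: the socket path, MAC address, and memory sizes are placeholders, and the real test used the libvirt XML attached to this bug.

```shell
# Hypothetical QEMU invocation for a vhost-user NIC behind a virtual IOMMU.
# All paths, sizes, and the MAC address are illustrative placeholders.
/usr/libexec/qemu-kvm \
    -machine q35,kernel-irqchip=split \
    -device intel-iommu,intremap=on,caching-mode=on,device-iotlb=on \
    -object memory-backend-file,id=mem0,size=4G,mem-path=/dev/hugepages,share=on \
    -numa node,memdev=mem0 \
    -chardev socket,id=char0,path=/var/run/openvswitch/vhost-user0 \
    -netdev vhost-user,id=net0,chardev=char0,queues=2 \
    -device virtio-net-pci,netdev=net0,mac=52:54:00:00:00:01,mq=on,vectors=6,iommu_platform=on,ats=on
```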

3. In the guest, load vfio; refer to [3].

4. In the guest, start DPDK's testpmd; refer to [4].

5. On another host, start TRex as the packet generator; refer to [5]. testpmd fails to receive packets.


Actual results:
The guest's testpmd fails to receive packets when testing with vIOMMU and OVS as the vhost-user server.


Expected results:
The guest's testpmd should be able to receive packets when testing with vIOMMU and OVS as the vhost-user server.


Additional info:
1. When OVS acts in vhost-user client mode, the guest's testpmd can receive packets and everything works well. This scenario was tested and verified in the bug below:
Bug 1532956 - Boot guest with vhost-user setting "iommu='on' ats='on'", dpdk's testpmd can not receive packets.


Reference:
[1]
# cat boot_ovs_server.sh 
#!/bin/bash

set -e

echo "killing old ovs process"
pkill -f ovs- || true
pkill -f ovsdb || true

echo "probing ovs kernel module"
modprobe -r openvswitch || true
modprobe openvswitch

rm -rf /var/run/openvswitch
mkdir /var/run/openvswitch

echo "clean env"
DB_FILE=/etc/openvswitch/conf.db
rm -rf /var/run/openvswitch
rm -f $DB_FILE
mkdir /var/run/openvswitch

echo "init ovs db and boot db server"
export DB_SOCK=/var/run/openvswitch/db.sock
ovsdb-tool create /etc/openvswitch/conf.db /usr/share/openvswitch/vswitch.ovsschema
ovsdb-server --remote=punix:$DB_SOCK --remote=db:Open_vSwitch,Open_vSwitch,manager_options --pidfile --detach --log-file
ovs-vsctl --no-wait init

echo "start ovs vswitch daemon"
ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-init=true
ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-socket-mem="1024,1024"
ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-lcore-mask="0x1"
ovs-vsctl --no-wait set Open_vSwitch . other_config:vhost-iommu-support=true
ovs-vswitchd unix:$DB_SOCK --pidfile --detach --log-file=/var/log/openvswitch/ovs-vswitchd.log

echo "creating bridge and ports"

ovs-vsctl --if-exists del-br ovsbr0
ovs-vsctl add-br ovsbr0 -- set bridge ovsbr0 datapath_type=netdev
ovs-vsctl add-port ovsbr0 dpdk0 -- set Interface dpdk0 type=dpdk options:dpdk-devargs=0000:81:00.0
ovs-vsctl add-port ovsbr0 vhost-user0 -- set Interface vhost-user0 type=dpdkvhostuser
ovs-ofctl del-flows ovsbr0
ovs-ofctl add-flow ovsbr0 "in_port=1,idle_timeout=0 actions=output:2"
ovs-ofctl add-flow ovsbr0 "in_port=2,idle_timeout=0 actions=output:1"

ovs-vsctl --if-exists del-br ovsbr1
ovs-vsctl add-br ovsbr1 -- set bridge ovsbr1 datapath_type=netdev
ovs-vsctl add-port ovsbr1 dpdk1 -- set Interface dpdk1 type=dpdk options:dpdk-devargs=0000:81:00.1
ovs-vsctl add-port ovsbr1 vhost-user1 -- set Interface vhost-user1 type=dpdkvhostuser
ovs-ofctl del-flows ovsbr1
ovs-ofctl add-flow ovsbr1 "in_port=1,idle_timeout=0 actions=output:2"
ovs-ofctl add-flow ovsbr1 "in_port=2,idle_timeout=0 actions=output:1"

echo "all done"

# ovs-vsctl show
16a8d9db-288b-4730-b1d3-04ab3522d132
    Bridge "ovsbr0"
        Port "dpdk0"
            Interface "dpdk0"
                type: dpdk
                options: {dpdk-devargs="0000:81:00.0", n_rxq="2"}
        Port "ovsbr0"
            Interface "ovsbr0"
                type: internal
        Port "vhost-user0"
            Interface "vhost-user0"
                type: dpdkvhostuser
    Bridge "ovsbr1"
        Port "ovsbr1"
            Interface "ovsbr1"
                type: internal
        Port "vhost-user1"
            Interface "vhost-user1"
                type: dpdkvhostuser
        Port "dpdk1"
            Interface "dpdk1"
                type: dpdk
                options: {dpdk-devargs="0000:81:00.1", n_rxq="2"}

[3]

# modprobe vfio
# modprobe vfio-pci
# dpdk-devbind --bind=vfio-pci 0000:03:00.0
# dpdk-devbind --bind=vfio-pci 0000:04:00.0
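To confirm the binding took effect, the device status can be listed (the exact output format varies slightly between DPDK versions):

```shell
# The two NICs should appear bound to vfio-pci under
# "Network devices using DPDK-compatible driver".
dpdk-devbind --status
```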

[4]
# cat testpmd.sh 
/usr/bin/testpmd \
-l 1,2,3,4,5 \
-n 4 \
-d /usr/lib64/librte_pmd_virtio.so \
-w 0000:03:00.0 -w 0000:04:00.0 \
-- \
--nb-cores=4 \
--disable-hw-vlan \
-i \
--disable-rss \
--rxq=2 --txq=2

[5]
# cat start_throughput.sh 
DIRECTORY=~/src/lua-trafficgen
cd $DIRECTORY
./binary-search.py \
        --traffic-generator=trex-txrx \
        --search-runtime=30 \
        --validation-runtime=60 \
        --rate-unit=mpps \
        --rate=0 \
        --run-bidirec=1 \
        --run-revunidirec=0 \
        --frame-size=64 \
        --num-flows=1024 \
        --one-shot=0 \
        --max-loss-pct=0 \
        --measure-latency=0

Comment 2 Pei Zhang 2018-03-05 10:21:19 UTC
Additional info (continued):

2. In PVP testing (where testpmd takes the role of OVS on the host), when the host testpmd acts in vhost-user server mode, the guest testpmd can receive packets and everything works well.

Comment 3 Maxime Coquelin 2018-03-05 10:42:43 UTC
Hi Pei,

Looking at the OVS code, it seems that the RTE_VHOST_USER_IOMMU_SUPPORT
flag is only set in the case of client init (see below).
I'm not sure why it is done this way; I don't think it is
intentional. Kevin, any thoughts?


netdev_dpdk_vhost_client_reconfigure(struct netdev *netdev)
{
    struct netdev_dpdk *dev = netdev_dpdk_cast(netdev);
    int err;
    uint64_t vhost_flags = 0;
    bool zc_enabled;

    ovs_mutex_lock(&dev->mutex);

    /* Configure vHost client mode if requested and if the following criteria
     * are met:
     *  1. Device hasn't been registered yet.
     *  2. A path has been specified.
     */
    if (!(dev->vhost_driver_flags & RTE_VHOST_USER_CLIENT)
            && strlen(dev->vhost_id)) {
        /* Register client-mode device. */
        vhost_flags |= RTE_VHOST_USER_CLIENT;

        /* Enable IOMMU support, if explicitly requested. */
        if (dpdk_vhost_iommu_enabled()) {
            vhost_flags |= RTE_VHOST_USER_IOMMU_SUPPORT;
        }
        ...

Cheers,
Maxime

Comment 4 Kevin Traynor 2018-03-05 16:10:34 UTC
Hi, this is intentional as vhostuser ports are deprecated, so the new feature is not enabled for them.

This was noted in the patch's commit message, but it seems to have dropped off the commit message in later versions.

"Note that support for this feature is only implemented for vhost
user client ports (since vhost user ports are considered deprecated)."
from https://mail.openvswitch.org/pipermail/ovs-dev/2017-November/340975.html

Instead, the IOMMU feature documentation was deliberately changed to reference only that it is supported on vhostuserclient ports.

https://github.com/openvswitch/ovs/commit/a14d1cc8a74858c7488207e02b9ebdb67e50bd88#diff-ab039b38e727a56d7a5e1344c30e72f6R276

As such, I would propose to close this as NOTABUG, or to reduce the priority and change it to a request for a doc update making it more explicit that vhostuser ports are not supported.
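For comparison, here is a minimal sketch of the supported setup using a dpdkvhostuserclient port, where OVS connects as the vhost-user client and QEMU creates the server socket. The bridge, port, and socket names are illustrative assumptions, not taken from the test scripts above.

```shell
# Enable vhost IOMMU support; per this discussion, it is only honored
# for vhostuserclient ports.
ovs-vsctl set Open_vSwitch . other_config:vhost-iommu-support=true

# Add a vhost-user client port. OVS connects to the socket that QEMU
# creates at the given vhost-server-path.
ovs-vsctl add-port ovsbr0 vhost-client0 -- \
    set Interface vhost-client0 type=dpdkvhostuserclient \
    options:vhost-server-path=/tmp/vhost-client0.sock
```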

Comment 5 Pei Zhang 2018-03-06 07:39:32 UTC
Thanks Maxime and Kevin for your quick responses.

Hi Kevin,

I understand Comment 4 as follows: the dpdkvhostuser ports are deprecated. So I'd like to ask:

1. Is there any plan to disable this port type in the future? For example, creating one would fail and print a warning message.

2. Does QE still need to test dpdkvhostuser ports, or should we stop testing this scenario?



Thanks,
Pei

Comment 6 Kevin Traynor 2018-03-06 11:11:01 UTC
(In reply to Pei Zhang from comment #5)
> Thanks Maxime and Kevin for your quick responses.
> 
> Hi Kevin,
> 
> I understand Comment 4 as follows: the dpdkvhostuser ports are deprecated. So
> I'd like to ask:
> 
> 1. Is there any plan to disable this port type in the future? For example,
> creating one would fail and print a warning message.

Yes, this is planned to happen in the future, as vhostuserclient is superior. Upstream, vhostuser (server) ports are not really causing any maintenance issues, so there is no set timeline. If there is some rewrite of vhost in OVS (such as moving to the vhost PMD API), someone will probably suggest that it is a good time to remove vhostuser (server) ports.

Comment 9 Pei Zhang 2018-03-09 04:13:11 UTC
(In reply to Kevin Traynor from comment #4)
> Hi, this is intentional as vhostuser ports are deprecated, so the new
> feature is not enabled for them.
[...]
> 
> As such, I would propose to close as NOTABUG, or reduce priority and change
> to be a request for a doc update to make it more explicit that vhostuser
> ports are not supported.

I agree, and I prefer to reduce the priority and change this to a request for a doc update, which will make this information clearer to users. I have updated the bug title and lowered the priority accordingly.


Best Regards,
Pei

Comment 10 Kevin Traynor 2018-03-09 12:34:22 UTC
Patch posted here: https://mail.openvswitch.org/pipermail/ovs-dev/2018-March/344953.html

It has an Acked-by and will be part of the next pull request from the ovs-dpdk branch to master/2.9.

Comment 11 Kevin Traynor 2018-04-18 17:17:38 UTC
Updated the OVS docs; this will be part of the OVS 2.10 release.

https://github.com/openvswitch/ovs/blob/master/Documentation/topics/dpdk/vhost-user.rst#vhost-user

Comment 12 Red Hat Bugzilla 2023-09-18 00:13:19 UTC
The needinfo request[s] on this closed bug have been removed as they have been unresolved for 120 days

