This bug has been migrated to another issue tracking site. It has been closed here and may no longer be monitored.

If you would like to get updates for this issue, or to participate in it, you may do so at the Red Hat Issue Tracker.
RHEL Engineering is moving the tracking of its product development work on RHEL 6 through RHEL 9 to Red Hat Jira (issues.redhat.com). If you're a Red Hat customer, please continue to file support cases via the Red Hat customer portal. If you're not, please head to the "RHEL project" in Red Hat Jira and file new tickets there.

Individual Bugzilla bugs in the statuses "NEW", "ASSIGNED", and "POST" are being migrated throughout September 2023. Bugs of Red Hat partners with an assigned Engineering Partner Manager (EPM) are migrated in late September as per pre-agreed dates. Bugs against the components "kernel", "kernel-rt", and "kpatch" are only migrated if still in "NEW" or "ASSIGNED".

If you cannot log in to RH Jira, please consult article #7032570. Failing that, please send an e-mail to the RH Jira admins at rh-issues@redhat.com to troubleshoot your issue as a user management inquiry. The e-mail creates a ServiceNow ticket with Red Hat.

Individual Bugzilla bugs that are migrated will be moved to status "CLOSED", resolution "MIGRATED", and have "MigratedToJIRA" set in "Keywords". The link to the successor Jira issue will be found under "Links", have a little "two-footprint" icon next to it, and direct you to the "RHEL project" in Red Hat Jira (issue links are of the form "https://issues.redhat.com/browse/RHEL-XXXX", where "X" is a digit). The same link will be available in a blue banner at the top of the page informing you that the bug has been migrated.
Bug 2040509 - [RFE]: Add support for changing "tx_queue_size" to a settable value
Summary: [RFE]: Add support for changing "tx_queue_size" to a settable value
Keywords:
Status: CLOSED MIGRATED
Alias: None
Product: Red Hat Enterprise Linux 9
Classification: Red Hat
Component: qemu-kvm
Version: 9.0
Hardware: All
OS: All
Priority: unspecified
Severity: unspecified
Target Milestone: rc
Target Release: ---
Assignee: Laurent Vivier
QA Contact: Lei Yang
URL:
Whiteboard:
Depends On:
Blocks: 2007572
 
Reported: 2022-01-13 22:07 UTC by Shravan Kumar Tiwari
Modified: 2023-08-30 06:20 UTC
CC List: 13 users

Fixed In Version: qemu-kvm-8.0.0-10.el9
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2023-08-10 09:34:21 UTC
Type: Feature Request
Target Upstream Version:
Embargoed:




Links
System ID / Status / Summary / Last Updated
Gitlab redhat/centos-stream/src/qemu-kvm merge_requests/191, opened: virtio-net: correctly report maximum tx_queue_size value (2023-07-25 15:31:28 UTC)
Red Hat Issue Tracker RHEL-1139 (2023-08-21 09:22:31 UTC)
Red Hat Issue Tracker RHELPLAN-107896 (2022-01-13 22:10:12 UTC)

Description Shravan Kumar Tiwari 2022-01-13 22:07:13 UTC
Description of problem:
This RFE comes from a BZ (https://bugzilla.redhat.com/show_bug.cgi?id=2007572) in which a customer was trying to set tx_queue_size (a recommendation they received from their VNF vendor).

This is a telco customer, and their VNF vendor recommends setting tx_queue_size to a value greater than 256.




Version-Release number of selected component (if applicable):
RHOSP16.1/RHEL8.2

How reproducible:
- RX and TX queue size are configured in the director templates
    NovaLibvirtRxQueueSize: 1024
    NovaLibvirtTxQueueSize: 1024

- this is also correctly set in nova.conf
tx_queue_size=1024
rx_queue_size=1024

- however, the libvirt domain XML only contains the RX setting
[shtiwari@supportshell virsh]$ grep -e queue -e UUID virsh_-r_dumpxml_instance-00000e74
      <driver name='vhost' queues='2' rx_queue_size='1024'/>

Instance UUID is:
virsh_-r_dominfo_instance-00000e74:UUID:           2f9201e3-df6a-4099-9277-4101db156d10
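
For reference, an interface definition that carried both settings would look roughly like this (a sketch for illustration; libvirt exposes rx_queue_size and tx_queue_size attributes on the virtio driver element, but the surrounding values here are placeholders):

    <interface type='bridge'>
      <model type='virtio'/>
      <!-- illustrative: both ring sizes requested; in the case above only
           rx_queue_size ended up in the generated XML -->
      <driver name='vhost' queues='2' rx_queue_size='1024' tx_queue_size='1024'/>
    </interface>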


Actual results:


Expected results:


Additional info:

Comment 12 Lei Yang 2023-04-18 14:22:08 UTC
Hi Jason

Could you please help review Comment 11? Thanks in advance.

Best Regards
Lei

Comment 17 Laurent Vivier 2023-05-30 15:02:40 UTC
I've been able to reproduce the problem with tap and stream netdev, but it works fine with vhost-user netdev.

The value can also be checked using "info qtree" in the QEMU monitor CLI rather than using ethtool.
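
For example, something along these lines (a sketch; the device id and the exact output layout are illustrative, the property to look at is tx_queue_size):

    (qemu) info qtree
    ...
      dev: virtio-net-pci, id "net0"
        ...
        tx_queue_size = 256 (0x100)
        ...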

Comment 18 Laurent Vivier 2023-05-30 15:39:20 UTC
(In reply to Laurent Vivier from comment #17)
> I've been able to reproduce the problem with tap and stream netdev, but it
> works fine with vhost-user netdev.
> 
> The value can also be checked using "info qtree" in QEMU monitor CLI rather
> than using ethtool.

The value comes from:

static int virtio_net_max_tx_queue_size(VirtIONet *n)
{
    NetClientState *peer = n->nic_conf.peers.ncs[0];

    /*
     * Backends other than vhost-user or vhost-vdpa don't support max queue
     * size.
     */
    if (!peer) {
        return VIRTIO_NET_TX_QUEUE_DEFAULT_SIZE;
    }

    switch(peer->info->type) {
    case NET_CLIENT_DRIVER_VHOST_USER:
    case NET_CLIENT_DRIVER_VHOST_VDPA:
        return VIRTQUEUE_MAX_SIZE;
    default:
        return VIRTIO_NET_TX_QUEUE_DEFAULT_SIZE;
    };
}

VIRTQUEUE_MAX_SIZE is 1024 and VIRTIO_NET_TX_QUEUE_DEFAULT_SIZE is 256.

A tx_queue_size greater than 256 is only accepted with vhost-user and vdpa.
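
The requested value is then silently clamped to this maximum when the device is realized, which is why ethtool reports 256 even though 1024 was requested; in the QEMU sources this is essentially:

    n->net_conf.tx_queue_size = MIN(virtio_net_max_tx_queue_size(n),
                                    n->net_conf.tx_queue_size);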

commit 9b02e1618cf26aa52cf786f215d757506dda14f8
Author: Wei Wang <wei.w.wang>
Date:   Wed Jun 28 10:37:59 2017 +0800

    virtio-net: enable configurable tx queue size
    
    This patch enables the virtio-net tx queue size to be configurable
    between 256 (the default queue size) and 1024 by the user when the
    vhost-user backend is used.
    
    Currently, the maximum tx queue size for other backends is 512 due
    to the following limitations:
    - QEMU backend: the QEMU backend implementation in some cases may
    send 1024+1 iovs to writev.
    - Vhost_net backend: there are possibilities that the guest sends
    a vring_desc of memory which crosses a MemoryRegion thereby
    generating more than 1024 iovs after translation from guest-physical
    address in the backend.
    
    Signed-off-by: Wei Wang <wei.w.wang>
    Reviewed-by: Michael S. Tsirkin <mst>
    Signed-off-by: Michael S. Tsirkin <mst>

So I guess the value returned by virtio_net_max_tx_queue_size() could be increased to 512.

Also, the error message should be fixed with:

diff --git a/hw/net/virtio-net.c b/hw/net/virtio-net.c
index 6df6b7329d..e9905aac83 100644
--- a/hw/net/virtio-net.c
+++ b/hw/net/virtio-net.c
@@ -3630,12 +3630,12 @@ static void virtio_net_device_realize(DeviceState *dev, Error **errp)
     }
 
     if (n->net_conf.tx_queue_size < VIRTIO_NET_TX_QUEUE_MIN_SIZE ||
-        n->net_conf.tx_queue_size > VIRTQUEUE_MAX_SIZE ||
+        n->net_conf.tx_queue_size > virtio_net_max_tx_queue_size(n) ||
         !is_power_of_2(n->net_conf.tx_queue_size)) {
         error_setg(errp, "Invalid tx_queue_size (= %" PRIu16 "), "
                    "must be a power of 2 between %d and %d",
                    n->net_conf.tx_queue_size, VIRTIO_NET_TX_QUEUE_MIN_SIZE,
-                   VIRTQUEUE_MAX_SIZE);
+                   virtio_net_max_tx_queue_size(n));
         virtio_cleanup(vdev);
         return;
     }

Comment 20 RHEL Program Management 2023-07-13 07:31:54 UTC
After evaluating this issue, there are no plans to address it further or fix it in an upcoming release.  Therefore, it is being closed.  If plans change such that this issue will be fixed in an upcoming release, then the bug can be reopened.

Comment 22 Lei Yang 2023-07-13 09:13:33 UTC
Hi Laurent

Is there a plan to fix this bug in the current release? If yes, is it possible to set ITR=9.3.0? Otherwise, could you set ITR=9.4.0 first to make sure this bug can be fixed in time?

Best Regards
Lei

Comment 23 Laurent Vivier 2023-07-18 07:50:44 UTC
(In reply to Lei Yang from comment #22)
> Hi Laurent
> 
> Is there plan to fix this bug in the current release? If yes, is it possible
> to set ITR=9.3.0? Otherwise, can set up ITR=9.4.0 first to make sure this
> bug can be fixed in time. 
> 

The fix that reports the invalid queue size has been merged upstream, so I'm going to backport it.

commit 4271f4038372f174dbafffacca1a748d058a03ba
Author: Laurent Vivier <lvivier>
Date:   Mon Jun 5 16:21:25 2023 +0200

    virtio-net: correctly report maximum tx_queue_size value
    
    Maximum value for tx_queue_size depends on the backend type.
    1024 for vDPA/vhost-user, 256 for all the others.
    
    The value is returned by virtio_net_max_tx_queue_size() to set the
    parameter:
    
        n->net_conf.tx_queue_size = MIN(virtio_net_max_tx_queue_size(n),
                                        n->net_conf.tx_queue_size);
    
    But the parameter checking uses VIRTQUEUE_MAX_SIZE (1024).
    
    So the parameter is silently ignored and ethtool reports a different
    value than the one provided by the user.
    
       ... -netdev tap,... -device virtio-net,tx_queue_size=1024
    
        # ethtool -g enp0s2
        Ring parameters for enp0s2:
        Pre-set maximums:
        RX:         256
        RX Mini:    n/a
        RX Jumbo:   n/a
        TX:         256
        Current hardware settings:
        RX:         256
        RX Mini:    n/a
        RX Jumbo:   n/a
        TX:         256
    
       ... -netdev vhost-user,... -device virtio-net,tx_queue_size=2048
    
        Invalid tx_queue_size (= 2048), must be a power of 2 between 256 and 1024
    
    With this patch the correct maximum value is checked and displayed.
    
    For vDPA/vhost-user:
    
        Invalid tx_queue_size (= 2048), must be a power of 2 between 256 and 1024
    
    For all the others:
    
        Invalid tx_queue_size (= 512), must be a power of 2 between 256 and 256
    
    Fixes: 2eef278b9e63 ("virtio-net: fix tx queue size for !vhost-user")
    Cc: mst
    Cc: qemu-stable
    Signed-off-by: Laurent Vivier <lvivier>
    Signed-off-by: Jason Wang <jasowang>

Comment 25 Yanan Fu 2023-08-02 02:26:59 UTC
QE bot(pre verify): Set 'Verified:Tested,SanityOnly' as gating/tier1 test pass.

Comment 26 Lei Yang 2023-08-02 06:12:28 UTC
Hello Laurent

According to the QE test results, for the tap device this parameter can only be set to "tx_queue_size=256"; if QE sets "tx_queue_size=512" or another value, it fails with: Invalid tx_queue_size (= 512), must be a power of 2 between 256 and 256. Is this the expected result? If yes, it does not seem to match the original intention, which is to make this a parameter that can be changed. Could you please review it again? Thanks in advance.

Thanks
Lei

Comment 27 Laurent Vivier 2023-08-02 09:12:54 UTC
(In reply to Lei Yang from comment #26)
> Hello Laurent
> 
> According to QE test result, for the tap device, this parameter only can be
> setup "tx_queue_size=256", if QE setup "tx_queue_size=512" or other number,
> it fail with: Invalid tx_queue_size (= 512), must be a power of 2 between
> 256 and 256. Is this a expect result? If yes, it looks like does not match
> the original intention which is to become a parameter that can be change,
> could you please help review it again? Thanks in advance.
> 

In fact, the tx_queue_size maximum value depends on the backend type.

For vDPA/vhost-user: 1024
All the others: 256
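
For example (a sketch only; the netdev options, ids, and socket path are placeholders, and a vhost-user setup additionally needs guest memory shared via a memory backend):

    # tap backend: the maximum is 256, so only the default size is accepted
    ... -netdev tap,id=hostnet0,vhost=on \
        -device virtio-net-pci,netdev=hostnet0,tx_queue_size=256 ...

    # vhost-user backend: sizes up to 1024 are accepted
    ... -chardev socket,id=chr0,path=/tmp/vhost-user.sock \
        -netdev vhost-user,id=hostnet1,chardev=chr0 \
        -device virtio-net-pci,netdev=hostnet1,tx_queue_size=1024 ...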

tx_queue_size can be set since:

9b02e1618cf2 ("virtio-net: enable configurable tx queue size") [qemu v2.10]

The fix allows QEMU to report these values correctly now (previously the error was silently ignored and ethtool didn't report the right value, as in comment #11).

Jason, do we want to increase the maximum value of tx_queue_size for backends other than vDPA and vhost-user?

Comment 31 Lei Yang 2023-08-09 02:51:07 UTC
==>Test on the qemu-kvm-8.0.0-9.el9.x86_64

=>Test Version
qemu-kvm-8.0.0-9.el9.x86_64
kernel-5.14.0-351.el9.x86_64
edk2-ovmf-20230524-2.el9.noarch

=>Test Steps
1. Boot a guest with "tx_queue_size: 256"
-device '{"driver": "virtio-net-pci", "mac": "00:11:22:33:44:00", "id": "net0", "netdev": "hostnet0", "tx_queue_size": 256, "bus": "pcie-root-port-3", "addr": "0x0"}'  \
-netdev tap,id=hostnet0,vhost=on \

2. Check the size of this parameter inside the guest
# ethtool -g eth0 |grep  TX
TX:		256
TX:		256
TX Push:	off

3. Boot a guest with "tx_queue_size: 512"
-device '{"driver": "virtio-net-pci", "mac": "00:11:22:33:44:00", "id": "net0", "netdev": "hostnet0", "tx_queue_size": 512, "bus": "pcie-root-port-3", "addr": "0x0"}'  \
-netdev tap,id=hostnet0,vhost=on \

4. Check the size of this parameter inside the guest
# ethtool -g eth0 |grep  TX
TX:		256
TX:		256
TX Push:	off

==>So in past versions, the qemu command line accepted other sizes, but the value inside the guest didn't change.

==>Test on the qemu-kvm-8.0.0-10.el9.x86_64
1. Boot a guest with "tx_queue_size: 256"
-device '{"driver": "virtio-net-pci", "mac": "00:11:22:33:44:00", "id": "net0", "netdev": "hostnet0", "tx_queue_size": 256, "bus": "pcie-root-port-3", "addr": "0x0"}'  \
-netdev tap,id=hostnet0,vhost=on \

2. Check the size of this parameter inside the guest
# ethtool -g eth0 |grep  TX
TX:		256
TX:		256
TX Push:	off

3. Boot a guest with "tx_queue_size: 512"
-device '{"driver": "virtio-net-pci", "mac": "00:11:22:33:44:00", "id": "net0", "netdev": "hostnet0", "tx_queue_size": 512, "bus": "pcie-root-port-3", "addr": "0x0"}'  \
-netdev tap,id=hostnet0,vhost=on \

4. It fails with: (qemu) qemu-kvm: -device {"driver": "virtio-net-pci", "mac": "00:11:22:33:44:00", "id": "net0", "netdev": "hostnet0", "tx_queue_size": 512, "bus": "pcie-root-port-3", "addr": "0x0"}: Invalid tx_queue_size (= 512), must be a power of 2 between 256 and 256

So this parameter can only be set to 256 in the fixed version. I noticed Laurent's comment that the maximum size depends on the network backend (for vDPA/vhost-user: 1024, all the others: 256). But according to the bug description, the expected result is that the tap device can be set to "tx_queue_size=1024" via the command line.

Comment 32 Lei Yang 2023-08-09 03:14:25 UTC
Hello Jason and Laurent

Could you please review this bug again? From the QE perspective, the current test results are inconsistent with the originally expected results of the bug.
Please let me know whether this is the expected result. If it is, QE will verify this bug based on the Comment 31 test results. If not, it looks like this bug should still be verified as it stands, and a new bug created to track the remaining work.

Thanks
Lei

Comment 33 jason wang 2023-08-09 07:50:11 UTC
(In reply to Laurent Vivier from comment #27)
> (In reply to Lei Yang from comment #26)
> > Hello Laurent
> > 
> > According to QE test result, for the tap device, this parameter only can be
> > setup "tx_queue_size=256", if QE setup "tx_queue_size=512" or other number,
> > it fail with: Invalid tx_queue_size (= 512), must be a power of 2 between
> > 256 and 256. Is this a expect result? If yes, it looks like does not match
> > the original intention which is to become a parameter that can be change,
> > could you please help review it again? Thanks in advance.
> > 
> 
> In fact, the tx_queue_size maximum value depends on the backend type.
> 
> For vDPA/vhost-user: 1024
> All the others: 256
> 
> tx_queue_size can be set since:
> 
> 9b02e1618cf2 ("virtio-net: enable configurable tx queue size") [qemu v2.10]
> 
> The fix allows QEMU to report correctly these values now (errors were
> silently ignored and then ethtool didn't report the right value as in
> comment #11)
> 
> Jason, do we want to increase the maximum value for tx_queue_size for others
> than vDPA and vhost-user?

It requires some changes in vhost-kernel. For example, vhost-kernel has an iov count limitation (UIO_MAXIOV); we need to tweak the code to get rid of this limitation first.
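
For context, a simplified sketch of that limitation (modeled loosely on struct vhost_virtqueue in drivers/vhost/vhost.h; the real structure has many more fields, so treat this as illustrative only):

    #include <sys/uio.h>            /* struct iovec */

    #ifndef UIO_MAXIOV
    #define UIO_MAXIOV 1024         /* the kernel's per-request iovec bound */
    #endif

    struct vhost_virtqueue_sketch {
        /* One guest descriptor may expand into several iovecs when it
         * crosses a MemoryRegion, so a 1024-entry TX ring can need more
         * than UIO_MAXIOV iovecs after translation. */
        struct iovec iov[UIO_MAXIOV];
    };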

Thanks

Comment 34 jason wang 2023-08-09 07:56:04 UTC
(In reply to Lei Yang from comment #32)
> Hello Jason and Laurent
> 
> Could you please help review this bug again? Just from the QE perspective
> the current test results are inconsistent with the original expected results
> of the bug. 
> Please let me know if this is a expect result, if this is expect result, QE
> will based on the Comment 31 test result to verified this bug. If not, looks
> like this bug also should be verified in advance, then create a new bug to
> track it.
> 
> Thanks
> Lei

I think we still need some work upstream for vhost-kernel.

If you agree, you can keep the bz open (and assign it to me).

Thanks

Comment 35 Lei Yang 2023-08-09 09:58:40 UTC
(In reply to jason wang from comment #34)
> (In reply to Lei Yang from comment #32)
> > Hello Jason and Laurent
> > 
> > Could you please help review this bug again? Just from the QE perspective
> > the current test results are inconsistent with the original expected results
> > of the bug. 
> > Please let me know if this is a expect result, if this is expect result, QE
> > will based on the Comment 31 test result to verified this bug. If not, looks
> > like this bug also should be verified in advance, then create a new bug to
> > track it.
> > 
> > Thanks
> > Lei
> 
Hi Jason

> I think we still need some work in the upstream for vhost-kernel.

If this work is for vhost-kernel, QE thinks a new kernel bug should be filed to track the problem; the current bug would then depend on that kernel bug. If you agree to file a new kernel bug to track this problem, QE would like to confirm whether it can be fixed in RHEL 9.3.
 
Thanks
Lei
> 
> If you agree, you can keep the bz open (and assign it to me).
> 
> Thanks

Comment 36 jason wang 2023-08-10 07:11:35 UTC
(In reply to Lei Yang from comment #35)
> (In reply to jason wang from comment #34)
> > (In reply to Lei Yang from comment #32)
> > > Hello Jason and Laurent
> > > 
> > > Could you please help review this bug again? Just from the QE perspective
> > > the current test results are inconsistent with the original expected results
> > > of the bug. 
> > > Please let me know if this is a expect result, if this is expect result, QE
> > > will based on the Comment 31 test result to verified this bug. If not, looks
> > > like this bug also should be verified in advance, then create a new bug to
> > > track it.
> > > 
> > > Thanks
> > > Lei
> > 
> Hi Jason
> 
> > I think we still need some work in the upstream for vhost-kernel.
> 
> If these works are for vhost-kernel, QE think it should file a new kernel
> bug to track this problem.Then the current bug depends on this kernel bug.

Ok.

> If you agree to file a new kernel bug to track this problem, QE would like
> to confirm that it can be fixed on rhel 9.3?

I think not; we're lacking sufficient time to do that. This is 9.4 material.

Thanks

>  
> Thanks
> Lei
> > 
> > If you agree, you can keep the bz open (and assign it to me).
> > 
> > Thanks

Comment 37 Laurent Vivier 2023-08-21 09:22:31 UTC
Clearing NeedInfo

