Bug 1541960 - set rx_queue_size and tx_queue_size in qemu.conf
Summary: set rx_queue_size and tx_queue_size in qemu.conf
Keywords:
Status: CLOSED WONTFIX
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: libvirt
Version: 7.4
Hardware: All
OS: Linux
Priority: high
Severity: high
Target Milestone: rc
Target Release: ---
Assignee: Michal Privoznik
QA Contact: yalzhang@redhat.com
URL:
Whiteboard:
Depends On:
Blocks: 1540525
Reported: 2018-02-05 10:17 UTC by Michal Privoznik
Modified: 2018-02-13 12:51 UTC
CC List: 23 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of: 1540525
Environment:
Last Closed: 2018-02-13 12:51:22 UTC
Target Upstream Version:
Embargoed:


Attachments
log for comment 10: start vm with vhostuser interface on unsupported qemu (2.25 MB, text/plain), 2018-02-09 02:11 UTC, yalzhang@redhat.com


Links
Red Hat Bugzilla 1512941 (last updated 2023-03-21 17:54:36 UTC)

Description Michal Privoznik 2018-02-05 10:17:52 UTC
+++ This bug was initially created as a clone of Bug #1540525 +++

Description of problem:
Request to add a deployment option to director that sets rx_queue_size and tx_queue_size in /etc/libvirt/qemu.conf.

The current libvirt patch, proposed by Michal Privoznik (https://www.redhat.com/archives/libvir-list/2018-January/msg00608.html), enables setting rx_queue_size and tx_queue_size in /etc/libvirt/qemu.conf, which lets libvirt set <driver rx/tx_queue_size=''/> in the VM/instance XML when rx/tx_queue_size is not explicitly defined there.

If rx_queue_size or tx_queue_size is not explicitly set in /etc/libvirt/qemu.conf, or qemu-kvm does not support the queue size (for example, tx_queue_size on older qemu), and <driver rx/tx_queue_size=''/> is not set in the VM XML (which would be the case for OpenStack), rx/tx_queue_size falls back to qemu's default of 256. This bug tracks setting rx_queue_size and tx_queue_size on a per-host basis.
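
For illustration, the settings the patch introduces would look like this in /etc/libvirt/qemu.conf (a minimal sketch; as the negative tests in comment 8 show, qemu rejects values that are not powers of two between 256 and 1024):

rx_queue_size = 512
tx_queue_size = 512

followed by a libvirtd restart ("systemctl restart libvirtd") to pick up the new defaults.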

Once the libvirt patch is part of a release, a separate bug will be opened for OSP10 to track back-porting of this feature and libvirt to OSP10.

Upstream libvirt patch:
https://www.redhat.com/archives/libvir-list/2018-January/msg00608.html

--- Additional comment from Saravanan KR on 2018-01-31 11:24:28 CET ---

https://github.com/openstack/puppet-nova/blob/master/manifests/compute/libvirt/qemu.pp#L23

This puppet class takes care of updating the qemu.conf values.
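
A hypothetical invocation of that class, for illustration only (the class name follows Puppet autoloading for that file path; the parameter names are assumptions, not verified against puppet-nova):

class { 'nova::compute::libvirt::qemu':
  rx_queue_size => 1024,  # assumed parameter name
  tx_queue_size => 1024,  # assumed parameter name
}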

Comment 1 Michal Privoznik 2018-02-05 10:24:37 UTC
This is already committed upstream:

commit 038eb472a0d970a17ccf4343ead0666df5c92f9d
Author:     Michal Privoznik <mprivozn>
AuthorDate: Fri Jan 19 11:34:54 2018 +0100
Commit:     Michal Privoznik <mprivozn>
CommitDate: Fri Feb 2 07:09:22 2018 +0100

    qemu: Expose rx/tx_queue_size in qemu.conf too
    
    In 2074ef6cd4a2 and c56cdf259 (and friends) we've added two
    attributes to virtio NICs: rx_queue_size and tx_queue_size.
    However, sysadmins might want to set these on per-host basis but
    don't necessarily have an access to domain XML (e.g. because they
    are generated by some other app). So let's expose them under
    qemu.conf (the settings from domain XML still take precedence as
    they are more specific ones).
    
    Signed-off-by: Michal Privoznik <mprivozn>
    Reviewed-by: John Ferlan <jferlan>

v4.0.0-126-g038eb472a
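
To illustrate the precedence rule from the commit message, using the values exercised in the verification below: with "rx_queue_size = 1024" and "tx_queue_size = 512" in qemu.conf, and an explicit rx_queue_size='512' on the interface, the more specific domain XML value wins for rx while tx is filled in from qemu.conf:

qemu.conf:         rx_queue_size = 1024, tx_queue_size = 512
domain XML:        <driver name='vhost' rx_queue_size='512'/>
effective device:  -device virtio-net-pci,rx_queue_size=512,tx_queue_size=512,...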

Comment 8 yalzhang@redhat.com 2018-02-08 11:54:18 UTC
Tested on the packages below; the result is as expected, so setting the bug to verified:
libvirt-3.9.0-12.el7.x86_64
qemu-kvm-rhev-2.10.0-20.el7.x86_64

scenarios:
1. start guest with the setting in qemu.conf;
2. start guest with setting both in qemu.conf and xml;
3. hotplug interface;
4. hotplug device with or without the setting in xml;
5. migration;
6. save -> restore, and managedsave -> start;
7. negative: set invalid value in qemu.conf;
covering the interface types vhostuser, direct, network, and hostdev

Details:
S1:
1. set "rx_queue_size = 1024 tx_queue_size = 512" in qemu.conf, and restart libvirtd

2. set 'rx_queue_size' in the guest XML, then start the guest; the qemu command line and live guest XML automatically gain 'tx_queue_size' (from qemu.conf) while the XML's 'rx_queue_size' setting is kept.
# virsh dumpxml new | grep /interface -B6
    <interface type='network'>
      <mac address='52:54:00:53:cc:3f'/>
      <source network='default'/>
      <model type='virtio'/>
      <driver name='vhost' queues='5' rx_queue_size='512'/>
      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
    </interface>
# virsh start new
Domain new started
# virsh dumpxml new | grep /interface -B7
      <mac address='52:54:00:53:cc:3f'/>
      <source network='default' bridge='virbr0'/>
      <target dev='vnet0'/>
      <model type='virtio'/>
      <driver name='vhost' queues='5' rx_queue_size='512' tx_queue_size='512'/>
      <alias name='net0'/>
      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
    </interface>
qemu cmd:
-netdev tap,fds=27:29:30:31:32,id=hostnet0,vhost=on,vhostfds=33:34:35:36:37 -device virtio-net-pci,mq=on,vectors=12,rx_queue_size=512,tx_queue_size=512,netdev=hostnet0,id=net0,mac=52:54:00:53:cc:3f,bus=pci.1,addr=0x0

3. restart libvirtd, then check inside the guest; the result is as expected
# ethtool -g eth0
Ring parameters for eth0:
Pre-set maximums:
RX:		512
RX Mini:	0
RX Jumbo:	0
TX:		256
Current hardware settings:
RX:		512
RX Mini:	0
RX Jumbo:	0
TX:		256

4. hotplug an interface with the virtio model type; 'rx_queue_size' and 'tx_queue_size' are added
# virsh attach-interface new direct enp4s0f0 --model virtio 
# virsh dumpxml new | grep /interface -B9
...
<interface type='direct'>
      <mac address='52:54:00:49:9d:65'/>
      <source dev='enp4s0f0' mode='vepa'/>
      <target dev='macvtap0'/>
      <model type='virtio'/>
      <driver rx_queue_size='1024' tx_queue_size='512'/>
      <alias name='net1'/>
      <address type='pci' domain='0x0000' bus='0x08' slot='0x00' function='0x0'/>
    </interface>

5. for other model types, no 'rx_queue_size' or 'tx_queue_size' is added.

6. attach a device with the 'rx_queue_size' setting
# cat interface.xml
<interface type='direct'>
      <mac address='52:54:00:47:11:4d'/>
      <source network='direct-macvtap' dev='enp4s0f0' mode='bridge'/>
      <target dev='macvtap2'/>
      <model type='virtio'/>
      <driver name='vhost' queues='5' rx_queue_size='512'>
      </driver>
    </interface>
# virsh attach-device new interface.xml 
Device attached successfully

# virsh dumpxml new | grep /interface -B9
...
<interface type='direct'>
      <mac address='52:54:00:47:11:4d'/>
      <source dev='enp4s0f0' mode='bridge'/>
      <target dev='macvtap1'/>
      <model type='virtio'/>
      <driver name='vhost' queues='5' rx_queue_size='512' tx_queue_size='512'/>
      <alias name='net2'/>
      <address type='pci' domain='0x0000' bus='0x09' slot='0x00' function='0x0'/>
    </interface>

with macvtap passthrough mode, where the source dev is a VF:
# cat interface.xml
<interface type='direct'>
    <source dev='p6p1_3' mode='passthrough'/>
<model type='virtio'/>
  </interface>
# virsh attach-device rhel interface.xml
Device attached successfully
after attach:
<interface type='direct'>
      <mac address='52:54:00:cf:33:8d'/>
      <source dev='p6p1_3' mode='passthrough'/>
      <target dev='macvtap0'/>
      <model type='virtio'/>
      <driver rx_queue_size='1024' tx_queue_size='1024'/>
      <alias name='net1'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x0b' function='0x0'/>
    </interface>

for the hostdev interface type:
# cat interface.xml
<interface type='hostdev' managed='yes'>
    <driver name='vfio'/>
    <source>
      <address type='pci' domain='0x0000' bus='0x04' slot='0x10' function='0x4'/>
    </source>
<model type='virtio'/>   ===> this is an invalid setting
  </interface>
# virsh attach-device rhel interface.xml 
Device attached successfully
after attach, no rx* or tx* attribute is added; this is expected:
<interface type='hostdev' managed='yes'>
      <mac address='52:54:00:df:a1:f9'/>
      <driver name='vfio'/>
      <source>
        <address type='pci' domain='0x0000' bus='0x04' slot='0x10' function='0x4'/>
      </source>
      <model type='virtio'/>
      <alias name='hostdev2'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x0c' function='0x0'/>
    </interface>

S2: 
Tested the vhostuser interface type and migration; everything works as expected.

S3: 
negative test: set invalid values in qemu.conf, such as "rx_queue_size = 100000 tx_queue_size = 8"
# systemctl restart libvirtd
start a guest with a virtio interface:
# virsh start new
error: Failed to start domain new
error: internal error: qemu unexpectedly closed the monitor: 2018-02-08T11:25:56.289230Z qemu-kvm: -device virtio-net-pci,mq=on,vectors=12,rx_queue_size=100000,tx_queue_size=8,netdev=hostnet0,id=net0,mac=52:54:00:53:cc:3f,bus=pci.1,addr=0x0: Parameter 'rx_queue_size' expects uint16_t

set as "rx_queue_size = 100 tx_queue_size = 8"
# virsh start new
error: Failed to start domain new
error: internal error: qemu unexpectedly closed the monitor: 2018-02-08T11:27:06.575614Z qemu-kvm: -device virtio-net-pci,mq=on,vectors=12,rx_queue_size=100,tx_queue_size=8,netdev=hostnet0,id=net0,mac=52:54:00:53:cc:3f,bus=pci.1,addr=0x0: Invalid rx_queue_size (= 100), must be a power of 2 between 256 and 1024.

Comment 9 yalzhang@redhat.com 2018-02-08 13:38:34 UTC
Hi Michal, could you please check whether the error message in step 3 for the vhostuser interface type is acceptable? It is not as detailed as for the other interface types. Thank you very much!


Test with a qemu-kvm-rhev that does not support tx_queue_size:

# rpm -q libvirt qemu-kvm-rhev
libvirt-3.9.0-12.el7.x86_64
qemu-kvm-rhev-2.9.0-16.el7_4.1.x86_64

1. set 'tx_queue_size = 1024 rx_queue_size = 1024' in qemu.conf
2. restart libvirtd
3. start guest
# virsh dumpxml new | grep /interface -B6
    <interface type='network'>
      <mac address='52:54:00:53:cc:3f'/>
      <source network='default'/>
      <model type='virtio'/>
      <driver name='vhost' queues='5'/>
      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
    </interface>

# virsh start new
error: Failed to start domain new
error: unsupported configuration: virtio tx_queue_size option is not supported with this QEMU binary

for the vhostuser interface type:
# virsh dumpxml test | grep /interface -B5
    <interface type='vhostuser'>
      <mac address='52:54:00:93:51:dd'/>
      <source type='unix' path='/var/run/openvswitch/vhost-user2' mode='client'/>
      <model type='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </interface>

# virsh start test
error: Failed to start domain test
error: internal error: Error generating NIC -device string

Comment 10 Michal Privoznik 2018-02-08 14:23:32 UTC
(In reply to yalzhang from comment #9)
> Hi Michal, could you please help to check if the error message in step 3 for
> vhostuser type interface is accepted? As it is not so detailed as other type
> interfaces. Thank you very much!
> 
> 
> Test with qemu-kvm-rhev which do not support tx_queue_size
> 
> # rpm -q libvirt qemu-kvm-rhev
> libvirt-3.9.0-12.el7.x86_64
> qemu-kvm-rhev-2.9.0-16.el7_4.1.x86_64
> 
> 1. set 'tx_queue_size = 1024 rx_queue_size = 1024' in qemu.conf
> 2. restart libvirtd
> 3. start guest
> # virsh dumpxml new | grep /interface -B6
>     <interface type='network'>
>       <mac address='52:54:00:53:cc:3f'/>
>       <source network='default'/>
>       <model type='virtio'/>
>       <driver name='vhost' queues='5'/>
>       <address type='pci' domain='0x0000' bus='0x01' slot='0x00'
> function='0x0'/>
>     </interface>
> 
> # virsh start new
> error: Failed to start domain new
> error: unsupported configuration: virtio tx_queue_size option is not
> supported with this QEMU binary

This is expected. You requested a configuration that qemu doesn't support. Instead of silently ignoring the requested setting, libvirt errors out loudly.

> 
> for vhostuser type interface:
> # virsh dumpxml test | grep /interface -B5
>     <interface type='vhostuser'>
>       <mac address='52:54:00:93:51:dd'/>
>       <source type='unix' path='/var/run/openvswitch/vhost-user2'
> mode='client'/>
>       <model type='virtio'/>
>       <address type='pci' domain='0x0000' bus='0x00' slot='0x03'
> function='0x0'/>
>     </interface>
> 
> # virsh start test
> error: Failed to start domain test
> error: internal error: Error generating NIC -device string

This is interesting - I'm unable to reproduce. Can you please attach full debug logs? Thanks.

Comment 11 yalzhang@redhat.com 2018-02-09 02:11:22 UTC
Created attachment 1393490 [details]
log for comment 10 start vm with vhostuser interface on unsupported qemu

Retested the scenario from comment 10 on these packages:
libvirt-3.9.0-12.el7.x86_64
qemu-kvm-rhev-2.9.0-16.el7_4.1.x86_64

The log shows:
# grep error libvirtd.log
2018-02-09 01:52:49.053+0000: 13705: error : qemuBuildNicDevStr:3877 : unsupported configuration: virtio tx_queue_size option is not supported with this QEMU binary
2018-02-09 01:52:49.053+0000: 13705: error : qemuBuildVhostuserCommandLine:8540 : internal error: Error generating NIC -device string

one more scenario with unsupported qemu:
1. no tx_queue_size or rx_queue_size setting in qemu.conf

2. start vm with vhostuser interface

3. set tx_queue_size & rx_queue_size in qemu.conf and restart libvirtd

4. attach-device to hotplug a vhostuser interface; it fails with:
#  virsh attach-device test interface.xml 
error: Failed to attach device from interface.xml
error: unsupported configuration: virtio tx_queue_size option is not supported with this QEMU binary

This is expected.

Comment 12 Daniel Berrangé 2018-02-12 12:15:41 UTC
On further review, the consensus opinion is that this patch is flawed and should never have been added to libvirt. It is going to be reverted in RHEL builds and upstream. Putting back to assigned until the patch is reverted.

Comment 13 Franck Baudin 2018-02-12 12:25:18 UTC
(In reply to Daniel Berrange from comment #12)
> On further review, the consensus opinion is that this patch is flawed and
> should never have been added to libvirt. It is going to be reverted in RHEL
> builds and upstream. Putting back to assigned until the patch is reverted.

Do we have a consensus on the "proper" solution?

Comment 14 Daniel Berrangé 2018-02-12 12:27:09 UTC
(In reply to Franck Baudin from comment #13)
> Do we have a consensus on the "proper" solution?

If a mgmt app wants to have specific values for these settings, it should provide a way to set them itself. Libvirt isn't going to add config options to qemu.conf for arbitrary guest XML elements that an mgmt app hasn't got around to supporting itself yet. IOW this is nova's job to solve.
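
For reference, the per-interface XML that a management application would generate itself looks like this (the same syntax exercised throughout this bug):

<interface type='network'>
  <source network='default'/>
  <model type='virtio'/>
  <driver name='vhost' rx_queue_size='1024' tx_queue_size='1024'/>
</interface>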

Comment 16 Jiri Denemark 2018-02-13 12:51:22 UTC
Reverted in libvirt-3.9.0-13.el7.

