Bug 1925363 - RFE: Allow virtio-net-pci.page-per-vq to be enabled without using qemu:commandline XML
Summary: RFE: Allow virtio-net-pci.page-per-vq to be enabled without using qemu:commandline XML
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 9
Classification: Red Hat
Component: libvirt
Version: 9.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: rc
Target Release: ---
Assignee: Jonathon Jongsma
QA Contact: yalzhang@redhat.com
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2021-02-05 02:19 UTC by Moshe Levi
Modified: 2022-05-17 13:03 UTC
CC List: 8 users

Fixed In Version: libvirt-7.9.0-1.el9
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2022-05-17 12:45:05 UTC
Type: Feature Request
Target Upstream Version: 7.9.0
Embargoed:


Attachments: none


Links
Github autotest tp-libvirt pull 4054 (Merged): Add cases about virtio driver option page_per_vq. Last updated: 2022-03-09 10:07:45 UTC
Red Hat Product Errata RHBA-2022:2390. Last updated: 2022-05-17 12:45:31 UTC

Description Moshe Levi 2021-02-05 02:19:07 UTC
Description of problem:
The page-per-vq flag is important for vDPA with vhost-user performance; see [1].
Currently there is no way to set page-per-vq in the libvirt domain XML except by passing qemu args directly:


  <devices>
   <interface type='vhostuser'>
      <mac address='fa:16:3e:92:6d:79'/>
      <source type='unix' path='/var/lib/vhost_sockets/sock7f9a971a-cf3' mode='server'/>
      <model type='virtio'/>
      <driver queues='4' rx_queue_size='512' tx_queue_size='512'/>
      <alias name='net0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </interface>
  </devices>
  <qemu:commandline>
    <qemu:arg value='-set'/>
    <qemu:arg value='device.net0.page-per-vq=on'/>
  </qemu:commandline>

[1] - http://doc.dpdk.org/guides/sample_app_ug/vdpa.html look for page-per-vq
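
For reference, the qemu:commandline passthrough above is only accepted when the qemu XML namespace is declared on the domain root element; a minimal sketch:

<domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
  ...
</domain>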

Comment 2 Jaroslav Suchanek 2021-02-22 09:50:35 UTC
(In reply to Moshe Levi from comment #0)
> Description of problem:
> 

Please provide a full description of the problem: what is required and what outcome is expected. Also please add pointers to relevant upstream discussion, if there is any.

Comment 0 is editable. You can fix it yourself, or I will do it later.

Thanks.

Comment 3 Moshe Levi 2021-03-18 07:24:41 UTC
I updated the description. Let me know if that is enough.

Comment 4 John Ferlan 2021-09-08 13:30:54 UTC
Bulk update: Move RHEL-AV bugs to RHEL9. If necessary to resolve in RHEL8, then clone to the current RHEL8 release.

Comment 5 Jonathon Jongsma 2021-09-17 20:20:08 UTC
Han Han posted a proposed patch series a little while back:

https://listman.redhat.com/archives/libvir-list/2021-September/msg00087.html

Comment 6 Michal Privoznik 2021-10-15 07:43:05 UTC
Patches merged upstream:

d139171d80 qemu: Add support for virtio device option page-per-vq
388cdd11f3 conf: Add page_per_vq for driver element

v7.8.0-205-gd139171d80
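
With these patches, the option is expressed natively in the domain XML as a page_per_vq attribute on the interface's <driver> element, for example:

<driver page_per_vq='on'/>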

Comment 9 yalzhang@redhat.com 2021-12-09 05:22:31 UTC
Tested with libvirt-7.10.0-1.el9.x86_64, with an interface as below:
# virsh dumpxml vm | grep /interface -B12
 <interface type='network'>
      <mac address='52:54:00:93:51:f1'/>
      <source network='default' portid='aa8f2465-749e-4d8d-b46f-b070bb75f939' bridge='virbr0' macTableManager='libvirt'/>
      <target dev='vnet7'/>
      <model type='virtio'/>
      <driver page_per_vq='on'/>
      <alias name='net0'/>
      <address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
    </interface>

The qemu command line is as expected:
-netdev tap,fd=24,id=hostnet0,vhost=on,vhostfd=25 -device virtio-net-pci,page-per-vq=on,netdev=hostnet0,id=net0,mac=52:54:00:93:51:f1,bus=pci.7,addr=0x0
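
For reference, one way to pull that command line out of the logs, assuming the default libvirt log location for the domain named vm, is:

# grep page-per-vq /var/log/libvirt/qemu/vm.log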

Set verified: tested

Comment 12 yalzhang@redhat.com 2021-12-23 06:06:07 UTC
Tested on libvirt-7.10.0-1.el9.x86_64 with qemu-kvm-6.2.0-1.el9.x86_64.
1. Start the vm with driver page_per_vq='on' or 'off' and check the qemu command line:
# virsh dumpxml rhel9 --inactive | grep /interface -B9
    <interface type='network'>
      <mac address='52:54:00:dc:04:44'/>
      <source network='default'/>
      <model type='virtio'/>
      <driver name='vhost' txmode='iothread' ioeventfd='on' event_idx='off' queues='5' rx_queue_size='256' tx_queue_size='256' page_per_vq='on'>
        <host csum='off' gso='off' tso4='off' tso6='off' ecn='off' ufo='off' mrg_rxbuf='off'/>
        <guest csum='off' tso4='off' tso6='off' ecn='off' ufo='off'/>
      </driver>
      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
    </interface>

-netdev tap,fds=24:25:26:27:28,id=hostnet0,vhost=on,vhostfds=29:30:31:32:33 
-device {"driver":"virtio-net-pci","page-per-vq":true,"tx":"bh","ioeventfd":true,"event_idx":false,"csum":false,"gso":false,"host_tso4":false,"host_tso6":false,"host_ecn":false,"host_ufo":false,"mrg_rxbuf":false,"guest_csum":false,"guest_tso4":false,"guest_tso6":false,"guest_ecn":false,"guest_ufo":false,"mq":true,"vectors":12,"rx_queue_size":256,"tx_queue_size":256,"netdev":"hostnet0","id":"net0","mac":"52:54:00:dc:04:44","bus":"pci.1","addr":"0x0"}

When booting with page_per_vq='off':
-netdev tap,fds=24:25:26:27:28,id=hostnet0,vhost=on,vhostfds=29:30:31:32:33 -device {"driver":"virtio-net-pci","page-per-vq":false,......

The result is as expected.

2. Live update of the interface for this option fails, while cold update succeeds, which is expected (cold update sketched after the error output):
# virsh update-device rhel9 update_on.xml 
error: Failed to update device from update_on.xml
error: Operation not supported: cannot modify virtio network device driver options
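
For reference, the cold update that succeeds is the persistent-config variant of the same command, reusing the same update_on.xml:

# virsh update-device rhel9 update_on.xml --config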

3. Hotplug and hotunplug with the page_per_vq setting succeed (sketched below).
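
A sketch of the step 3 hotplug/unplug, assuming a hypothetical device XML file net_on.xml carrying the page_per_vq setting:

# virsh attach-device rhel9 net_on.xml --live
# virsh detach-device rhel9 net_on.xml --live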

Comment 13 yalzhang@redhat.com 2022-02-16 03:02:50 UTC
Hi, I found that hotplug does not work properly: the interface cannot be initialized by the guest OS. Please help to check it, thank you!

# rpm -q libvirt qemu-kvm
libvirt-8.0.0-4.el9.x86_64
qemu-kvm-6.2.0-8.el9.x86_64

guest kernel:5.14.0-55.el9.x86_64

1. Start a guest;

2. Hotplug an interface as below:
# cat net2.xml
<interface type="network">
	<source network="default"/>
	<model type="virtio"/>
	<mac address="52:54:27:20:4a:33"/>
	<driver page_per_vq="on"/>
</interface>

# virsh attach-device test net2.xml
Device attached successfully

3. On the guest, observe that the interface cannot be initialized:
[root@localhost ~]# [   44.849453] pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000
[   44.850657] pci 0000:01:00.0: reg 0x14: [mem 0x00000000-0x00000fff]
[   44.851876] pci 0000:01:00.0: reg 0x20: [mem 0x00000000-0x007fffff 64bit pref]
[   44.853124] pci 0000:01:00.0: reg 0x30: [mem 0x00000000-0x0003ffff pref]
[   44.860040] pci 0000:01:00.0: BAR 4: no space for [mem size 0x00800000 64bit pref]
[   44.861323] pci 0000:01:00.0: BAR 4: failed to assign [mem size 0x00800000 64bit pref]
[   44.862637] pci 0000:01:00.0: BAR 6: assigned [mem 0xfe800000-0xfe83ffff pref]
[   44.863835] pci 0000:01:00.0: BAR 1: assigned [mem 0xfe840000-0xfe840fff]
[   44.866765] virtio-pci 0000:01:00.0: enabling device (0000 -> 0002)
[   44.869179] virtio-pci 0000:01:00.0: virtio_pci: leaving for legacy driver

[root@localhost ~]# ifconfig -a
lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

4. Starting the vm with an interface set with page_per_vq="on" works well;
hotplugging an interface with page_per_vq="off" works well;
hotplugging an interface without the page_per_vq attribute works well, too.

Comment 14 Jonathon Jongsma 2022-02-17 17:28:24 UTC
I've reproduced this behaviour on a rhel9 vm host (qemu 6.2.0) with a nested rhel9 guest, but could not reproduce it on my development laptop with f35 host (qemu 6.1.0) and a rhel9 guest. 

For the rhel9 host, the guest kernel log when page_per_vq is "on" is shown below and matches yours above:

Feb xx xx:xx:xx localhost.localdomain kernel: pci 0000:06:00.0: [1af4:1041] type 00 class 0x020000
Feb xx xx:xx:xx localhost.localdomain kernel: pci 0000:06:00.0: reg 0x14: [mem 0x00000000-0x00000fff]
Feb xx xx:xx:xx localhost.localdomain kernel: pci 0000:06:00.0: reg 0x20: [mem 0x00000000-0x007fffff 64bit pref]
Feb xx xx:xx:xx localhost.localdomain kernel: pci 0000:06:00.0: reg 0x30: [mem 0x00000000-0x0003ffff pref]
Feb xx xx:xx:xx localhost.localdomain kernel: pci 0000:06:00.0: BAR 4: no space for [mem size 0x00800000 64bit pref]
Feb xx xx:xx:xx localhost.localdomain kernel: pci 0000:06:00.0: BAR 4: failed to assign [mem size 0x00800000 64bit pref]
Feb xx xx:xx:xx localhost.localdomain kernel: pci 0000:06:00.0: BAR 6: assigned [mem 0xfde00000-0xfde3ffff pref]
Feb xx xx:xx:xx localhost.localdomain kernel: pci 0000:06:00.0: BAR 1: assigned [mem 0xfde40000-0xfde40fff]
Feb xx xx:xx:xx localhost.localdomain kernel: virtio-pci 0000:06:00.0: enabling device (0000 -> 0002)
Feb xx xx:xx:xx localhost.localdomain kernel: virtio-pci 0000:06:00.0: virtio_pci: leaving for legacy driver


When I test on the f35 host, the device is attached successfully regardless of the page_per_vq setting, and the kernel logs for the 'on' setting are as follows:

Feb 17 11:46:36 localhost.localdomain kernel: pci 0000:04:00.0: [1af4:1041] type 00 class 0x020000
Feb 17 11:46:36 localhost.localdomain kernel: pci 0000:04:00.0: reg 0x14: [mem 0x00000000-0x00000fff]
Feb 17 11:46:36 localhost.localdomain kernel: pci 0000:04:00.0: reg 0x20: [mem 0x00000000-0x007fffff 64bit pref]
Feb 17 11:46:36 localhost.localdomain kernel: pci 0000:04:00.0: reg 0x30: [mem 0x00000000-0x0003ffff pref]
Feb 17 11:46:36 localhost.localdomain kernel: pci 0000:04:00.0: BAR 4: no space for [mem size 0x00800000 64bit pref]
Feb 17 11:46:36 localhost.localdomain kernel: pci 0000:04:00.0: BAR 4: failed to assign [mem size 0x00800000 64bit pref]
Feb 17 11:46:36 localhost.localdomain kernel: pci 0000:04:00.0: BAR 6: assigned [mem 0xfe200000-0xfe23ffff pref]
Feb 17 11:46:36 localhost.localdomain kernel: pci 0000:04:00.0: BAR 1: assigned [mem 0xfe240000-0xfe240fff]
Feb 17 11:46:36 localhost.localdomain kernel: pcieport 0000:00:02.3: PCI bridge to [bus 04]
Feb 17 11:46:36 localhost.localdomain kernel: pcieport 0000:00:02.3:   bridge window [io  0x3000-0x3fff]
Feb 17 11:46:36 localhost.localdomain kernel: pcieport 0000:00:02.3:   bridge window [mem 0xfe200000-0xfe3fffff]
Feb 17 11:46:36 localhost.localdomain kernel: pcieport 0000:00:02.3:   bridge window [mem 0xfd800000-0xfd9fffff 64bit pref]
Feb 17 11:46:36 localhost.localdomain kernel: PCI: No. 2 try to assign unassigned res
Feb 17 11:46:36 localhost.localdomain kernel: pcieport 0000:00:02.3: resource 15 [mem 0xfd800000-0xfd9fffff 64bit pref] released
Feb 17 11:46:36 localhost.localdomain kernel: pcieport 0000:00:02.3: PCI bridge to [bus 04]
Feb 17 11:46:36 localhost.localdomain kernel: pcieport 0000:00:02.3: BAR 15: assigned [mem 0x180000000-0x1807fffff 64bit pref]
Feb 17 11:46:36 localhost.localdomain kernel: pci 0000:04:00.0: BAR 4: assigned [mem 0x180000000-0x1807fffff 64bit pref]
Feb 17 11:46:36 localhost.localdomain kernel: pcieport 0000:00:02.3: PCI bridge to [bus 04]
Feb 17 11:46:36 localhost.localdomain kernel: pcieport 0000:00:02.3:   bridge window [io  0x3000-0x3fff]
Feb 17 11:46:36 localhost.localdomain kernel: pcieport 0000:00:02.3:   bridge window [mem 0xfe200000-0xfe3fffff]
Feb 17 11:46:36 localhost.localdomain kernel: pcieport 0000:00:02.3:   bridge window [mem 0x180000000-0x1807fffff 64bit pref]
Feb 17 11:46:36 localhost.localdomain kernel: virtio-pci 0000:04:00.0: enabling device (0000 -> 0002)
Feb 17 11:46:37 localhost.localdomain kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth1: link becomes ready

I don't believe that libvirt is doing anything different between these two cases, since I tested the same libvirt version on fedora that I was using on rhel9. We're really just passing the page-per-vq setting along to qemu. And since the guest is the same in both cases, that seems to indicate that the issue is not with the guest driver, etc. That leaves qemu as the most likely source of the behavior difference in my mind.
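
One way to narrow this down on the qemu side is to list the device's properties directly on each host and compare; a sketch, assuming the qemu binary is qemu-system-x86_64 (on RHEL it may be /usr/libexec/qemu-kvm):

# qemu-system-x86_64 -device virtio-net-pci,help | grep page-per-vq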

Comment 15 yalzhang@redhat.com 2022-02-20 08:42:02 UTC
Hi Jonathon, thank you for the information. I tried it and found it is related to the "pc-q35-rhel8.6.0" machine type, and filed a bug on qemu-kvm:
Bug 2056230 - Hotplug virtio interface with page_per_vq can not be initialized properly on vm with "pc-q35-rhel8.6.0" machine type
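
For reference, a quick way to check which machine type a guest uses; a sketch against the domain named test, with the value identified above shown as illustrative output:

# virsh dumpxml test | grep machine
  <type arch='x86_64' machine='pc-q35-rhel8.6.0'>hvm</type>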

Comment 17 errata-xmlrpc 2022-05-17 12:45:05 UTC
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (new packages: libvirt), and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2022:2390

