Bug 1925363
| Summary: | RFE: Allow virtio-net-pci.page-per-vq to be enabled without using qemu:commandline XML | | |
|---|---|---|---|
| Product: | Red Hat Enterprise Linux 9 | Reporter: | Moshe Levi <moshele> |
| Component: | libvirt | Assignee: | Jonathon Jongsma <jjongsma> |
| libvirt sub component: | Networking | QA Contact: | yalzhang <yalzhang> |
| Status: | CLOSED ERRATA | Docs Contact: | |
| Severity: | unspecified | | |
| Priority: | unspecified | CC: | dzheng, jdenemar, jjongsma, jsuchane, lmen, mprivozn, virt-maint, xuzhang |
| Version: | 9.0 | Keywords: | FutureFeature, Triaged, Upstream |
| Target Milestone: | rc | | |
| Target Release: | --- | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | libvirt-7.9.0-1.el9 | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2022-05-17 12:45:05 UTC | Type: | Feature Request |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | 7.9.0 |
| Embargoed: | | | |
Description
Moshe Levi
2021-02-05 02:19:07 UTC
(In reply to Moshe Levi from comment #0)
> Description of problem:

Please provide a full description of the problem: what is required and what outcome is expected. Also please add pointers to the relevant upstream discussion, if there is any. Comment 0 is editable; you can fix it yourself, or I will do it later. Thanks.

I updated the description. Let me know if that is enough.

Bulk update: Move RHEL-AV bugs to RHEL9. If it is necessary to resolve this in RHEL8, clone to the current RHEL8 release.

Han Han posted a proposed patch series a little while back:
https://listman.redhat.com/archives/libvir-list/2021-September/msg00087.html

Patches merged upstream:

    d139171d80 qemu: Add support for virtio device option page-per-vq
    388cdd11f3 conf: Add page_per_vq for driver element

    v7.8.0-205-gd139171d80

Tested with libvirt-7.10.0-1.el9.x86_64, with an interface as below:

    # virsh dumpxml vm | grep /interface -B12
    <interface type='network'>
      <mac address='52:54:00:93:51:f1'/>
      <source network='default' portid='aa8f2465-749e-4d8d-b46f-b070bb75f939' bridge='virbr0' macTableManager='libvirt'/>
      <target dev='vnet7'/>
      <model type='virtio'/>
      <driver page_per_vq='on'/>
      <alias name='net0'/>
      <address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
    </interface>

The qemu command line is as expected:

    -netdev tap,fd=24,id=hostnet0,vhost=on,vhostfd=25
    -device virtio-net-pci,page-per-vq=on,netdev=hostnet0,id=net0,mac=52:54:00:93:51:f1,bus=pci.7,addr=0x0

Set verified: tested.

Test on libvirt-7.10.0-1.el9.x86_64 with qemu-kvm-6.2.0-1.el9.x86_64:
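For context, the workaround this RFE replaces relied on libvirt's qemu XML namespace to pass the property straight to qemu. A minimal sketch of the two approaches follows; the `qemu:commandline` arguments and the `net0` device id in the first fragment are illustrative assumptions, not taken from this report:

```xml
<!-- Old workaround (hypothetical sketch): pass page-per-vq through the qemu
     namespace with qemu's -set option; 'net0' is an assumed device alias. -->
<domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
  ...
  <qemu:commandline>
    <qemu:arg value='-set'/>
    <qemu:arg value='device.net0.page-per-vq=on'/>
  </qemu:commandline>
</domain>

<!-- Native syntax added by the patches above (libvirt >= 7.9.0): set
     page_per_vq directly on the interface's driver element. -->
<interface type='network'>
  <source network='default'/>
  <model type='virtio'/>
  <driver page_per_vq='on'/>
</interface>
```

The native form lets libvirt validate the option and keeps the domain XML portable, instead of bypassing libvirt's device model entirely.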
1. Start the VM with driver page_per_vq='on' or 'off' and check the qemu command line:

    # virsh dumpxml rhel9 --inactive | grep /interface -B9
    <interface type='network'>
      <mac address='52:54:00:dc:04:44'/>
      <source network='default'/>
      <model type='virtio'/>
      <driver name='vhost' txmode='iothread' ioeventfd='on' event_idx='off' queues='5' rx_queue_size='256' tx_queue_size='256' page_per_vq='on'>
        <host csum='off' gso='off' tso4='off' tso6='off' ecn='off' ufo='off' mrg_rxbuf='off'/>
        <guest csum='off' tso4='off' tso6='off' ecn='off' ufo='off'/>
      </driver>
      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
    </interface>

    -netdev tap,fds=24:25:26:27:28,id=hostnet0,vhost=on,vhostfds=29:30:31:32:33
    -device {"driver":"virtio-net-pci","page-per-vq":true,"tx":"bh","ioeventfd":true,"event_idx":false,"csum":false,"gso":false,"host_tso4":false,"host_tso6":false,"host_ecn":false,"host_ufo":false,"mrg_rxbuf":false,"guest_csum":false,"guest_tso4":false,"guest_tso6":false,"guest_ecn":false,"guest_ufo":false,"mq":true,"vectors":12,"rx_queue_size":256,"tx_queue_size":256,"netdev":"hostnet0","id":"net0","mac":"52:54:00:dc:04:44","bus":"pci.1","addr":"0x0"}

When booted with page_per_vq='off':

    -netdev tap,fds=24:25:26:27:28,id=hostnet0,vhost=on,vhostfds=29:30:31:32:33
    -device {"driver":"virtio-net-pci","page-per-vq":false,......

The result is as expected.

2. Live update of the interface with this option fails, while cold update succeeds, which is expected:

    # virsh update-device rhel9 update_on.xml
    error: Failed to update device from update_on.xml
    error: Operation not supported: cannot modify virtio network device driver options

3. Hotplug and hotunplug with the page_per_vq setting succeed.

Hi, I found that hotplug does not work properly: the interface cannot be initialized by the guest OS. Please help to check it, thank you!

    # rpm -q libvirt qemu-kvm
    libvirt-8.0.0-4.el9.x86_64
    qemu-kvm-6.2.0-8.el9.x86_64

Guest kernel: 5.14.0-55.el9.x86_64

1. Start a guest.
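Step 1 above checks the inactive XML by eye. The same check can be scripted; the sketch below embeds a sample interface fragment so it is self-contained, but on a real host you would pipe in `virsh dumpxml <vm> --inactive` instead (the embedded XML and the VM name are assumptions for illustration):

```shell
# Extract the page_per_vq setting from a domain XML dump without starting
# the guest. The here-variable stands in for `virsh dumpxml rhel9 --inactive`.
xml="<interface type='network'>
  <model type='virtio'/>
  <driver name='vhost' page_per_vq='on'/>
</interface>"

# grep -o prints only the matching attribute, one per interface.
result=$(printf '%s\n' "$xml" | grep -o "page_per_vq='[a-z]*'")
echo "$result"
```

This prints `page_per_vq='on'` for the sample fragment; an empty result means the attribute is not set and qemu will use its default.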
2. Hotplug an interface as below:

    # cat net2.xml
    <interface type="network">
      <source network="default"/>
      <model type="virtio"/>
      <mac address="52:54:27:20:4a:33"/>
      <driver page_per_vq="on"/>
    </interface>

    # virsh attach-device test net2.xml
    Device attached successfully

3. On the guest, observe that the interface cannot be initialized:

    [root@localhost ~]# [   44.849453] pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000
    [   44.850657] pci 0000:01:00.0: reg 0x14: [mem 0x00000000-0x00000fff]
    [   44.851876] pci 0000:01:00.0: reg 0x20: [mem 0x00000000-0x007fffff 64bit pref]
    [   44.853124] pci 0000:01:00.0: reg 0x30: [mem 0x00000000-0x0003ffff pref]
    [   44.860040] pci 0000:01:00.0: BAR 4: no space for [mem size 0x00800000 64bit pref]
    [   44.861323] pci 0000:01:00.0: BAR 4: failed to assign [mem size 0x00800000 64bit pref]
    [   44.862637] pci 0000:01:00.0: BAR 6: assigned [mem 0xfe800000-0xfe83ffff pref]
    [   44.863835] pci 0000:01:00.0: BAR 1: assigned [mem 0xfe840000-0xfe840fff]
    [   44.866765] virtio-pci 0000:01:00.0: enabling device (0000 -> 0002)
    [   44.869179] virtio-pci 0000:01:00.0: virtio_pci: leaving for legacy driver

    [root@localhost ~]# ifconfig -a
    lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

4. Starting the VM with an interface set to page_per_vq="on" works well; hotplugging an interface with page_per_vq="off" works well; hotplugging an interface without any page_per_vq setting works well, too.

I've reproduced this behaviour on a rhel9 VM host (qemu 6.2.0) with a nested rhel9 guest, but could not reproduce it on my development laptop with an f35 host (qemu 6.1.0) and a rhel9 guest.
For the rhel9 host, the guest kernel log when page_per_vq is "on" is shown below and matches yours above:

    Feb xx xx:xx:xx localhost.localdomain kernel: pci 0000:06:00.0: [1af4:1041] type 00 class 0x020000
    Feb xx xx:xx:xx localhost.localdomain kernel: pci 0000:06:00.0: reg 0x14: [mem 0x00000000-0x00000fff]
    Feb xx xx:xx:xx localhost.localdomain kernel: pci 0000:06:00.0: reg 0x20: [mem 0x00000000-0x007fffff 64bit pref]
    Feb xx xx:xx:xx localhost.localdomain kernel: pci 0000:06:00.0: reg 0x30: [mem 0x00000000-0x0003ffff pref]
    Feb xx xx:xx:xx localhost.localdomain kernel: pci 0000:06:00.0: BAR 4: no space for [mem size 0x00800000 64bit pref]
    Feb xx xx:xx:xx localhost.localdomain kernel: pci 0000:06:00.0: BAR 4: failed to assign [mem size 0x00800000 64bit pref]
    Feb xx xx:xx:xx localhost.localdomain kernel: pci 0000:06:00.0: BAR 6: assigned [mem 0xfde00000-0xfde3ffff pref]
    Feb xx xx:xx:xx localhost.localdomain kernel: pci 0000:06:00.0: BAR 1: assigned [mem 0xfde40000-0xfde40fff]
    Feb xx xx:xx:xx localhost.localdomain kernel: virtio-pci 0000:06:00.0: enabling device (0000 -> 0002)
    Feb xx xx:xx:xx localhost.localdomain kernel: virtio-pci 0000:06:00.0: virtio_pci: leaving for legacy driver

When I test on the f35 host, the device is attached successfully regardless of the page_per_vq setting, and the kernel log for the 'on' setting is as follows:

    Feb 17 11:46:36 localhost.localdomain kernel: pci 0000:04:00.0: [1af4:1041] type 00 class 0x020000
    Feb 17 11:46:36 localhost.localdomain kernel: pci 0000:04:00.0: reg 0x14: [mem 0x00000000-0x00000fff]
    Feb 17 11:46:36 localhost.localdomain kernel: pci 0000:04:00.0: reg 0x20: [mem 0x00000000-0x007fffff 64bit pref]
    Feb 17 11:46:36 localhost.localdomain kernel: pci 0000:04:00.0: reg 0x30: [mem 0x00000000-0x0003ffff pref]
    Feb 17 11:46:36 localhost.localdomain kernel: pci 0000:04:00.0: BAR 4: no space for [mem size 0x00800000 64bit pref]
    Feb 17 11:46:36 localhost.localdomain kernel: pci 0000:04:00.0: BAR 4: failed to assign [mem size 0x00800000 64bit pref]
    Feb 17 11:46:36 localhost.localdomain kernel: pci 0000:04:00.0: BAR 6: assigned [mem 0xfe200000-0xfe23ffff pref]
    Feb 17 11:46:36 localhost.localdomain kernel: pci 0000:04:00.0: BAR 1: assigned [mem 0xfe240000-0xfe240fff]
    Feb 17 11:46:36 localhost.localdomain kernel: pcieport 0000:00:02.3: PCI bridge to [bus 04]
    Feb 17 11:46:36 localhost.localdomain kernel: pcieport 0000:00:02.3: bridge window [io 0x3000-0x3fff]
    Feb 17 11:46:36 localhost.localdomain kernel: pcieport 0000:00:02.3: bridge window [mem 0xfe200000-0xfe3fffff]
    Feb 17 11:46:36 localhost.localdomain kernel: pcieport 0000:00:02.3: bridge window [mem 0xfd800000-0xfd9fffff 64bit pref]
    Feb 17 11:46:36 localhost.localdomain kernel: PCI: No. 2 try to assign unassigned res
    Feb 17 11:46:36 localhost.localdomain kernel: pcieport 0000:00:02.3: resource 15 [mem 0xfd800000-0xfd9fffff 64bit pref] released
    Feb 17 11:46:36 localhost.localdomain kernel: pcieport 0000:00:02.3: PCI bridge to [bus 04]
    Feb 17 11:46:36 localhost.localdomain kernel: pcieport 0000:00:02.3: BAR 15: assigned [mem 0x180000000-0x1807fffff 64bit pref]
    Feb 17 11:46:36 localhost.localdomain kernel: pci 0000:04:00.0: BAR 4: assigned [mem 0x180000000-0x1807fffff 64bit pref]
    Feb 17 11:46:36 localhost.localdomain kernel: pcieport 0000:00:02.3: PCI bridge to [bus 04]
    Feb 17 11:46:36 localhost.localdomain kernel: pcieport 0000:00:02.3: bridge window [io 0x3000-0x3fff]
    Feb 17 11:46:36 localhost.localdomain kernel: pcieport 0000:00:02.3: bridge window [mem 0xfe200000-0xfe3fffff]
    Feb 17 11:46:36 localhost.localdomain kernel: pcieport 0000:00:02.3: bridge window [mem 0x180000000-0x1807fffff 64bit pref]
    Feb 17 11:46:36 localhost.localdomain kernel: virtio-pci 0000:04:00.0: enabling device (0000 -> 0002)
    Feb 17 11:46:37 localhost.localdomain kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth1: link becomes ready

I don't believe that libvirt is doing anything different between these two cases, since I tested the same libvirt version on Fedora that I was using on rhel9. We're really just passing the page-per-vq setting along to qemu. And since the guest is the same in both cases, that seems to indicate that the issue is not with the guest driver, etc. That leaves qemu as the most likely source of the behavior difference in my mind.

Hi Jonathon, thank you for the information. I have tried it and found that it is related to the "pc-q35-rhel8.6.0" machine type. I filed a bug on qemu-kvm: Bug 2056230 - Hotplug virtio interface with page_per_vq can not be initialized properly on vm with "pc-q35-rhel8.6.0" machine type.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (new packages: libvirt), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2022:2390