Bug 2065381

Summary: Libvirt multiqueue support for vDPA [rhel-9.1.0]
Product: Red Hat Enterprise Linux 9
Component: libvirt
libvirt sub component: Networking
Version: 9.0
Status: CLOSED ERRATA
Severity: medium
Priority: medium
Keywords: FutureFeature, Triaged
Reporter: RHEL Program Management Team <pgm-rhel-tools>
Assignee: Jonathon Jongsma <jjongsma>
QA Contact: yalzhang <yalzhang>
CC: aadam, chhu, dzheng, egallen, jdenemar, jjongsma, jsuchane, lmen, lulu, lvivier, pezhang, pvlasin, virt-maint, xuzhang, yalzhang, yanqzhan, yicui
Target Milestone: rc
Hardware: Unspecified
OS: Unspecified
Type: Feature Request
Doc Type: Enhancement
Fixed In Version: libvirt-8.2.0-1.el9
Target Upstream Version: 8.2.0
Clone Of: 2024406
Bug Depends On: 2024406
Last Closed: 2022-11-15 10:03:40 UTC

Comment 1 Jiri Denemark 2022-03-18 11:50:12 UTC
Pushed upstream as

commit a5e659f071ae5f5fc9aadb46ad7c31736425f8cf
Author:     Jonathon Jongsma <jjongsma>
AuthorDate: Tue Mar 1 16:55:21 2022 -0600
Commit:     Jonathon Jongsma <jjongsma>
CommitDate: Wed Mar 9 16:23:02 2022 -0600

    qemu: support multiqueue for vdpa net device
    
    Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=2024406
    
    Signed-off-by: Jonathon Jongsma <jjongsma>
    Reviewed-by: Martin Kletzander <mkletzan>

Comment 2 yalzhang@redhat.com 2022-04-12 06:33:33 UTC
Tested with the following packages:
# rpm -q libvirt qemu-kvm kernel iproute 
libvirt-8.2.0-1.el9.x86_64
qemu-kvm-6.2.0-12.el9.x86_64
kernel-5.14.0-77.kpq0.el9_0.x86_64
iproute-5.15.0-2.2.el9_0.x86_64

Start a VM with multiqueue:
1. Create the vDPA devices with multiqueue support:
# sh ovs_init.sh  0000:5e:00.0  4
(the script includes the command "vdpa dev add name vdpa${i} mgmtdev pci/$pci_addr mac 00:11:22:33:44:${i}${i} max_vqp 8"; a sketch of such a script follows below)
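
For reference, a minimal sketch of what such an initialization script might look like. This is a reconstruction based only on the command quoted above, not the actual ovs_init.sh used in this test, and any OVS bridge setup the real script performs is omitted:

#!/bin/sh
# Hypothetical reconstruction of the vDPA device creation loop
# (not the original ovs_init.sh).
# Usage: sh ovs_init.sh <pci_addr> <device_count>
pci_addr=$1
count=$2

i=0
while [ "$i" -lt "$count" ]; do
    # Create one vDPA device with 8 virtqueue pairs; whether a single
    # management device can back multiple vDPA devices depends on the NIC.
    vdpa dev add name vdpa${i} mgmtdev pci/$pci_addr \
        mac 00:11:22:33:44:${i}${i} max_vqp 8
    i=$((i + 1))
done

# List the devices that were created
vdpa dev show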

2. Start the VM with multiqueue:
# virsh edit rhel
...
<vcpu placement='static'>4</vcpu>
...
<interface type='vdpa'>
  <mac address='00:11:22:33:44:00'/>
  <source dev='/dev/vhost-vdpa-0'/>
  <model type='virtio'/>
  <driver queues='8'/>
  <address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
</interface>

# virsh start rhel 
Domain 'rhel' started
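
The live configuration can be cross-checked from the host with virsh; an optional step, not part of the original test run:

# virsh dumpxml rhel | grep -A 6 "interface type='vdpa'"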

3. Log in to the VM and check the queues:
[root@localhost ~]# ip l show enp7s0
3: enp7s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000
    link/ether 00:11:22:33:44:00 brd ff:ff:ff:ff:ff:ff
[root@localhost ~]# ethtool -l enp7s0
Channel parameters for enp7s0:
Pre-set maximums:
RX:		n/a
TX:		n/a
Other:		n/a
Combined:	8
Current hardware settings:
RX:		n/a
TX:		n/a
Other:		n/a
Combined:	4
[root@localhost ~]# ethtool -L enp7s0 combined 6 
[root@localhost ~]# ethtool -l enp7s0
Channel parameters for enp7s0:
Pre-set maximums:
RX:		n/a
TX:		n/a
Other:		n/a
Combined:	8
Current hardware settings:
RX:		n/a
TX:		n/a
Other:		n/a
Combined:	6
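
Note: the guest virtio-net driver typically brings up one combined channel per online vCPU (4 here, matching the 4 vCPUs configured above), while the pre-set maximum of 8 reflects the max_vqp 8 used when the device was created. As a supplementary host-side cross-check, not part of the original test output, the device can also be inspected with the iproute vdpa tool (exact fields vary by iproute version):

# vdpa dev show vdpa0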

4. On the host, configure the IP address on enp94s0f0np0_br:
# ifconfig enp94s0f0np0_br 100.100.100.100/24
On the guest, configure the interface's IP as 100.100.100.50/24.
On the host, ping the guest:
# ping 100.100.100.50 -c 3
PING 100.100.100.50 (100.100.100.50) 56(84) bytes of data.
64 bytes from 100.100.100.50: icmp_seq=1 ttl=64 time=0.228 ms
64 bytes from 100.100.100.50: icmp_seq=2 ttl=64 time=0.272 ms
......
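
To confirm that traffic is actually distributed across multiple queues, the per-queue virtio interrupt counters can be watched inside the guest; this is a supplementary check, not part of the original test:

[root@localhost ~]# grep virtio /proc/interrupts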

Hotplug a vDPA interface with multiqueue:
1. Start the VM without a vDPA interface.
2. Hotplug the vDPA interface with multiqueue:
# cat vdpa_interface.xml 
<interface type='vdpa'>
  <mac address='00:11:22:33:44:00'/>
  <source dev='/dev/vhost-vdpa-0'/>
  <model type='virtio'/>
  <driver queues='8'/>
</interface>
# virsh attach-device rhel vdpa_interface.xml 
Device attached successfully

Log in to the VM and check:
[root@localhost ~]# ip l show enp7s0
3: enp7s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000
    link/ether 00:11:22:33:44:00 brd ff:ff:ff:ff:ff:ff

[root@localhost ~]# ethtool -l enp7s0
Channel parameters for enp7s0:
Pre-set maximums:
RX:		n/a
TX:		n/a
Other:		n/a
Combined:	8
Current hardware settings:
RX:		n/a
TX:		n/a
Other:		n/a
Combined:	4
[root@localhost ~]# ethtool -L enp7s0  combined 8 
[root@localhost ~]# ethtool -l enp7s0
Channel parameters for enp7s0:
Pre-set maximums:
RX:		n/a
TX:		n/a
Other:		n/a
Combined:	8
Current hardware settings:
RX:		n/a
TX:		n/a
Other:		n/a
Combined:	8

[root@localhost ~]# ping 100.100.100.100
PING 100.100.100.100 (100.100.100.100) 56(84) bytes of data.
64 bytes from 100.100.100.100: icmp_seq=1 ttl=64 time=25.1 ms

--- 100.100.100.100 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 25.120/25.120/25.120/0.000 ms
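
For completeness, the hot-unplug path can be exercised with the same XML file; a natural follow-on step that was not part of the original test run:

# virsh detach-device rhel vdpa_interface.xml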

Comment 5 yalzhang@redhat.com 2022-04-19 08:31:04 UTC
Retested the scenarios in comment 2; the results are as expected.

Comment 7 errata-xmlrpc 2022-11-15 10:03:40 UTC
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (Low: libvirt security, bug fix, and enhancement update), and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2022:8003