Bug 1396578 - RFE: Backport virtio-net multi-queue enablement by default patch
Summary: RFE: Backport virtio-net multi-queue enablement by default patch
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: kernel
Version: 7.3
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: rc
Target Release: ---
Assignee: Maxime Coquelin
QA Contact: xiywang
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2016-11-18 16:24 UTC by Marko Myllynen
Modified: 2021-09-09 12:00 UTC
CC: 14 users

Fixed In Version: kernel-3.10.0-568.el7
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2017-08-02 04:33:28 UTC
Target Upstream Version:
Embargoed:


Attachments


Links
System: Red Hat Product Errata
ID: RHSA-2017:1842
Priority: normal
Status: SHIPPED_LIVE
Summary: Important: kernel security, bug fix, and enhancement update
Last Updated: 2017-08-01 18:22:09 UTC

Description Marko Myllynen 2016-11-18 16:24:07 UTC
Description of problem:
On OpenStack / OVS-DPDK setups, virtio-net multi-queue is needed for scalability; see for example [1][2][3]. Neil Horman points out that instead of requiring queues to be configured manually with ethtool(8), the kernel could set this up by default:

https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/commit/?id=0f13b66b01c6e2ec4913a7812414183844d1cc4f

Please backport it to allow automated virtio-net multi-queue setup for scalability and to remove the need for mandatory user-space configuration; the manual step it replaces is illustrated after the references below.

Thanks.

1) http://verticalindustriesblog.redhat.com/scaling-virtual-machine-network-performance-for-network-intensive-workloads/
2) https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/Virtualization_Tuning_and_Optimization_Guide/sect-Virtualization_Tuning_Optimization_Guide-Networking-Techniques.html
3) https://specs.openstack.org/openstack/nova-specs/specs/liberty/implemented/libvirt-virtiomq.html
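
For reference, the manual per-guest step that this backport would make unnecessary is a one-time ethtool(8) invocation such as the following (the queue count of 4 is illustrative):

# ethtool -L eth0 combined 4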

Version-Release number of selected component (if applicable):
kernel-3.10.0-514.el7

Comment 3 jason wang 2016-11-24 14:51:37 UTC
(In reply to Marko Myllynen from comment #0)
> Description of problem:
> On OpenStack / OVS-DPDK setups virtio-net multi-queue is needed for
> scalability, see for example [1][2][3]. Neil Horman points out that instead
> of manually configuring queues with ethtool(8) kernel could set this up by
> default:
> 
> https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/commit/
> ?id=0f13b66b01c6e2ec4913a7812414183844d1cc4f
> 
> Please backport to allow automated virtio-net multi-queue setup for
> scalability and to remove the need for the mandatory user-space
> configuration by default.
> 

I'm afraid this won't work. I will draft a patch to enable multiqueue by default upstream (the patch is just about as simple as this one).

Thanks

Comment 4 Amnon Ilan 2016-11-24 17:26:01 UTC
Note that it does not make sense to carry what is configured in QEMU
automatically into the guest, since the QEMU config defines the maximum
value. For example, if QEMU is configured for 64 queues, is that a
reasonable default for the guest?
I would set the default number in the guest to the smaller of
{qemu#, 4}.
The guest admin can change it later, up to the maximum.
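
For instance, under that policy a guest attached to a netdev configured with queues=64 would come up with min(64, 4) = 4 combined channels, and the guest admin could later raise that toward the maximum with something like (illustrative value):

# ethtool -L eth0 combined 64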

Comment 7 jason wang 2016-11-29 06:49:14 UTC
David Miller has applied the patch upstream:

https://git.kernel.org/cgit/linux/kernel/git/davem/net-next.git/commit/?id=4490001029012539937ff02778fe6180613fa949

It tries to enable as many queues as there are vCPUs (capped at the device's configured maximum).
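
In practice, a guest booted with, say, -smp 4 and a netdev with queues=8 should now come up with 4 combined channels with no manual ethtool step; a quick check (output abbreviated, full verification in comment 13 below):

# ethtool -l eth0
Channel parameters for eth0:
...
Current hardware settings:
...
Combined:	4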

Comment 9 Rafael Aquini 2017-02-17 13:16:32 UTC
Patch(es) committed to the kernel repository; an interim kernel build is undergoing testing

Comment 11 Rafael Aquini 2017-02-20 18:15:25 UTC
Patch(es) available on kernel-3.10.0-568.el7
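
Before re-testing, it is worth confirming that the guest actually runs a build at or later than that version (illustrative output):

# uname -r
3.10.0-568.el7.x86_64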

Comment 13 xiywang 2017-03-10 09:26:55 UTC
1. # /usr/libexec/qemu-kvm \
-name rhel7.4 -cpu IvyBridge -m 4096 -realtime mlock=off -smp 4 \
-drive file=/home/rhel7.4.qcow2,if=none,id=drive-virtio-disk0,format=qcow2,snapshot=off -device virtio-blk-pci,drive=drive-virtio-disk0,id=virtio-disk0 \
-netdev tap,id=hostnet0,vhost=on,script=/etc/ovs-ifup,downscript=/etc/ovs-ifdown,queues=8 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:00:a1:d0:5f,vectors=18,mq=on \
-monitor stdio -device qxl-vga,id=video0 -serial unix:/tmp/console,server,nowait -vnc :1 -spice port=5900,disable-ticketing

2. in guest
# ethtool -l eth0
Channel parameters for eth0:
Pre-set maximums:
RX:		0
TX:		0
Other:		0
Combined:	8
Current hardware settings:
RX:		0
TX:		0
Other:		0
Combined:	4

3. # ethtool -L eth0 combined 8

4. # ethtool -l eth0
Channel parameters for eth0:
Pre-set maximums:
RX:		0
TX:		0
Other:		0
Combined:	8
Current hardware settings:
RX:		0
TX:		0
Other:		0
Combined:	8

5. validation test (out-of-range values are rejected)
# ethtool -L eth0 combined 9
Cannot set device channel parameters: Invalid argument
# ethtool -L eth0 combined -2
no channel parameters changed, aborting
current values: tx 0 rx 0 other 0 combined 8
# ethtool -l eth0
Channel parameters for eth0:
Pre-set maximums:
RX:		0
TX:		0
Other:		0
Combined:	8
Current hardware settings:
RX:		0
TX:		0
Other:		0
Combined:	8

Comment 14 xiywang 2017-03-10 09:30:52 UTC
host kernel:
3.10.0-588.el7.x86_64
host qemu:
qemu-kvm-rhev-2.8.0-5.el7.x86_64

guest kernel:
3.10.0-598.el7.x86_64

Comment 15 xiywang 2017-03-10 09:38:28 UTC
-smp 4
-netdev tap,id=hostnet0,vhost=on,script=/etc/ovs-ifup,downscript=/etc/ovs-ifdown,queues=4
-device virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:00:a1:d0:5f,vectors=10,mq=on

# ethtool -l eth0
Channel parameters for eth0:
Pre-set maximums:
RX:		0
TX:		0
Other:		0
Combined:	4
Current hardware settings:
RX:		0
TX:		0
Other:		0
Combined:	4

# ethtool -L eth0 combined 2
# ethtool -l eth0
Channel parameters for eth0:
Pre-set maximums:
RX:		0
TX:		0
Other:		0
Combined:	4
Current hardware settings:
RX:		0
TX:		0
Other:		0
Combined:	2

Comment 16 xiywang 2017-03-10 09:43:35 UTC
-smp 4 
-netdev tap,id=hostnet0,vhost=on,script=/etc/ovs-ifup,downscript=/etc/ovs-ifdown,queues=2 
-device virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:00:a1:d0:5f,vectors=6,mq=on

# ethtool -l eth0
Channel parameters for eth0:
Pre-set maximums:
RX:		0
TX:		0
Other:		0
Combined:	2
Current hardware settings:
RX:		0
TX:		0
Other:		0
Combined:	2

# ethtool -L eth0 combined 4
Cannot set device channel parameters: Invalid argument

# ethtool -L eth0 combined 1
# ethtool -l eth0
Channel parameters for eth0:
Pre-set maximums:
RX:		0
TX:		0
Other:		0
Combined:	2
Current hardware settings:
RX:		0
TX:		0
Other:		0
Combined:	1

# ethtool -L eth0 combined -2
no channel parameters changed, aborting
current values: tx 0 rx 0 other 0 combined 1
# ethtool -l eth0
Channel parameters for eth0:
Pre-set maximums:
RX:		0
TX:		0
Other:		0
Combined:	2
Current hardware settings:
RX:		0
TX:		0
Other:		0
Combined:	1

Comment 17 xiywang 2017-05-08 02:36:54 UTC
Verified on 
host&guest 3.10.0-663.el7.x86_64
qemu-kvm-rhev-2.9.0-2.el7.x86_64

Comment 18 xiywang 2017-06-05 08:02:13 UTC
Hi Maxime,

Is the patch also expected to work with macvtap backend?
I tested on macvtap backend but mq is not enabled automatically.

1. boot a guest
/usr/libexec/qemu-kvm -name rhel7.4 -sandbox on -cpu Opteron_G5 -m 4096 -realtime mlock=off -smp 5 \
-drive file=/home/rhel7.4.qcow2,if=none,id=drive-virtio-disk0,format=qcow2,snapshot=off -device virtio-blk-pci,drive=drive-virtio-disk0,id=virtio-disk0 \
-netdev tap,id=hostnet0,vhost=on,fd=1024 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=5e:af:75:e4:97:29,vectors=10,mq=on 1024<>/dev/tap12 \
-monitor stdio -device qxl-vga,id=video0 -serial unix:/tmp/console,server,nowait -vnc :1 -spice port=5900,disable-ticketing

2. check mq in guest
# ethtool -l eth0
Channel parameters for eth0:
Pre-set maximums:
RX:		0
TX:		0
Other:		0
Combined:	1
Current hardware settings:
RX:		0
TX:		0
Other:		0
Combined:	1

Comment 19 Maxime Coquelin 2017-07-13 09:56:55 UTC
Hi Xiyue,

Just tested it with macvtap and it seems to work for me.
In my case, I run mainline kernel on host because I cannot reboot the machine right now.

Host kernel version:  4.11.0-rc2
Guest kernel version: 3.10.0-685.el7.x86_64
Qemu version: qemu-kvm-rhev-2.9.0-12.el7

Below, I set a maximum of 4 queue pairs, and set 3 vCPUs, so the default number of queues is set to 3 at guest boot time:

net-xml:
# virsh net-dumpxml macvtap-net
<network connections='1'>
  <name>macvtap-net</name>
  <uuid>e7e6343f-0dc9-4e4a-a8d0-65e42b39870d</uuid>
  <forward dev='em1' mode='bridge'>
    <interface dev='em1' connections='1'/>
  </forward>
</network>

domain-xml:
<domain type='kvm'>
...
  <vcpu placement='static'>3</vcpu>
  <cputune>
    <vcpupin vcpu='0' cpuset='1'/>
    <vcpupin vcpu='1' cpuset='6'/>
    <vcpupin vcpu='2' cpuset='7'/>
    <emulatorpin cpuset='0'/>
  </cputune>
...
<devices>
    <interface type='network'>
      <mac address='52:54:00:58:d7:bd'/>
      <source network='macvtap-net'/>
      <model type='virtio'/>
      <driver name='vhost' queues='4'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </interface>
  </devices>
</domain>

On guest side:

# ip addr
...
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
    link/ether 52:54:00:58:d7:bd brd ff:ff:ff:ff:ff:ff
    inet 10.19.159.3/21 brd 10.19.159.255 scope global dynamic eth0
       valid_lft 85882sec preferred_lft 85882sec
    inet6 2620:52:0:1398:5054:ff:fe58:d7bd/64 scope global noprefixroute dynamic 
       valid_lft 2591983sec preferred_lft 604783sec
    inet6 fe80::5054:ff:fe58:d7bd/64 scope link 
       valid_lft forever preferred_lft forever

# ethtool -l eth0
Channel parameters for eth0:
Pre-set maximums:
RX:		0
TX:		0
Other:		0
Combined:	4
Current hardware settings:
RX:		0
TX:		0
Other:		0
Combined:	3

Comment 20 Pei Zhang 2017-07-17 07:48:39 UTC
(In reply to xiywang from comment #18)
> Hi Maxime,
> 
> Is the patch also expected to work with macvtap backend?
> I tested on macvtap backend but mq is not enabled automatically.
> 
> 1. boot a guest
> /usr/libexec/qemu-kvm -name rhel7.4 -sandbox on -cpu Opteron_G5 -m 4096
> -realtime mlock=off -smp 5 \
> -drive
> file=/home/rhel7.4.qcow2,if=none,id=drive-virtio-disk0,format=qcow2,
> snapshot=off -device virtio-blk-pci,drive=drive-virtio-disk0,id=virtio-disk0
> \
> -netdev tap,id=hostnet0,vhost=on,fd=1024 -device

Seems the 'queues=' option is missing.


Best Regards,
Pei
> virtio-net-pci,netdev=hostnet0,id=net0,mac=5e:af:75:e4:97:29,vectors=10,
> mq=on 1024<>/dev/tap12 \
> -monitor stdio -device qxl-vga,id=video0 -serial
> unix:/tmp/console,server,nowait -vnc :1 -spice port=5900,disable-ticketing
> 
> 2. check mq in guest
> # ethtool -l eth0
> Channel parameters for eth0:
> Pre-set maximums:
> RX:		0
> TX:		0
> Other:		0
> Combined:	1
> Current hardware settings:
> RX:		0
> TX:		0
> Other:		0
> Combined:	1

Comment 21 xiywang 2017-07-17 09:22:02 UTC
(In reply to Pei Zhang from comment #20)
> (In reply to xiywang from comment #18)
> > Hi Maxime,
> > 
> > Is the patch also expected to work with macvtap backend?
> > I tested on macvtap backend but mq is not enabled automatically.
> > 
> > 1. boot a guest
> > /usr/libexec/qemu-kvm -name rhel7.4 -sandbox on -cpu Opteron_G5 -m 4096
> > -realtime mlock=off -smp 5 \
> > -drive
> > file=/home/rhel7.4.qcow2,if=none,id=drive-virtio-disk0,format=qcow2,
> > snapshot=off -device virtio-blk-pci,drive=drive-virtio-disk0,id=virtio-disk0
> > \
> > -netdev tap,id=hostnet0,vhost=on,fd=1024 -device
> 
> Seems 'queues=' otpion is missed.
> 
> 
> Best Regards,
> Pei
> > virtio-net-pci,netdev=hostnet0,id=net0,mac=5e:af:75:e4:97:29,vectors=10,
> > mq=on 1024<>/dev/tap12 \
> > -monitor stdio -device qxl-vga,id=video0 -serial
> > unix:/tmp/console,server,nowait -vnc :1 -spice port=5900,disable-ticketing
> > 
> > 2. check mq in guest
> > # ethtool -l eth0
> > Channel parameters for eth0:
> > Pre-set maximums:
> > RX:		0
> > TX:		0
> > Other:		0
> > Combined:	1
> > Current hardware settings:
> > RX:		0
> > TX:		0
> > Other:		0
> > Combined:	1

The 'queues=' option cannot be used together with 'fd=':
qemu-kvm: -netdev tap,id=hostnet0,vhost=on,fd=9,queues=4: ifname=, script=, downscript=, vnet_hdr=, helper=, queues=, fds=, and vhostfds= are invalid with fd=

Comment 23 xiywang 2017-07-18 02:11:36 UTC
Test passed with the macvtap backend.

1. boot a guest with -smp 4, 2 queues (via fds=10:11), vectors=6
/usr/libexec/qemu-kvm -name rhel7.4 -sandbox on -cpu Opteron_G5 -m 4096 -realtime mlock=off -smp 4 \
-drive file=/home/rhel74-64-virtio.qcow2,if=none,id=drive-virtio-disk0,format=qcow2,snapshot=off -device virtio-blk-pci,drive=drive-virtio-disk0,id=virtio-disk0 \
-netdev tap,id=hostnet0,vhost=on,fds=10:11 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=fa:a2:3d:50:b7:ca,vectors=6,mq=on 10<>/dev/tap28 11<>/dev/tap28 \
-monitor stdio -device qxl-vga,id=video0 -serial unix:/tmp/console,server,nowait -vnc :1 -spice port=5900,disable-ticketing

2. get ethtool status in guest
# ethtool -l eth0
Channel parameters for eth0:
Pre-set maximums:
RX:		0
TX:		0
Other:		0
Combined:	2
Current hardware settings:
RX:		0
TX:		0
Other:		0
Combined:	2

3. change mq in guest
# ethtool -L eth0 combined 1
# ethtool -l eth0
Channel parameters for eth0:
Pre-set maximums:
RX:		0
TX:		0
Other:		0
Combined:	2
Current hardware settings:
RX:		0
TX:		0
Other:		0
Combined:	1

Comment 24 xiywang 2017-07-18 02:18:45 UTC
Hi Maxime,
Sorry for my carelessness.
Last time I forgot to use fds=xx:xx to enable multiqueue when I tested with the macvtap backend.
After using fds= I can get multiqueue enabled by default in the guest.
Thanks a lot.

Comment 25 errata-xmlrpc 2017-08-02 04:33:28 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2017:1842

