Note: This bug is displayed in read-only format because the product is no longer active in Red Hat Bugzilla.

Bug 1861436

Summary: packed=on: guest fails to receive packets with "network" interface type and 'qemu' driver
Product: Red Hat Enterprise Linux 9
Reporter: Pei Zhang <pezhang>
Component: qemu-kvm
Assignee: lulu <lulu>
qemu-kvm sub component: Networking
QA Contact: Pei Zhang <pezhang>
Status: CLOSED CURRENTRELEASE
Docs Contact:
Severity: medium
Priority: medium
CC: aadam, ailan, chayang, eperezma, jasowang, jinzhao, juzhang, leiyang, virt-maint
Version: unspecified
Keywords: Triaged
Target Milestone: rc
Flags: pm-rhel: mirror+
Target Release: ---
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of: 1861434
Environment:
Last Closed: 2021-12-17 05:32:19 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On: 1861434
Bug Blocks: 1897024

Description Pei Zhang 2020-07-28 15:40:21 UTC
+++ This bug was initially created as a clone of Bug #1861434 +++

Description of problem:
Boot a VM with virtio-net-pci using the "network" interface type and the 'qemu' driver, and enable packed=on. The guest will fail to receive packets.

Version-Release number of selected component (if applicable):
qemu-kvm-4.2.0-31.module+el8.3.0+7437+4bb96e0d.x86_64.rpm
4.18.0-227.el8.x86_64
dpdk-20.05.tar.xz

How reproducible:
100%

Steps to Reproduce:
1. Boot a VM with virtio-net-pci using the "network" interface type and the 'qemu' driver, and enable packed=on.

<domain type="kvm" xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
...
  <devices>
  ...
    <interface type="bridge">
      <mac address="88:66:da:5f:dd:01" />
      <source bridge="switch" />
      <model type="virtio" />
      <address bus="0x01" domain="0x0000" function="0x0" slot="0x00" type="pci" />
    </interface>
    <interface type="network">
      <mac address="28:66:da:5f:ee:01" />
      <source network="default" />
      <model type="virtio" />
      <driver name='qemu' />
      <address bus="0x06" domain="0x0000" function="0x0" slot="0x00" type="pci" />
    </interface>
    <interface type="network">
      <mac address="28:66:da:5f:ee:02" />
      <source network="default" />
      <model type="virtio" />
      <driver name='qemu' />
      <address bus="0x07" domain="0x0000" function="0x0" slot="0x00" type="pci" />
    </interface>
  ...
  </devices>
  <qemu:commandline>
    <qemu:arg value='-set'/>
    <qemu:arg value='device.net1.packed=on'/>
    <qemu:arg value='-set'/>
    <qemu:arg value='device.net2.packed=on'/>
  </qemu:commandline>
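Whether the packed ring was actually negotiated can be double-checked from inside the guest via sysfs. This is a sketch, not part of the original report: it assumes the guest kernel exposes negotiated feature bits in `/sys/bus/virtio/devices/virtioN/features` as a string of '0'/'1' characters starting at bit 0, where VIRTIO_F_RING_PACKED is feature bit 34 (the 35th character).

```shell
# Sketch (assumption, not from the report): extract the value of
# VIRTIO_F_RING_PACKED (feature bit 34 = 35th character) from a
# virtio "features" bit string.
packed_bit() {
    printf '%s' "$1" | cut -c35
}

# Walk the guest's virtio devices and report the packed-ring state.
for dev in /sys/bus/virtio/devices/virtio*; do
    [ -e "$dev" ] || continue   # skip silently if no virtio devices exist
    bit=$(packed_bit "$(cat "$dev/features")")
    [ "$bit" = "1" ] && state="on" || state="off"
    echo "$dev: packed=$state"
done
```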

2. In both host and guest, compile upstream DPDK, which enables the eth_af_packet device:

# tar -xvf dpdk-20.05.tar.xz 
# cd dpdk-20.05
# make config T=x86_64-native-linux-gcc
# cd build
# make
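The `--socket-mem 1024` option used by testpmd below requires hugepages to be reserved beforehand; the report does not show that step, so the following is an assumed, typical setup with the default 2 MB hugepage size.

```shell
# How many 2 MB hugepages are needed for testpmd's --socket-mem 1024
# (i.e. 1024 MB)? Values here are illustrative, not from the report.
PAGE_SZ_KB=2048
MEM_MB=1024
PAGES=$(( MEM_MB * 1024 / PAGE_SZ_KB ))
echo "need $PAGES hugepages for ${MEM_MB} MB"

# Typical reservation and mount (run as root; commented out here):
# echo $PAGES > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
# mkdir -p /dev/hugepages
# mount -t hugetlbfs nodev /dev/hugepages
```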

3. On the host, start DPDK's testpmd in txonly mode to generate packets:

# ./dpdk-20.05/build/app/testpmd \
	-l 2,4 \
	--socket-mem 1024 \
	--vdev=eth_af_packet0,iface=vnet1 \
	--file-prefix tx  \
	-- \
	--forward-mode=txonly --stats-period 1 -a

4. In the guest, start DPDK's testpmd in rxonly mode to receive packets. It fails to receive them:

# ./dpdk-20.05/build/app/testpmd \
	-l 4,5 \
	--socket-mem 1024 \
	--vdev=eth_af_packet0,iface=enp6s0 \
	--file-prefix rx  \
	-- \
	--forward-mode=rxonly --stats-period 1 -a


Actual results:
The guest fails to receive packets.

Expected results:
The guest receives packets normally.

Additional info:
1. The guest can send packets normally; only receiving is affected.

2. Without packed=on, the guest receives packets normally.

Comment 1 jason wang 2020-07-31 07:15:21 UTC
Hi:

Have you used vhost-net or not? Is this a regression?

Reducing the priority and severity, since vhost-net doesn't support packed virtqueue.

Thanks

Comment 2 Eugenio Pérez Martín 2020-07-31 07:22:39 UTC
Hi Jason.

The behavior is observed on the virtual NICs with `driver name='qemu'`, so it is using qemu's implementation. The bug description could be misleading because it includes a vhost-net interface, but that one was used for management, not testing.

I was able to reproduce this starting from the first qemu commit introducing packed vq, with DPDK's testpmd using the AF_PACKET vdev (as in the bz description).

If the traffic rate is moderate, the device is able to exchange packets and download big files with no issues. I didn't investigate further to pin down the bug.

Comment 3 jason wang 2020-08-04 05:44:24 UTC
(In reply to Eugenio Pérez Martín from comment #2)
> Hi Jason.
> 
> The behavior is observed on the virtual NICs with `driver name='qemu'`, so
> it is using qemu's implementation. The bug description could be misleading
> because it includes a vhost-net interface, but that one was used for
> management, not testing.
> 
> I was able to reproduce this starting from the first qemu commit
> introducing packed vq, with DPDK's testpmd using the AF_PACKET vdev (as in
> the bz description).
> 
> If the traffic rate is moderate, the device is able to exchange packets
> and download big files with no issues. I didn't investigate further to pin
> down the bug.

I see, will try to reproduce.

Anyway, move to 8.4 first since vhost-net doesn't support packed vq right now.

Thanks

Comment 4 Pei Zhang 2020-08-10 08:42:39 UTC
Eugenio has provided the info in Comment 2, so I'm removing the needinfo+ from me.

Best regards,

Pei

Comment 11 John Ferlan 2021-09-09 12:37:20 UTC
*** Bug 1861434 has been marked as a duplicate of this bug. ***

Comment 12 John Ferlan 2021-09-09 12:39:48 UTC
Bulk update: Move RHEL8 bugs to RHEL9. If necessary to resolve in RHEL8, then clone to the current RHEL8 release.

Comment 13 jason wang 2021-12-07 06:23:03 UTC
Please try upstream QEMU. We got several fixes for packed virtqueue upstream recently.

Thanks

Comment 14 Pei Zhang 2021-12-17 05:32:19 UTC
Thanks for the info, Jason. This issue is gone with qemu-kvm-6.2.0-1.el9.x86_64.

Following the steps in the Description, the VM receives packets correctly:

Host testpmd:

  +++++++++++++++ Accumulated forward statistics for all ports+++++++++++++++
  RX-packets: 0              RX-dropped: 0             RX-total: 0
  TX-packets: 524169021      TX-dropped: 9839043       TX-total: 534008064
  ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++


VM testpmd:

  ---------------------- Forward statistics for port 0  ----------------------
  RX-packets: 44367445       RX-dropped: 0             RX-total: 44367445
  TX-packets: 0              TX-dropped: 0             TX-total: 0
  ----------------------------------------------------------------------------

  +++++++++++++++ Accumulated forward statistics for all ports+++++++++++++++
  RX-packets: 44367445       RX-dropped: 0             RX-total: 44367445
  TX-packets: 0              TX-dropped: 0             TX-total: 0
  ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++


Note:
1. There is no need to pass the packed option via "qemu:commandline" with libvirt: libvirt already supports packed='on' in the driver element, as shown below.

    <interface type='network'>
      <mac address='18:66:da:5f:dd:02'/>
      <source network='default' portid='ba4f8dc9-c215-4e99-b1d9-599d40a08eaf' bridge='virbr0'/>
      <target dev='vnet16'/>
      <model type='virtio'/>
      <driver name='qemu' packed='on'/>
      <alias name='net1'/>
      <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
    </interface>
    <interface type='network'>
      <mac address='18:66:da:5f:dd:03'/>
      <source network='default' portid='3172fab0-c746-4160-8ad7-d0020f3c778e' bridge='virbr0'/>
      <target dev='vnet17'/>
      <model type='virtio'/>
      <driver name='qemu' packed='on'/>
      <alias name='net2'/>
      <address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
    </interface>
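The attribute can be confirmed on a running domain by grepping its live XML. A minimal sketch, not from the original report; the domain name in the usage example is hypothetical.

```shell
# Sketch: count the qemu-driver interfaces with the packed ring enabled
# in domain XML read from stdin.
count_packed_ifaces() {
    grep -c "packed='on'"
}

# Usage on the host (hypothetical domain name):
#   virsh dumpxml rhel9-vm | count_packed_ifaces
```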


So close this bug as CurrentRelease.