Bug 1844468 - packed=on: rebooting a guest booted with vhost-user and vIOMMU hangs the guest when there is packet flow
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: Red Hat Enterprise Linux Advanced Virtualization
Classification: Red Hat
Component: qemu-kvm
Version: 8.3
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: rc
Target Release: 8.3
Assignee: Eugenio Pérez Martín
QA Contact: Pei Zhang
URL:
Whiteboard:
Depends On:
Blocks: 1852906 1897025
 
Reported: 2020-06-05 13:35 UTC by Pei Zhang
Modified: 2021-02-01 10:43 UTC
CC List: 6 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Clones: 1852906 (view as bug list)
Environment:
Last Closed: 2021-02-01 10:41:27 UTC
Type: Bug
Target Upstream Version:
Embargoed:


Attachments
Full XML (5.16 KB, application/xml)
2020-06-05 13:35 UTC, Pei Zhang

Description Pei Zhang 2020-06-05 13:35:06 UTC
Created attachment 1695429 [details]
Full XML

Description of problem:
Boot a guest with vhost-user, vIOMMU and packed=on. Then send MoonGen packets from another server; the guest receives them. Rebooting the guest causes it to hang.

Version-Release number of selected component (if applicable):
4.18.0-211.el8.x86_64
qemu-kvm-5.0.0-0.scrmod+el8.3.0+6893+614302c0.wrb200603.x86_64
libvirt-6.4.0-1.scrmod+el8.3.0+6893+614302c0.x86_64
openvswitch2.13-2.13.0-35.el8fdp.x86_64

How reproducible:
100%

Steps to Reproduce:

1. Start OVS with vhostuserclient ports (a sketch of the setup commands follows the output below)
# ovs-vsctl show
39b1fab4-13cf-4f16-ad64-b2292c79adcb
    Bridge ovsbr1
        datapath_type: netdev
        Port dpdk1
            Interface dpdk1
                type: dpdk
                options: {dpdk-devargs="0000:5e:00.1", n_rxq="1"}
        Port ovsbr1
            Interface ovsbr1
                type: internal
        Port vhost-user1
            Interface vhost-user1
                type: dpdkvhostuserclient
                options: {vhost-server-path="/tmp/vhostuser1.sock"}
    Bridge ovsbr0
        datapath_type: netdev
        Port ovsbr0
            Interface ovsbr0
                type: internal
        Port dpdk0
            Interface dpdk0
                type: dpdk
                options: {dpdk-devargs="0000:5e:00.0", n_rxq="1"}
        Port vhost-user0
            Interface vhost-user0
                type: dpdkvhostuserclient
                options: {vhost-server-path="/tmp/vhostuser0.sock"}
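
For reference, a minimal sketch of the ovs-vsctl commands that would build the ovsbr0 half of this layout (ovsbr1 is analogous, with dpdk1 and /tmp/vhostuser1.sock); the PCI address and socket path are taken from the output above:

# ovs-vsctl add-br ovsbr0 -- set bridge ovsbr0 datapath_type=netdev
# ovs-vsctl add-port ovsbr0 dpdk0 -- set Interface dpdk0 type=dpdk options:dpdk-devargs=0000:5e:00.0 options:n_rxq=1
# ovs-vsctl add-port ovsbr0 vhost-user0 -- set Interface vhost-user0 type=dpdkvhostuserclient options:vhost-server-path=/tmp/vhostuser0.sock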

2. Boot the guest with vhost-user, enabling packed=on and vIOMMU. The full XML is attached; a sketch of the resulting QEMU arguments follows the XML snippet below.
  <devices>
    <interface type='bridge'>
      <mac address='88:66:da:5f:dd:01'/>
      <source bridge='switch'/>
      <model type='virtio'/>
      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
    </interface>
    <interface type='vhostuser'>
      <mac address='18:66:da:5f:dd:22'/>
      <source type='unix' path='/tmp/vhostuser0.sock' mode='server'/>
      <model type='virtio'/>
      <driver name='vhost' rx_queue_size='1024' iommu='on' ats='on'/>
      <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
    </interface>
    <interface type='vhostuser'>
      <mac address='18:66:da:5f:dd:23'/>
      <source type='unix' path='/tmp/vhostuser1.sock' mode='server'/>
      <model type='virtio'/>
      <driver name='vhost' rx_queue_size='1024' iommu='on' ats='on'/>
      <address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
    </interface>
    <iommu model='intel'>
      <driver intremap='on' caching_mode='on' iotlb='on'/>
    </iommu>
  </devices>
  <qemu:commandline>
    <qemu:arg value='-set'/>
    <qemu:arg value='device.net1.packed=on'/>
    <qemu:arg value='-set'/>
    <qemu:arg value='device.net2.packed=on'/>
  </qemu:commandline>
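
The -set arguments above flip packed=on on the virtio-net devices libvirt defines; net1 and net2 are the aliases libvirt assigns to the two vhostuser interfaces. As a rough sketch only (the chardev/netdev ids below follow libvirt's usual naming and are not copied from the running domain), the first vhost-user interface ends up on the QEMU command line along these lines:

-chardev socket,id=charnet1,path=/tmp/vhostuser0.sock,server=on,wait=off \
-netdev vhost-user,chardev=charnet1,id=hostnet1 \
-device virtio-net-pci,netdev=hostnet1,id=net1,mac=18:66:da:5f:dd:22,rx_queue_size=1024,iommu_platform=on,ats=on,packed=on,bus=pci.6,addr=0x0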


3. On another host, start MoonGen to generate a packet flow toward the guest

# ./build/MoonGen examples/l2-load-latency.lua 0 1 640

4. In the guest, we can observe the packet counters of the vhost-user NICs increasing

== results at timestamp 1:

# ifconfig
...
enp6s0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet6 fe80::2a56:a65d:9906:c56f  prefixlen 64  scopeid 0x20<link>
        ether 28:66:da:5f:dd:22  txqueuelen 1000  (Ethernet)
        RX packets 1763373  bytes 105803028 (100.9 MiB)
        RX errors 0  dropped 1763373  overruns 0  frame 3
        TX packets 26  bytes 3788 (3.6 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

enp7s0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet6 fe80::876a:9327:63bb:812f  prefixlen 64  scopeid 0x20<link>
        ether 28:66:da:5f:dd:23  txqueuelen 1000  (Ethernet)
        RX packets 1721804  bytes 103311192 (98.5 MiB)
        RX errors 0  dropped 1721804  overruns 0  frame 2
        TX packets 26  bytes 3788 (3.6 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

== results at timestamp 2:

# ifconfig
...
enp6s0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet6 fe80::2a56:a65d:9906:c56f  prefixlen 64  scopeid 0x20<link>
        ether 28:66:da:5f:dd:22  txqueuelen 1000  (Ethernet)
        RX packets 6071299  bytes 364278588 (347.4 MiB)
        RX errors 0  dropped 6071299  overruns 0  frame 3
        TX packets 28  bytes 4168 (4.0 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

enp7s0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet6 fe80::876a:9327:63bb:812f  prefixlen 64  scopeid 0x20<link>
        ether 28:66:da:5f:dd:23  txqueuelen 1000  (Ethernet)
        RX packets 5929404  bytes 355767192 (339.2 MiB)
        RX errors 0  dropped 5929404  overruns 0  frame 2
        TX packets 28  bytes 4168 (4.0 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0


5. Reboot the guest; the guest hangs.
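
When it hangs like this, the guest state can still be inspected from the host via the QEMU monitor, for example (the domain name rhel8 here is hypothetical):

# virsh qemu-monitor-command rhel8 --hmp 'info status'
# virsh qemu-monitor-command rhel8 --hmp 'info registers'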

Actual results:
The guest hangs.

Expected results:
The guest should reboot and keep running normally.

Additional info:
1. Without packed=on, this issue is gone. Everything works well. (A quick way to verify which features the guest actually negotiated is sketched after this list.)

vhost-user + vIOMMU, no packed=on    Works well

2. Without vIOMMU, this issue is gone. Everything works well.

vhost-user + packed=on, no vIOMMU    Works well

3. I understand vhost-user + vIOMMU + guest virtio-net kernel driver is not a configuration recommended to customers (Maxime explained this situation in 1572879#c13), because it has lower performance. However, a guest hang is still a problem.
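
As the side check referenced in item 1: the guest kernel exposes the negotiated virtio feature bits as a 0/1 string in sysfs, and VIRTIO_F_RING_PACKED is bit 34, i.e. character 35 of that string. A minimal sketch, assuming the NIC is virtio1 (the index varies per guest):

# prints 1 if VIRTIO_F_RING_PACKED was negotiated on this device
# cut -c35 /sys/bus/virtio/devices/virtio1/features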

Comment 1 Pei Zhang 2020-06-05 13:38:30 UTC
This bug was found when handling Bug 1601355.

Comment 2 Pei Zhang 2020-06-05 13:46:24 UTC
Additional info(continued):

4. When there is no packet flow, the issue is gone. Everything works well.

vhost-user + vIOMMU + guest virtio-net kernel driver, but no packet flow in guest.   Works well

Comment 7 Pei Zhang 2021-02-01 10:41:27 UTC
This issue is gone with the latest rhel8.4-av. The VM keeps working well after many reboots.

Versions:
qemu-img-5.2.0-4.module+el8.4.0+9676+589043b9.x86_64
4.18.0-278.rt7.43.el8.dt4.x86_64
tuned-2.15.0-1.el8.noarch
libvirt-7.0.0-3.module+el8.4.0+9709+a99efd61.x86_64
python3-libvirt-6.10.0-1.module+el8.4.0+8948+a39b3f3a.x86_64
openvswitch2.13-2.13.0-86.el8fdp.x86_64
dpdk-20.11-1.el8.x86_64


So I will close this bz as "CURRENTRELEASE". Feel free to let me know if there are any concerns. Thanks.

