Bug 1869973 - Boot guest with vhost-user over virtio-net vDPA, hot plug/unplug vhost-user device several times will cause guest hang.
Keywords:
Status: CLOSED WONTFIX
Alias: None
Product: Red Hat Enterprise Linux Advanced Virtualization
Classification: Red Hat
Component: qemu-kvm
Version: 8.3
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: rc
Target Release: 8.4
Assignee: lulu@redhat.com
QA Contact: Pei Zhang
URL:
Whiteboard:
Depends On:
Blocks: 1897025
 
Reported: 2020-08-19 07:05 UTC by Pei Zhang
Modified: 2021-03-02 03:49 UTC
CC List: 7 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2021-03-02 03:49:13 UTC
Type: Bug
Target Upstream Version:
Embargoed:


Attachments
L1 guest full XML (5.16 KB, application/xml)
2020-08-19 07:11 UTC, Pei Zhang
L2 guest full XML (4.79 KB, application/xml)
2020-08-19 07:13 UTC, Pei Zhang

Description Pei Zhang 2020-08-19 07:05:51 UTC
Description of problem:
This bug is not tested with physical vDPA cards; we are testing with virtio-net vDPA in a nested virtualization environment.

Boot the L2 guest with vhost-user over virtio-net vDPA, then hot plug/unplug the vhost-user devices several times (> 2); the L2 guest will hang.


Version-Release number of selected component (if applicable):
4.18.0-232.el8.x86_64
qemu-kvm-5.1.0-2.module+el8.3.0+7652+b30e6901.x86_64
openvswitch2.13-2.13.0-54.el8fdp.x86_64
libvirt-6.6.0-2.module+el8.3.0+7567+dc41c0a9.x86_64
https://gitlab.com/mcoquelin/dpdk-next-virtio.git


How reproducible:
90%


Steps to Reproduce:
1. In the host, boot OVS with 2 vhost-user ports
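
A minimal sketch of this OVS setup, assuming OVS is built with DPDK support; the bridge name, port names and socket paths are illustrative, and the port type (dpdkvhostuserclient vs. dpdkvhostuser) must match the vhost-user mode used in the attached L1 guest XML:

# ovs-vsctl set Open_vSwitch . other_config:dpdk-init=true
# ovs-vsctl add-br ovsbr0 -- set bridge ovsbr0 datapath_type=netdev
# ovs-vsctl add-port ovsbr0 vhost-user0 -- set Interface vhost-user0 type=dpdkvhostuserclient options:vhost-server-path=/tmp/vhostuser0.sock
# ovs-vsctl add-port ovsbr0 vhost-user1 -- set Interface vhost-user1 type=dpdkvhostuserclient options:vhost-server-path=/tmp/vhostuser1.sock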

2. In the host, boot the L1 guest with 2 vhost-user ports, 12 CPUs and 16G memory. The full L1 guest XML is attached.

3. In the L1 guest, compile DPDK with virtio-net vDPA support

# git clone https://gitlab.com/mcoquelin/dpdk-next-virtio.git dpdk
# cd dpdk/
# git checkout remotes/origin/virtio_vdpa_v1
# export RTE_SDK=`pwd`
# export RTE_TARGET=x86_64-native-linuxapp-gcc
# make -j2 install T=$RTE_TARGET DESTDIR=install
# cd examples/vdpa
# make

4. In the L1 guest, bind the NICs to vfio-pci

# modprobe vfio
# modprobe vfio-pci
# dpdk-devbind --bind=vfio-pci 0000:06:00.0
# dpdk-devbind --bind=vfio-pci 0000:07:00.0
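
If the L1 guest does not expose a vIOMMU, vfio may also need its unsafe no-IOMMU mode enabled before the devices can be used; this, and the binding check below, are a hedged sketch rather than output captured from this setup:

# echo 1 > /sys/module/vfio/parameters/enable_unsafe_noiommu_mode
# dpdk-devbind --status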

5. In the L1 guest, start the vDPA application and create 2 vDPA vhost-user ports

# cd /root/dpdk/examples/vdpa/build
# ./vdpa -l 1,2 -n 4 --socket-mem 1024 -w 0000:06:00.0,vdpa=1 -w 0000:07:00.0,vdpa=1 -- --interactive --client

vdpa> list
device id	device address	queue num	supported features
0		0000:06:00.0	1		0x370bfe7a6
1		0000:07:00.0	1		0x370bfe7a6

vdpa> create /tmp/vdpa-socket0 0000:06:00.0
VHOST_CONFIG: vhost-user client: socket created, fd: 37
VHOST_CONFIG: failed to connect to /tmp/vdpa-socket0: Connection refused
VHOST_CONFIG: /tmp/vdpa-socket0: reconnecting...
vdpa> create /tmp/vdpa-socket1 0000:07:00.0
VHOST_CONFIG: vhost-user client: socket created, fd: 40
VHOST_CONFIG: failed to connect to /tmp/vdpa-socket1: Connection refused
VHOST_CONFIG: /tmp/vdpa-socket1: reconnecting...

6. In the L1 guest, boot the L2 guest with the above vDPA ports. The full L2 guest XML is attached.

    <interface type='vhostuser'>
      <mac address='88:66:da:5f:dd:02'/>
      <source type='unix' path='/tmp/vdpa-socket0' mode='server'/>
      <model type='virtio'/>
      <driver name='vhost' rx_queue_size='1024' />
      <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
    </interface>
    <interface type='vhostuser'>
      <mac address='88:66:da:5f:dd:03'/>
      <source type='unix' path='/tmp/vdpa-socket1' mode='server'/>
      <model type='virtio'/>
      <driver name='vhost' rx_queue_size='1024' />
      <address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
    </interface>
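
The L2 guest can then be defined and started with virsh; the XML file name below is an assumption, while the domain name rhel8.3_L2 is the one used in step 7:

# virsh define rhel8.3_L2.xml
# virsh start rhel8.3_L2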

7. In the L1 guest, hot unplug/plug the vhost-user devices from the L2 guest several times

# cat vhost-user-nic1.xml 
    <interface type='vhostuser'>
      <mac address='88:66:da:5f:dd:02'/>
      <source type='unix' path='/tmp/vdpa-socket0' mode='server'/>
      <model type='virtio'/>
      <driver name='vhost' rx_queue_size='1024'/>
      <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
    </interface>

# cat vhost-user-nic2.xml 
    <interface type='vhostuser'>
      <mac address='88:66:da:5f:dd:03'/>
      <source type='unix' path='/tmp/vdpa-socket1' mode='server'/>
      <model type='virtio'/>
      <driver name='vhost' rx_queue_size='1024'/>
      <address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
    </interface>


(Repeat the commands below several times)
# virsh detach-device rhel8.3_L2 vhost-user-nic1.xml
# virsh detach-device rhel8.3_L2 vhost-user-nic2.xml

# virsh attach-device rhel8.3_L2 vhost-user-nic1.xml
# virsh attach-device rhel8.3_L2 vhost-user-nic2.xml
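
To drive the cycle automatically, the same commands can be wrapped in a loop (a sketch; the 5-second sleep is arbitrary):

while true; do
    virsh detach-device rhel8.3_L2 vhost-user-nic1.xml &&
    virsh detach-device rhel8.3_L2 vhost-user-nic2.xml &&
    sleep 5 &&
    virsh attach-device rhel8.3_L2 vhost-user-nic1.xml &&
    virsh attach-device rhel8.3_L2 vhost-user-nic2.xml &&
    sleep 5
done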


8. Check the status: the L2 guest hangs, and the vDPA application in the L1 guest segfaults.

The L2 guest no longer responds.

(In L1 guest)
# ./vdpa -l 1,2 -n 4 --socket-mem 1024 -w 0000:06:00.0,vdpa=1 -w 0000:07:00.0,vdpa=1 -- --interactive --client 
...
new port /tmp/vdpa-socket0, did: 0
VHOST_CONFIG: virtio is now ready for processing.
VIRTIO_VDPA virtio_vdpa_start(): Multiqueue configured but send command failed, this is too late now...
VIRTIO_VDPA virtio_vdpa_dev_config(): vDPA (0): software relay is used.
VHOST_CONFIG: read message VHOST_USER_SET_VRING_CALL
VHOST_CONFIG: vring call idx:1 file:48
VHOST_CONFIG: virtio is now ready for processing.
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ENABLE
VHOST_CONFIG: set queue enable: 1 to qp idx: 0
VHOST_CONFIG: virtio is now ready for processing.
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ENABLE
VHOST_CONFIG: set queue enable: 1 to qp idx: 1
VHOST_CONFIG: virtio is now ready for processing.
VHOST_CONFIG: /tmp/vdpa-socket1: connected
VHOST_CONFIG: new device, handle is 1
VHOST_CONFIG: read message VHOST_USER_GET_FEATURES
VHOST_CONFIG: read message VHOST_USER_GET_PROTOCOL_FEATURES
VHOST_CONFIG: read message VHOST_USER_SET_PROTOCOL_FEATURES
VHOST_CONFIG: negotiated Vhost-user protocol features: 0xc20
VHOST_CONFIG: read message VHOST_USER_SET_SLAVE_REQ_FD
VHOST_CONFIG: read message VHOST_USER_SET_OWNER
VHOST_CONFIG: read message VHOST_USER_GET_FEATURES
VHOST_CONFIG: read message VHOST_USER_SET_VRING_CALL
Segmentation fault (core dumped)

(In L1 guest)
# dmesg
[ 9959.553984] vhost-events[2629]: segfault at 0 ip 00007f881e90e1a5 sp 00007f8817ffcfe8 error 4 in libc-2.28.so[7f881e7b5000+1b9000]
[ 9959.555780] Code: 03 00 00 0f 82 cc 03 00 00 49 89 d3 89 f8 31 d2 c5 c5 ef ff 09 f0 25 ff 0f 00 00 3d 80 0f 00 00 0f 8f ef 03 00 00 c5 fe 6f 0f <c5> f5 74 06 c5 fd da c1 c5 fd 74 c7 c5 fd d7 c8 85 c9 0f 84 83 00
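
For further debugging, a backtrace can be pulled from the core dump of the vdpa application; the core file location below is an assumption (on hosts using systemd-coredump, coredumpctl gdb retrieves it instead):

# gdb /root/dpdk/examples/vdpa/build/vdpa core
(gdb) bt full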

Actual results:
The guest hangs after several hot plug/unplug cycles of vhost-user over virtio-net vDPA.

Expected results:
The guest should not hang.

Additional info:
1. The virtio-net vDPA setup follows https://www.redhat.com/en/blog/vdpa-hands-proof-pudding

2. We boot the L2 guest without vIOMMU because of Bug 1861244.

Comment 1 Pei Zhang 2020-08-19 07:11:29 UTC
Created attachment 1711808 [details]
L1 guest full XML

Comment 2 Pei Zhang 2020-08-19 07:13:05 UTC
Created attachment 1711811 [details]
L2 guest full XML

Comment 3 lulu@redhat.com 2020-09-21 05:35:14 UTC
After discussing with Pei, we plan to move this to 8.4.

Comment 5 Pei Zhang 2021-03-02 03:47:52 UTC
Booting vhost-user with a physical vDPA NIC, hot plug/unplug keeps working well.

vDPA NIC:
3b:00.0 Ethernet controller: Mellanox Technologies MT2892 Family [ConnectX-6 Dx]

Versions:
4.18.0-291.el8.x86_64
qemu-kvm-5.2.0-9.module+el8.4.0+10182+4161bd91.x86_64
libvirt-7.0.0-6.module+el8.4.0+10144+c3d3c217.x86_64

Steps:
1. Check module

# lsmod | grep mlx5
mlx5_ib               372736  2
ib_uverbs             159744  9 i40iw,rdma_ucm,mlx5_ib
ib_core               385024  15 rdma_cm,ib_ipoib,rpcrdma,ib_srpt,iw_cm,ib_iser,ib_umad,ib_isert,i40iw,rdma_ucm,ib_uverbs,mlx5_ib,ib_cm
mlx5_core            1208320  1 mlx5_ib
mlxfw                  28672  1 mlx5_core
pci_hyperv_intf        16384  1 mlx5_core
tls                   102400  1 mlx5_core

2. The vDPA NIC should use the mlx5_core driver
# dpdk-devbind.py --status

Network devices using kernel driver
===================================
0000:3b:00.0 'MT2892 Family [ConnectX-6 Dx] 101d' if=ens1f0 drv=mlx5_core unused= 
0000:3b:00.1 'MT2892 Family [ConnectX-6 Dx] 101d' if=ens1f1 drv=mlx5_core unused= 
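
Note that testpmd's --socket-mem option needs hugepages reserved in advance; the EAL output in step 3 shows 1G pages in use. A sketch of reserving them via the kernel command line (the page count is illustrative), followed by a reboot:

# grubby --update-kernel=ALL --args="default_hugepagesz=1G hugepagesz=1G hugepages=8"
# reboot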

3. Start DPDK's testpmd
dpdk-testpmd \
	-l 2,4,6,8,10,12,14,16,18 \
	--socket-mem 1024,1024 \
	-n 4  \
	--vdev 'net_vhost0,iface=/tmp/vhost-user1,queues=2,client=1,iommu-support=1' \
	--vdev 'net_vhost1,iface=/tmp/vhost-user2,queues=2,client=1,iommu-support=1'  \
	-d /usr/lib64/librte_net_vhost.so  \
	-- \
	--portmask=f \
	-i \
	--rxd=512 --txd=512 \
	--rxq=2 --txq=2 \
	--nb-cores=8 \
	--forward-mode=io
EAL: Detected 24 lcore(s)
EAL: Detected 2 NUMA nodes
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'VA'
EAL: No available hugepages reported in hugepages-2048kB
EAL: Probing VFIO support...
EAL: Probe PCI driver: mlx5_pci (15b3:101d) device: 0000:3b:00.0 (socket 0)
mlx5_pci: Size 0xFFFF is not power of 2, will be aligned to 0x10000.
EAL: Probe PCI driver: mlx5_pci (15b3:101d) device: 0000:3b:00.1 (socket 0)
mlx5_pci: Size 0xFFFF is not power of 2, will be aligned to 0x10000.
EAL: No legacy callbacks, legacy socket not created
Interactive-mode selected
Set io packet forwarding mode
testpmd: create a new mbuf pool <mb_pool_0>: n=211456, size=2176, socket=0
testpmd: preferred mempool ops selected: ring_mp_mc
Configuring Port 0 (socket 0)
Port 0: 0C:42:A1:D1:D0:24
Configuring Port 1 (socket 0)
Port 1: 0C:42:A1:D1:D0:25
Configuring Port 2 (socket 0)
VHOST_CONFIG: vhost-user client: socket created, fd: 62
VHOST_CONFIG: failed to connect to /tmp/vhost-user1: No such file or directory
VHOST_CONFIG: /tmp/vhost-user1: reconnecting...
Port 2: 56:48:4F:53:54:02
Configuring Port 3 (socket 0)
VHOST_CONFIG: vhost-user client: socket created, fd: 65
VHOST_CONFIG: failed to connect to /tmp/vhost-user2: No such file or directory
VHOST_CONFIG: /tmp/vhost-user2: reconnecting...
Port 3: 56:48:4F:53:54:03
Checking link statuses...
Done


testpmd> set portlist 0,2,1,3
testpmd> start 
io packet forwarding - ports=4 - cores=8 - streams=8 - NUMA support enabled, MP allocation mode: native
Logical Core 4 (socket 0) forwards packets on 1 streams:
  RX P=0/Q=0 (socket 0) -> TX P=2/Q=0 (socket 0) peer=02:00:00:00:00:02
Logical Core 6 (socket 0) forwards packets on 1 streams:
  RX P=2/Q=0 (socket 0) -> TX P=0/Q=0 (socket 0) peer=02:00:00:00:00:00
Logical Core 8 (socket 0) forwards packets on 1 streams:
  RX P=1/Q=0 (socket 0) -> TX P=3/Q=0 (socket 0) peer=02:00:00:00:00:03
Logical Core 10 (socket 0) forwards packets on 1 streams:
  RX P=3/Q=0 (socket 0) -> TX P=1/Q=0 (socket 0) peer=02:00:00:00:00:01
Logical Core 12 (socket 0) forwards packets on 1 streams:
  RX P=0/Q=1 (socket 0) -> TX P=2/Q=1 (socket 0) peer=02:00:00:00:00:02
Logical Core 14 (socket 0) forwards packets on 1 streams:
  RX P=2/Q=1 (socket 0) -> TX P=0/Q=1 (socket 0) peer=02:00:00:00:00:00
Logical Core 16 (socket 0) forwards packets on 1 streams:
  RX P=1/Q=1 (socket 0) -> TX P=3/Q=1 (socket 0) peer=02:00:00:00:00:03
Logical Core 18 (socket 0) forwards packets on 1 streams:
  RX P=3/Q=1 (socket 0) -> TX P=1/Q=1 (socket 0) peer=02:00:00:00:00:01

  io packet forwarding packets/burst=32
  nb forwarding cores=8 - nb forwarding ports=4
  port 0: RX queue number: 2 Tx queue number: 2
    Rx offloads=0x0 Tx offloads=0x0
    RX queue: 0
      RX desc=512 - RX free threshold=64
      RX threshold registers: pthresh=0 hthresh=0  wthresh=0
      RX Offloads=0x0
    TX queue: 0
      TX desc=512 - TX free threshold=0
      TX threshold registers: pthresh=0 hthresh=0  wthresh=0
      TX offloads=0x0 - TX RS bit threshold=0
  port 1: RX queue number: 2 Tx queue number: 2
    Rx offloads=0x0 Tx offloads=0x0
    RX queue: 0
      RX desc=512 - RX free threshold=64
      RX threshold registers: pthresh=0 hthresh=0  wthresh=0
      RX Offloads=0x0
    TX queue: 0
      TX desc=512 - TX free threshold=0
      TX threshold registers: pthresh=0 hthresh=0  wthresh=0
      TX offloads=0x0 - TX RS bit threshold=0
  port 2: RX queue number: 2 Tx queue number: 2
    Rx offloads=0x0 Tx offloads=0x0
    RX queue: 0
      RX desc=512 - RX free threshold=0
      RX threshold registers: pthresh=0 hthresh=0  wthresh=0
      RX Offloads=0x0
    TX queue: 0
      TX desc=512 - TX free threshold=0
      TX threshold registers: pthresh=0 hthresh=0  wthresh=0
      TX offloads=0x0 - TX RS bit threshold=0
  port 3: RX queue number: 2 Tx queue number: 2
    Rx offloads=0x0 Tx offloads=0x0
    RX queue: 0
      RX desc=512 - RX free threshold=0
      RX threshold registers: pthresh=0 hthresh=0  wthresh=0
      RX Offloads=0x0
    TX queue: 0
      TX desc=512 - TX free threshold=0
      TX threshold registers: pthresh=0 hthresh=0  wthresh=0
      TX offloads=0x0 - TX RS bit threshold=0
testpmd> VHOST_CONFIG: /tmp/vhost-user1: connected
VHOST_CONFIG: new device, handle is 0
VHOST_CONFIG: /tmp/vhost-user2: connected
VHOST_CONFIG: new device, handle is 1
VHOST_CONFIG: read message VHOST_USER_GET_FEATURES
VHOST_CONFIG: read message VHOST_USER_GET_PROTOCOL_FEATURES
VHOST_CONFIG: read message VHOST_USER_SET_PROTOCOL_FEATURES
VHOST_CONFIG: negotiated Vhost-user protocol features: 0xcbf
VHOST_CONFIG: read message VHOST_USER_GET_QUEUE_NUM
VHOST_CONFIG: read message VHOST_USER_SET_SLAVE_REQ_FD
VHOST_CONFIG: read message VHOST_USER_SET_OWNER
VHOST_CONFIG: read message VHOST_USER_GET_FEATURES
VHOST_CONFIG: read message VHOST_USER_SET_VRING_CALL
VHOST_CONFIG: vring call idx:0 file:67
VHOST_CONFIG: read message VHOST_USER_SET_VRING_CALL
VHOST_CONFIG: vring call idx:1 file:68
VHOST_CONFIG: read message VHOST_USER_GET_FEATURES
VHOST_CONFIG: read message VHOST_USER_GET_PROTOCOL_FEATURES
VHOST_CONFIG: read message VHOST_USER_SET_PROTOCOL_FEATURES
VHOST_CONFIG: negotiated Vhost-user protocol features: 0xcbf
VHOST_CONFIG: read message VHOST_USER_GET_FEATURES
VHOST_CONFIG: read message VHOST_USER_SET_VRING_CALL
VHOST_CONFIG: vring call idx:2 file:69
VHOST_CONFIG: read message VHOST_USER_SET_VRING_CALL
VHOST_CONFIG: vring call idx:3 file:70
VHOST_CONFIG: read message VHOST_USER_GET_FEATURES
VHOST_CONFIG: read message VHOST_USER_GET_PROTOCOL_FEATURES
VHOST_CONFIG: read message VHOST_USER_SET_PROTOCOL_FEATURES
VHOST_CONFIG: negotiated Vhost-user protocol features: 0xcbf
VHOST_CONFIG: read message VHOST_USER_GET_QUEUE_NUM
VHOST_CONFIG: read message VHOST_USER_SET_SLAVE_REQ_FD
VHOST_CONFIG: read message VHOST_USER_SET_OWNER
VHOST_CONFIG: read message VHOST_USER_GET_FEATURES
VHOST_CONFIG: read message VHOST_USER_SET_VRING_CALL
VHOST_CONFIG: vring call idx:0 file:72
VHOST_CONFIG: read message VHOST_USER_SET_VRING_CALL
VHOST_CONFIG: vring call idx:1 file:73
VHOST_CONFIG: read message VHOST_USER_GET_FEATURES
VHOST_CONFIG: read message VHOST_USER_GET_PROTOCOL_FEATURES
VHOST_CONFIG: read message VHOST_USER_SET_PROTOCOL_FEATURES
VHOST_CONFIG: negotiated Vhost-user protocol features: 0xcbf
VHOST_CONFIG: read message VHOST_USER_GET_FEATURES
VHOST_CONFIG: read message VHOST_USER_SET_VRING_CALL
VHOST_CONFIG: vring call idx:2 file:74
VHOST_CONFIG: read message VHOST_USER_SET_VRING_CALL
VHOST_CONFIG: vring call idx:3 file:75
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ENABLE
VHOST_CONFIG: set queue enable: 1 to qp idx: 0
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ENABLE
VHOST_CONFIG: set queue enable: 1 to qp idx: 1
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ENABLE
VHOST_CONFIG: set queue enable: 0 to qp idx: 2
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ENABLE
VHOST_CONFIG: set queue enable: 0 to qp idx: 3
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ENABLE
VHOST_CONFIG: set queue enable: 1 to qp idx: 0
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ENABLE
VHOST_CONFIG: set queue enable: 1 to qp idx: 1
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ENABLE
VHOST_CONFIG: set queue enable: 0 to qp idx: 2
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ENABLE
VHOST_CONFIG: set queue enable: 0 to qp idx: 3
VHOST_CONFIG: read message VHOST_USER_SET_FEATURES
VHOST_CONFIG: negotiated Virtio features: 0x37060ff83
VHOST_CONFIG: read message VHOST_USER_SET_MEM_TABLE
VHOST_CONFIG: guest memory region 0, size: 0x80000000
	 guest physical addr: 0x0
	 guest virtual  addr: 0x7f09c0000000
	 host  virtual  addr: 0x7f6040000000
	 mmap addr : 0x7f6040000000
	 mmap size : 0x80000000
	 mmap align: 0x40000000
	 mmap off  : 0x0
VHOST_CONFIG: guest memory region 1, size: 0x180000000
	 guest physical addr: 0x100000000
	 guest virtual  addr: 0x7f0a40000000
	 host  virtual  addr: 0x7f5ec0000000
	 mmap addr : 0x7f5e40000000
	 mmap size : 0x200000000
	 mmap align: 0x40000000
	 mmap off  : 0x80000000
VHOST_CONFIG: read message VHOST_USER_SET_VRING_NUM
VHOST_CONFIG: read message VHOST_USER_SET_VRING_BASE
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ADDR
VHOST_CONFIG: read message VHOST_USER_SET_VRING_KICK
VHOST_CONFIG: vring kick idx:0 file:78
VHOST_CONFIG: read message VHOST_USER_SET_VRING_CALL
VHOST_CONFIG: vring call idx:0 file:79
VHOST_CONFIG: read message VHOST_USER_SET_VRING_NUM
VHOST_CONFIG: read message VHOST_USER_SET_VRING_BASE
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ADDR
VHOST_CONFIG: read message VHOST_USER_SET_VRING_KICK
VHOST_CONFIG: vring kick idx:1 file:67
VHOST_CONFIG: read message VHOST_USER_SET_VRING_CALL
VHOST_CONFIG: vring call idx:1 file:80
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ENABLE
VHOST_CONFIG: set queue enable: 1 to qp idx: 0
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ENABLE
VHOST_CONFIG: set queue enable: 1 to qp idx: 1
VHOST_CONFIG: read message VHOST_USER_SET_FEATURES
VHOST_CONFIG: negotiated Virtio features: 0x37060ff83
VHOST_CONFIG: read message VHOST_USER_SET_VRING_NUM
VHOST_CONFIG: read message VHOST_USER_SET_VRING_BASE
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ADDR
VHOST_CONFIG: read message VHOST_USER_SET_VRING_KICK
VHOST_CONFIG: vring kick idx:2 file:68
VHOST_CONFIG: read message VHOST_USER_SET_VRING_CALL
VHOST_CONFIG: vring call idx:2 file:81
VHOST_CONFIG: read message VHOST_USER_SET_VRING_NUM
VHOST_CONFIG: read message VHOST_USER_SET_VRING_BASE
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ADDR
VHOST_CONFIG: read message VHOST_USER_SET_VRING_KICK
VHOST_CONFIG: vring kick idx:3 file:69
VHOST_CONFIG: read message VHOST_USER_SET_VRING_CALL
VHOST_CONFIG: vring call idx:3 file:82

Port 2: queue state event

Port 2: queue state event
VHOST_CONFIG: virtio is now ready for processing.

Port 2: link state change event
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ENABLE
VHOST_CONFIG: set queue enable: 1 to qp idx: 0
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ENABLE
VHOST_CONFIG: set queue enable: 1 to qp idx: 1
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ENABLE
VHOST_CONFIG: set queue enable: 1 to qp idx: 2

Port 2: queue state event
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ENABLE
VHOST_CONFIG: set queue enable: 1 to qp idx: 3

Port 2: queue state event
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ENABLE
VHOST_CONFIG: set queue enable: 1 to qp idx: 0
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ENABLE
VHOST_CONFIG: set queue enable: 1 to qp idx: 1
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ENABLE
VHOST_CONFIG: set queue enable: 0 to qp idx: 2
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ENABLE
VHOST_CONFIG: set queue enable: 0 to qp idx: 3
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ENABLE
VHOST_CONFIG: set queue enable: 1 to qp idx: 0
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ENABLE
VHOST_CONFIG: set queue enable: 1 to qp idx: 1
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ENABLE
VHOST_CONFIG: set queue enable: 0 to qp idx: 2
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ENABLE
VHOST_CONFIG: set queue enable: 0 to qp idx: 3
VHOST_CONFIG: read message VHOST_USER_SET_FEATURES
VHOST_CONFIG: negotiated Virtio features: 0x37060ff83
VHOST_CONFIG: read message VHOST_USER_SET_MEM_TABLE
VHOST_CONFIG: guest memory region 0, size: 0x80000000
	 guest physical addr: 0x0
	 guest virtual  addr: 0x7f09c0000000
	 host  virtual  addr: 0x7f5dc0000000
	 mmap addr : 0x7f5dc0000000
	 mmap size : 0x80000000
	 mmap align: 0x40000000
	 mmap off  : 0x0
VHOST_CONFIG: guest memory region 1, size: 0x180000000
	 guest physical addr: 0x100000000
	 guest virtual  addr: 0x7f0a40000000
	 host  virtual  addr: 0x7f5c40000000
	 mmap addr : 0x7f5bc0000000
	 mmap size : 0x200000000
	 mmap align: 0x40000000
	 mmap off  : 0x80000000
VHOST_CONFIG: read message VHOST_USER_SET_VRING_NUM
VHOST_CONFIG: read message VHOST_USER_SET_VRING_BASE
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ADDR
VHOST_CONFIG: read message VHOST_USER_SET_VRING_KICK
VHOST_CONFIG: vring kick idx:0 file:84
VHOST_CONFIG: read message VHOST_USER_SET_VRING_CALL
VHOST_CONFIG: vring call idx:0 file:85
VHOST_CONFIG: read message VHOST_USER_SET_VRING_NUM
VHOST_CONFIG: read message VHOST_USER_SET_VRING_BASE
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ADDR
VHOST_CONFIG: read message VHOST_USER_SET_VRING_KICK
VHOST_CONFIG: vring kick idx:1 file:72
VHOST_CONFIG: read message VHOST_USER_SET_VRING_CALL
VHOST_CONFIG: vring call idx:1 file:86
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ENABLE
VHOST_CONFIG: set queue enable: 1 to qp idx: 0
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ENABLE
VHOST_CONFIG: set queue enable: 1 to qp idx: 1
VHOST_CONFIG: read message VHOST_USER_SET_FEATURES
VHOST_CONFIG: negotiated Virtio features: 0x37060ff83
VHOST_CONFIG: read message VHOST_USER_SET_VRING_NUM
VHOST_CONFIG: read message VHOST_USER_SET_VRING_BASE
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ADDR
VHOST_CONFIG: read message VHOST_USER_SET_VRING_KICK
VHOST_CONFIG: vring kick idx:2 file:73
VHOST_CONFIG: read message VHOST_USER_SET_VRING_CALL
VHOST_CONFIG: vring call idx:2 file:87
VHOST_CONFIG: read message VHOST_USER_SET_VRING_NUM
VHOST_CONFIG: read message VHOST_USER_SET_VRING_BASE
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ADDR
VHOST_CONFIG: read message VHOST_USER_SET_VRING_KICK
VHOST_CONFIG: vring kick idx:3 file:74
VHOST_CONFIG: read message VHOST_USER_SET_VRING_CALL
VHOST_CONFIG: vring call idx:3 file:88
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ENABLE
VHOST_CONFIG: set queue enable: 1 to qp idx: 0
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ENABLE
VHOST_CONFIG: set queue enable: 1 to qp idx: 1
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ENABLE
VHOST_CONFIG: set queue enable: 1 to qp idx: 2
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ENABLE
VHOST_CONFIG: set queue enable: 1 to qp idx: 3

Port 3: queue state event

Port 3: queue state event

Port 3: queue state event
VHOST_CONFIG: virtio is now ready for processing.

Port 3: link state change event

Port 3: queue state event


4. Start the VM with the vhost-user interfaces

    <interface type='vhostuser'>
      <mac address='18:66:da:5f:dd:22'/>
      <source type='unix' path='/tmp/vhost-user1' mode='server'/>
      <model type='virtio'/>
      <driver name='vhost' queues='2' rx_queue_size='1024' iommu='on' ats='on'/>
      <alias name='net1'/>
      <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
    </interface>
    <interface type='vhostuser'>
      <mac address='18:66:da:5f:dd:23'/>
      <source type='unix' path='/tmp/vhost-user2' mode='server'/>
      <model type='virtio'/>
      <driver name='vhost' queues='2' rx_queue_size='1024' iommu='on' ats='on'/>
      <alias name='net2'/>
      <address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
    </interface>

5. Hot plug/unplug the vhost-user devices several times. Both the host and the guest keep working well.

while true; do
    virsh detach-device rhel8.4 nic1.xml &&
    virsh detach-device rhel8.4 nic2.xml &&
    sleep 5 &&
    virsh attach-device rhel8.4 nic2.xml &&
    virsh attach-device rhel8.4 nic1.xml &&
    sleep 5;
done
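
Between iterations, the currently attached interfaces can be checked from the host (the domain name matches the one in the loop):

# virsh domiflist rhel8.4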

6. In the VM, set the vhost-user NIC with a temporary IP
# ifconfig enp6s0 192.168.1.2/24

7. On another host connected back-to-back, set the NIC with a temporary IP
# ifconfig enp6s0f0 192.168.1.1/24
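
Equivalently, on hosts or guests without the legacy ifconfig tool, the same addresses can be set with the ip command:

# ip addr add 192.168.1.2/24 dev enp6s0      # in the VM
# ip addr add 192.168.1.1/24 dev enp6s0f0    # on the peer host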

8. Ping through the vhost-user NIC works well.
# ping 192.168.1.1
PING 192.168.1.1 (192.168.1.1) 56(84) bytes of data.
64 bytes from 192.168.1.1: icmp_seq=1 ttl=64 time=0.187 ms

Comment 6 Pei Zhang 2021-03-02 03:49:13 UTC
This bug was tracking the vhost-user issue with emulated vDPA devices. However, in the previous vDPA meeting we agreed that customers may not use this scenario, and as shown in Comment 5 this issue cannot be reproduced with a physical vDPA NIC. So I am closing this bug as WONTFIX.

