Bug 1861244 - Booting qemu with vhost-user enabling vIOMMU over virtio-net vDPA fails
Summary: Booting qemu with vhost-user enabling vIOMMU over virtio-net vDPA fails
Keywords:
Status: CLOSED WONTFIX
Alias: None
Product: Red Hat Enterprise Linux Advanced Virtualization
Classification: Red Hat
Component: qemu-kvm
Version: 8.3
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: rc
Target Release: 8.4
Assignee: lulu@redhat.com
QA Contact: Pei Zhang
URL:
Whiteboard:
Depends On:
Blocks: 1897025
 
Reported: 2020-07-28 07:04 UTC by Pei Zhang
Modified: 2021-03-02 03:28 UTC
CC: 7 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2021-03-02 03:22:18 UTC
Type: Bug
Target Upstream Version:
Embargoed:


Attachments
L1 guest full XML (5.15 KB, application/xml) - 2020-07-28 07:04 UTC, Pei Zhang
L2 guest full XML (4.83 KB, application/xml) - 2020-07-28 07:10 UTC, Pei Zhang

Description Pei Zhang 2020-07-28 07:04:31 UTC
Created attachment 1702613 [details]
L1 guest full XML

Description of problem:

This bug was not observed with physical vDPA cards; we are testing with virtio-net vDPA in a nested virtualization environment.

When booting a guest with vhost-user over virtio-net vDPA, the guest cannot boot up successfully.

Version-Release number of selected component (if applicable):
4.18.0-227.el8.x86_64
qemu-kvm-5.0.0-2.module+el8.3.0+7379+0505d6ca.x86_64
openvswitch2.11-2.11.3-60.el8fdp.x86_64
https://gitlab.com/mcoquelin/dpdk-next-virtio.git


How reproducible:
100%

Steps to Reproduce:
1. In host, boot ovs with 2 vhost-user ports
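
(For reference, a minimal sketch of what this step typically looks like with an OVS-DPDK userspace bridge; the bridge/port names and the dpdkvhostuser port type below are assumptions, not the exact commands used for this report.)

# ovs-vsctl set Open_vSwitch . other_config:dpdk-init=true
# ovs-vsctl add-br ovsbr0 -- set bridge ovsbr0 datapath_type=netdev
# ovs-vsctl add-port ovsbr0 vhost-user0 -- set Interface vhost-user0 type=dpdkvhostuser
# ovs-vsctl add-port ovsbr0 vhost-user1 -- set Interface vhost-user1 type=dpdkvhostuser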

2. In host, boot L1 guest with 2 vhost-user ports, 12 CPUs and 16G memory. Full L1 guest XML is attached.

3. In L1 guest, compile DPDK with virtio-net vDPA support

# git clone https://gitlab.com/mcoquelin/dpdk-next-virtio.git dpdk
# cd dpdk/
# git checkout remotes/origin/virtio_vdpa_v1
# export RTE_SDK=`pwd`
# export RTE_TARGET=x86_64-native-linuxapp-gcc
# make -j2 install T=$RTE_TARGET DESTDIR=install
# cd examples/vdpa
# make


4. In L1 guest, bind NICs to vfio

# modprobe vfio
# modprobe vfio-pci
# dpdk-devbind --bind=vfio-pci 0000:06:00.0
# dpdk-devbind --bind=vfio-pci 0000:07:00.0
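
(The binding can be verified before starting the vDPA application; a sketch, assuming the dpdk-devbind tool from the same DPDK build -- depending on the installation the script may be named dpdk-devbind.py:)

# dpdk-devbind --status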

5. In L1 guest, start the vDPA example application and create 2 vDPA vhost-user ports

# cd /root/dpdk/examples/vdpa/build
# ./vdpa -l 1,2 -n 4 --socket-mem 1024 -w 0000:06:00.0,vdpa=1 -w 0000:07:00.0,vdpa=1 -- --interactive --client

vdpa> list
device id	device address	queue num	supported features
0		0000:06:00.0	1		0x370bfe7a6
1		0000:07:00.0	1		0x370bfe7a6

vdpa> create /tmp/vdpa-socket0 0000:06:00.0
VHOST_CONFIG: vhost-user client: socket created, fd: 37
VHOST_CONFIG: failed to connect to /tmp/vdpa-socket0: Connection refused
VHOST_CONFIG: /tmp/vdpa-socket0: reconnecting...
vdpa> create /tmp/vdpa-socket1 0000:07:00.0
VHOST_CONFIG: vhost-user client: socket created, fd: 40
VHOST_CONFIG: failed to connect to /tmp/vdpa-socket1: Connection refused
VHOST_CONFIG: /tmp/vdpa-socket1: reconnecting...

6. In L1 guest, boot the L2 guest with the above vDPA ports. The full L2 guest XML is attached.

    <interface type='vhostuser'>
      <mac address='88:66:da:5f:dd:02'/>
      <source type='unix' path='/tmp/vdpa-socket0' mode='server'/>
      <model type='virtio'/>
      <driver name='vhost' rx_queue_size='1024' iommu='on' ats='on'/>
      <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
    </interface>
    <interface type='vhostuser'>
      <mac address='88:66:da:5f:dd:03'/>
      <source type='unix' path='/tmp/vdpa-socket1' mode='server'/>
      <model type='virtio'/>
      <driver name='vhost' rx_queue_size='1024' iommu='on' ats='on'/>
      <address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
    </interface>
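
(For reference, a vIOMMU-enabled L2 guest of this kind also has an Intel vIOMMU defined in its XML; the snippet below is a typical sketch of that part, not a copy of the attached XML:)

  <features>
    ...
    <ioapic driver='qemu'/>
  </features>
  ...
  <devices>
    ...
    <iommu model='intel'>
      <driver intremap='on' caching_mode='on' iotlb='on'/>
    </iommu>
  </devices>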

7. The L2 guest fails to boot and hangs, and qemu repeatedly prints this error:

...
2020-07-28T03:41:18.192423Z qemu-kvm: failed to init vhost_net for queue 0
vhost lacks feature mask 8589934592 for backend
2020-07-28T03:41:18.193506Z qemu-kvm: failed to init vhost_net for queue 0
vhost lacks feature mask 8589934592 for backend
2020-07-28T03:41:18.194574Z qemu-kvm: failed to init vhost_net for queue 0
vhost lacks feature mask 8589934592 for backend
2020-07-28T03:41:18.195641Z qemu-kvm: failed to init vhost_net for queue 0
vhost lacks feature mask 8589934592 for backend
2020-07-28T03:41:18.196678Z qemu-kvm: failed to init vhost_net for queue 0
vhost lacks feature mask 8589934592 for backend
2020-07-28T03:41:18.197730Z qemu-kvm: failed to init vhost_net for queue 0
vhost lacks feature mask 8589934592 for backend
2020-07-28T03:41:18.198755Z qemu-kvm: failed to init vhost_net for queue 0
vhost lacks feature mask 8589934592 for backend
....
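
(A note on the error: 8589934592 decodes to a single virtio feature bit, as shown below; bit 33 is VIRTIO_F_IOMMU_PLATFORM, which matches the iommu='on'/ats='on' configuration of the interfaces. Interestingly, the supported-features value 0x370bfe7a6 reported by the vdpa application in step 5 does have bit 33 set, so the feature appears to get lost somewhere in the vhost-user negotiation. Observation only; root cause not confirmed.)

# printf '0x%x\n' 8589934592
0x200000000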


Actual results:
The guest fails to boot.

Expected results:
The guest should boot and work normally.

Additional info:
1. The virtio-net vDPA setup follows https://www.redhat.com/en/blog/vdpa-hands-proof-pudding

2. I'm not sure whether this is a real vDPA bug, since we are testing with emulated vDPA devices rather than physical ones. However, I still think it is worth reporting and tracking this issue in Bugzilla so that any real problem is exposed earlier.

Comment 1 Pei Zhang 2020-07-28 07:10:50 UTC
Created attachment 1702614 [details]
L2 guest full XML

Comment 2 Pei Zhang 2020-08-12 09:45:03 UTC
Update:

After removing iommu support from the L2 guest, this issue is gone and the L2 guest boots well.
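
(Concretely, "removing iommu support" here presumably means dropping the <iommu> device and/or the iommu='on' ats='on' attributes from the vhost-user interfaces in the L2 guest XML shown in step 6; a sketch of the resulting driver line, not the exact XML used:)

      <driver name='vhost' rx_queue_size='1024'/>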

Comment 3 Pei Zhang 2020-08-12 09:55:19 UTC
This should not be a nested vIOMMU issue; it looks like a vDPA driver + vhost-user + IOMMU issue.

If step 5 above is changed to create the vhost-user sockets with IOMMU support enabled, as below, then the L2 guest with iommu can boot successfully.

In L1 guest, boot testpmd:

/usr/bin/testpmd \
	-l 1,2,3 \
	--socket-mem 1024 \
	-n 4 \
	-d /usr/lib64/librte_pmd_vhost.so  \
	--vdev 'net_vhost0,iface=/tmp/vhostuser0.sock,queues=1,client=1,iommu-support=1' \
	--vdev 'net_vhost1,iface=/tmp/vhostuser1.sock,queues=1,client=1,iommu-support=1' \
	-- \
	-i \
	--rxd=512 --txd=512 \
	--rxq=1 --txq=1 \
	--nb-cores=2 \
	--forward-mode=io

In the L1 guest, boot the L2 guest (same as step 6); the L2 guest boots successfully.

Comment 4 lulu@redhat.com 2020-09-08 06:19:33 UTC
(In reply to Pei Zhang from comment #3)
> This should not be viommu nested issue. It might be vDPA driver + vhost-user
> + iommu issue. 
> 
> Change above step 5: creating a vhost-user sockets with iommu enabled like
> below, then L2 guest with iommu can be boot successfully.
> 
> In L1 guest, boot testpmd:
> 
> /usr/bin/testpmd \
> 	-l 1,2,3 \
> 	--socket-mem 1024 \
> 	-n 4 \
> 	-d /usr/lib64/librte_pmd_vhost.so  \
> 	--vdev
> 'net_vhost0,iface=/tmp/vhostuser0.sock,queues=1,client=1,iommu-support=1' \
> 	--vdev
> 'net_vhost1,iface=/tmp/vhostuser1.sock,queues=1,client=1,iommu-support=1' \
> 	-- \
> 	-i \
> 	--rxd=512 --txd=512 \
> 	--rxq=1 --txq=1 \
> 	--nb-cores=2 \
> 	--forward-mode=io
> 
> In L1 guest, boot L2 guest (same with step 6), L2 guest can boot
> successfully.

Hi Pei, I just want to confirm: in this part, do you run testpmd and boot the L2 guest at the same time?
And do they both use the same sockets, /tmp/vhostuser1.sock and /tmp/vhostuser2.sock?

Comment 5 Pei Zhang 2020-09-08 06:30:24 UTC
(In reply to lulu from comment #4)
> (In reply to Pei Zhang from comment #3)
> > This should not be viommu nested issue. It might be vDPA driver + vhost-user
> > + iommu issue. 
> > 
> > Change above step 5: creating a vhost-user sockets with iommu enabled like
> > below, then L2 guest with iommu can be boot successfully.
> > 
> > In L1 guest, boot testpmd:
> > 
> > /usr/bin/testpmd \
> > 	-l 1,2,3 \
> > 	--socket-mem 1024 \
> > 	-n 4 \
> > 	-d /usr/lib64/librte_pmd_vhost.so  \
> > 	--vdev
> > 'net_vhost0,iface=/tmp/vhostuser0.sock,queues=1,client=1,iommu-support=1' \
> > 	--vdev
> > 'net_vhost1,iface=/tmp/vhostuser1.sock,queues=1,client=1,iommu-support=1' \
> > 	-- \
> > 	-i \
> > 	--rxd=512 --txd=512 \
> > 	--rxq=1 --txq=1 \
> > 	--nb-cores=2 \
> > 	--forward-mode=io
> > 
> > In L1 guest, boot L2 guest (same with step 6), L2 guest can boot
> > successfully.
> 
> Hi pei, I just want to confirm that  in this part you run the testpmd and
> boot L2 guest at the same time ?

Hi Cindy,

I run testpmd first, then boot the L2 guest. After the L2 guest boots up, the vhost-user sockets are connected between them.

> And they all use the same port /tmp/vhostuser1.sock and /tmp/vhostuser2.sock
> ?

Yes, they use the same sockets, /tmp/vhostuser1.sock and /tmp/vhostuser2.sock.

Thanks.

Best regards,

Pei

Comment 6 lulu@redhat.com 2020-09-25 05:47:03 UTC
After discussing with Pei, we plan to move this to AV 8.4.

Comment 8 Pei Zhang 2021-03-02 03:19:53 UTC
Booting vhost-user with a physical vDPA NIC works well.

vDPA NIC:
3b:00.0 Ethernet controller: Mellanox Technologies MT2892 Family [ConnectX-6 Dx]

Versions:
4.18.0-291.el8.x86_64
qemu-kvm-5.2.0-9.module+el8.4.0+10182+4161bd91.x86_64
libvirt-7.0.0-6.module+el8.4.0+10144+c3d3c217.x86_64

Steps:
1. Check module

# lsmod | grep mlx5
mlx5_ib               372736  2
ib_uverbs             159744  9 i40iw,rdma_ucm,mlx5_ib
ib_core               385024  15 rdma_cm,ib_ipoib,rpcrdma,ib_srpt,iw_cm,ib_iser,ib_umad,ib_isert,i40iw,rdma_ucm,ib_uverbs,mlx5_ib,ib_cm
mlx5_core            1208320  1 mlx5_ib
mlxfw                  28672  1 mlx5_core
pci_hyperv_intf        16384  1 mlx5_core
tls                   102400  1 mlx5_core

2. The vDPA NIC should be bound to the mlx5_core driver
# dpdk-devbind.py --status

Network devices using kernel driver
===================================
0000:3b:00.0 'MT2892 Family [ConnectX-6 Dx] 101d' if=ens1f0 drv=mlx5_core unused= 
0000:3b:00.1 'MT2892 Family [ConnectX-6 Dx] 101d' if=ens1f1 drv=mlx5_core unused= 

3. Boot DPDK's testpmd
dpdk-testpmd \
	-l 2,4,6,8,10,12,14,16,18 \
	--socket-mem 1024,1024 \
	-n 4  \
	--vdev 'net_vhost0,iface=/tmp/vhost-user1,queues=2,client=1,iommu-support=1' \
	--vdev 'net_vhost1,iface=/tmp/vhost-user2,queues=2,client=1,iommu-support=1'  \
	-d /usr/lib64/librte_net_vhost.so  \
	-- \
	--portmask=f \
	-i \
	--rxd=512 --txd=512 \
	--rxq=2 --txq=2 \
	--nb-cores=8 \
	--forward-mode=io
EAL: Detected 24 lcore(s)
EAL: Detected 2 NUMA nodes
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'VA'
EAL: No available hugepages reported in hugepages-2048kB
EAL: Probing VFIO support...
EAL: Probe PCI driver: mlx5_pci (15b3:101d) device: 0000:3b:00.0 (socket 0)
mlx5_pci: Size 0xFFFF is not power of 2, will be aligned to 0x10000.
EAL: Probe PCI driver: mlx5_pci (15b3:101d) device: 0000:3b:00.1 (socket 0)
mlx5_pci: Size 0xFFFF is not power of 2, will be aligned to 0x10000.
EAL: No legacy callbacks, legacy socket not created
Interactive-mode selected
Set io packet forwarding mode
testpmd: create a new mbuf pool <mb_pool_0>: n=211456, size=2176, socket=0
testpmd: preferred mempool ops selected: ring_mp_mc
Configuring Port 0 (socket 0)
Port 0: 0C:42:A1:D1:D0:24
Configuring Port 1 (socket 0)
Port 1: 0C:42:A1:D1:D0:25
Configuring Port 2 (socket 0)
VHOST_CONFIG: vhost-user client: socket created, fd: 62
VHOST_CONFIG: failed to connect to /tmp/vhost-user1: No such file or directory
VHOST_CONFIG: /tmp/vhost-user1: reconnecting...
Port 2: 56:48:4F:53:54:02
Configuring Port 3 (socket 0)
VHOST_CONFIG: vhost-user client: socket created, fd: 65
VHOST_CONFIG: failed to connect to /tmp/vhost-user2: No such file or directory
VHOST_CONFIG: /tmp/vhost-user2: reconnecting...
Port 3: 56:48:4F:53:54:03
Checking link statuses...
Done


testpmd> set portlist 0,2,1,3
testpmd> start 
io packet forwarding - ports=4 - cores=8 - streams=8 - NUMA support enabled, MP allocation mode: native
Logical Core 4 (socket 0) forwards packets on 1 streams:
  RX P=0/Q=0 (socket 0) -> TX P=2/Q=0 (socket 0) peer=02:00:00:00:00:02
Logical Core 6 (socket 0) forwards packets on 1 streams:
  RX P=2/Q=0 (socket 0) -> TX P=0/Q=0 (socket 0) peer=02:00:00:00:00:00
Logical Core 8 (socket 0) forwards packets on 1 streams:
  RX P=1/Q=0 (socket 0) -> TX P=3/Q=0 (socket 0) peer=02:00:00:00:00:03
Logical Core 10 (socket 0) forwards packets on 1 streams:
  RX P=3/Q=0 (socket 0) -> TX P=1/Q=0 (socket 0) peer=02:00:00:00:00:01
Logical Core 12 (socket 0) forwards packets on 1 streams:
  RX P=0/Q=1 (socket 0) -> TX P=2/Q=1 (socket 0) peer=02:00:00:00:00:02
Logical Core 14 (socket 0) forwards packets on 1 streams:
  RX P=2/Q=1 (socket 0) -> TX P=0/Q=1 (socket 0) peer=02:00:00:00:00:00
Logical Core 16 (socket 0) forwards packets on 1 streams:
  RX P=1/Q=1 (socket 0) -> TX P=3/Q=1 (socket 0) peer=02:00:00:00:00:03
Logical Core 18 (socket 0) forwards packets on 1 streams:
  RX P=3/Q=1 (socket 0) -> TX P=1/Q=1 (socket 0) peer=02:00:00:00:00:01

  io packet forwarding packets/burst=32
  nb forwarding cores=8 - nb forwarding ports=4
  port 0: RX queue number: 2 Tx queue number: 2
    Rx offloads=0x0 Tx offloads=0x0
    RX queue: 0
      RX desc=512 - RX free threshold=64
      RX threshold registers: pthresh=0 hthresh=0  wthresh=0
      RX Offloads=0x0
    TX queue: 0
      TX desc=512 - TX free threshold=0
      TX threshold registers: pthresh=0 hthresh=0  wthresh=0
      TX offloads=0x0 - TX RS bit threshold=0
  port 1: RX queue number: 2 Tx queue number: 2
    Rx offloads=0x0 Tx offloads=0x0
    RX queue: 0
      RX desc=512 - RX free threshold=64
      RX threshold registers: pthresh=0 hthresh=0  wthresh=0
      RX Offloads=0x0
    TX queue: 0
      TX desc=512 - TX free threshold=0
      TX threshold registers: pthresh=0 hthresh=0  wthresh=0
      TX offloads=0x0 - TX RS bit threshold=0
  port 2: RX queue number: 2 Tx queue number: 2
    Rx offloads=0x0 Tx offloads=0x0
    RX queue: 0
      RX desc=512 - RX free threshold=0
      RX threshold registers: pthresh=0 hthresh=0  wthresh=0
      RX Offloads=0x0
    TX queue: 0
      TX desc=512 - TX free threshold=0
      TX threshold registers: pthresh=0 hthresh=0  wthresh=0
      TX offloads=0x0 - TX RS bit threshold=0
  port 3: RX queue number: 2 Tx queue number: 2
    Rx offloads=0x0 Tx offloads=0x0
    RX queue: 0
      RX desc=512 - RX free threshold=0
      RX threshold registers: pthresh=0 hthresh=0  wthresh=0
      RX Offloads=0x0
    TX queue: 0
      TX desc=512 - TX free threshold=0
      TX threshold registers: pthresh=0 hthresh=0  wthresh=0
      TX offloads=0x0 - TX RS bit threshold=0
testpmd> VHOST_CONFIG: /tmp/vhost-user1: connected
VHOST_CONFIG: new device, handle is 0
VHOST_CONFIG: /tmp/vhost-user2: connected
VHOST_CONFIG: new device, handle is 1
VHOST_CONFIG: read message VHOST_USER_GET_FEATURES
VHOST_CONFIG: read message VHOST_USER_GET_PROTOCOL_FEATURES
VHOST_CONFIG: read message VHOST_USER_SET_PROTOCOL_FEATURES
VHOST_CONFIG: negotiated Vhost-user protocol features: 0xcbf
VHOST_CONFIG: read message VHOST_USER_GET_QUEUE_NUM
VHOST_CONFIG: read message VHOST_USER_SET_SLAVE_REQ_FD
VHOST_CONFIG: read message VHOST_USER_SET_OWNER
VHOST_CONFIG: read message VHOST_USER_GET_FEATURES
VHOST_CONFIG: read message VHOST_USER_SET_VRING_CALL
VHOST_CONFIG: vring call idx:0 file:67
VHOST_CONFIG: read message VHOST_USER_SET_VRING_CALL
VHOST_CONFIG: vring call idx:1 file:68
VHOST_CONFIG: read message VHOST_USER_GET_FEATURES
VHOST_CONFIG: read message VHOST_USER_GET_PROTOCOL_FEATURES
VHOST_CONFIG: read message VHOST_USER_SET_PROTOCOL_FEATURES
VHOST_CONFIG: negotiated Vhost-user protocol features: 0xcbf
VHOST_CONFIG: read message VHOST_USER_GET_FEATURES
VHOST_CONFIG: read message VHOST_USER_SET_VRING_CALL
VHOST_CONFIG: vring call idx:2 file:69
VHOST_CONFIG: read message VHOST_USER_SET_VRING_CALL
VHOST_CONFIG: vring call idx:3 file:70
VHOST_CONFIG: read message VHOST_USER_GET_FEATURES
VHOST_CONFIG: read message VHOST_USER_GET_PROTOCOL_FEATURES
VHOST_CONFIG: read message VHOST_USER_SET_PROTOCOL_FEATURES
VHOST_CONFIG: negotiated Vhost-user protocol features: 0xcbf
VHOST_CONFIG: read message VHOST_USER_GET_QUEUE_NUM
VHOST_CONFIG: read message VHOST_USER_SET_SLAVE_REQ_FD
VHOST_CONFIG: read message VHOST_USER_SET_OWNER
VHOST_CONFIG: read message VHOST_USER_GET_FEATURES
VHOST_CONFIG: read message VHOST_USER_SET_VRING_CALL
VHOST_CONFIG: vring call idx:0 file:72
VHOST_CONFIG: read message VHOST_USER_SET_VRING_CALL
VHOST_CONFIG: vring call idx:1 file:73
VHOST_CONFIG: read message VHOST_USER_GET_FEATURES
VHOST_CONFIG: read message VHOST_USER_GET_PROTOCOL_FEATURES
VHOST_CONFIG: read message VHOST_USER_SET_PROTOCOL_FEATURES
VHOST_CONFIG: negotiated Vhost-user protocol features: 0xcbf
VHOST_CONFIG: read message VHOST_USER_GET_FEATURES
VHOST_CONFIG: read message VHOST_USER_SET_VRING_CALL
VHOST_CONFIG: vring call idx:2 file:74
VHOST_CONFIG: read message VHOST_USER_SET_VRING_CALL
VHOST_CONFIG: vring call idx:3 file:75
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ENABLE
VHOST_CONFIG: set queue enable: 1 to qp idx: 0
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ENABLE
VHOST_CONFIG: set queue enable: 1 to qp idx: 1
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ENABLE
VHOST_CONFIG: set queue enable: 0 to qp idx: 2
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ENABLE
VHOST_CONFIG: set queue enable: 0 to qp idx: 3
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ENABLE
VHOST_CONFIG: set queue enable: 1 to qp idx: 0
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ENABLE
VHOST_CONFIG: set queue enable: 1 to qp idx: 1
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ENABLE
VHOST_CONFIG: set queue enable: 0 to qp idx: 2
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ENABLE
VHOST_CONFIG: set queue enable: 0 to qp idx: 3
VHOST_CONFIG: read message VHOST_USER_SET_FEATURES
VHOST_CONFIG: negotiated Virtio features: 0x37060ff83
VHOST_CONFIG: read message VHOST_USER_SET_MEM_TABLE
VHOST_CONFIG: guest memory region 0, size: 0x80000000
	 guest physical addr: 0x0
	 guest virtual  addr: 0x7f09c0000000
	 host  virtual  addr: 0x7f6040000000
	 mmap addr : 0x7f6040000000
	 mmap size : 0x80000000
	 mmap align: 0x40000000
	 mmap off  : 0x0
VHOST_CONFIG: guest memory region 1, size: 0x180000000
	 guest physical addr: 0x100000000
	 guest virtual  addr: 0x7f0a40000000
	 host  virtual  addr: 0x7f5ec0000000
	 mmap addr : 0x7f5e40000000
	 mmap size : 0x200000000
	 mmap align: 0x40000000
	 mmap off  : 0x80000000
VHOST_CONFIG: read message VHOST_USER_SET_VRING_NUM
VHOST_CONFIG: read message VHOST_USER_SET_VRING_BASE
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ADDR
VHOST_CONFIG: read message VHOST_USER_SET_VRING_KICK
VHOST_CONFIG: vring kick idx:0 file:78
VHOST_CONFIG: read message VHOST_USER_SET_VRING_CALL
VHOST_CONFIG: vring call idx:0 file:79
VHOST_CONFIG: read message VHOST_USER_SET_VRING_NUM
VHOST_CONFIG: read message VHOST_USER_SET_VRING_BASE
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ADDR
VHOST_CONFIG: read message VHOST_USER_SET_VRING_KICK
VHOST_CONFIG: vring kick idx:1 file:67
VHOST_CONFIG: read message VHOST_USER_SET_VRING_CALL
VHOST_CONFIG: vring call idx:1 file:80
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ENABLE
VHOST_CONFIG: set queue enable: 1 to qp idx: 0
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ENABLE
VHOST_CONFIG: set queue enable: 1 to qp idx: 1
VHOST_CONFIG: read message VHOST_USER_SET_FEATURES
VHOST_CONFIG: negotiated Virtio features: 0x37060ff83
VHOST_CONFIG: read message VHOST_USER_SET_VRING_NUM
VHOST_CONFIG: read message VHOST_USER_SET_VRING_BASE
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ADDR
VHOST_CONFIG: read message VHOST_USER_SET_VRING_KICK
VHOST_CONFIG: vring kick idx:2 file:68
VHOST_CONFIG: read message VHOST_USER_SET_VRING_CALL
VHOST_CONFIG: vring call idx:2 file:81
VHOST_CONFIG: read message VHOST_USER_SET_VRING_NUM
VHOST_CONFIG: read message VHOST_USER_SET_VRING_BASE
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ADDR
VHOST_CONFIG: read message VHOST_USER_SET_VRING_KICK
VHOST_CONFIG: vring kick idx:3 file:69
VHOST_CONFIG: read message VHOST_USER_SET_VRING_CALL
VHOST_CONFIG: vring call idx:3 file:82

Port 2: queue state event

Port 2: queue state event
VHOST_CONFIG: virtio is now ready for processing.

Port 2: link state change event
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ENABLE
VHOST_CONFIG: set queue enable: 1 to qp idx: 0
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ENABLE
VHOST_CONFIG: set queue enable: 1 to qp idx: 1
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ENABLE
VHOST_CONFIG: set queue enable: 1 to qp idx: 2

Port 2: queue state event
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ENABLE
VHOST_CONFIG: set queue enable: 1 to qp idx: 3

Port 2: queue state event
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ENABLE
VHOST_CONFIG: set queue enable: 1 to qp idx: 0
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ENABLE
VHOST_CONFIG: set queue enable: 1 to qp idx: 1
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ENABLE
VHOST_CONFIG: set queue enable: 0 to qp idx: 2
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ENABLE
VHOST_CONFIG: set queue enable: 0 to qp idx: 3
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ENABLE
VHOST_CONFIG: set queue enable: 1 to qp idx: 0
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ENABLE
VHOST_CONFIG: set queue enable: 1 to qp idx: 1
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ENABLE
VHOST_CONFIG: set queue enable: 0 to qp idx: 2
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ENABLE
VHOST_CONFIG: set queue enable: 0 to qp idx: 3
VHOST_CONFIG: read message VHOST_USER_SET_FEATURES
VHOST_CONFIG: negotiated Virtio features: 0x37060ff83
VHOST_CONFIG: read message VHOST_USER_SET_MEM_TABLE
VHOST_CONFIG: guest memory region 0, size: 0x80000000
	 guest physical addr: 0x0
	 guest virtual  addr: 0x7f09c0000000
	 host  virtual  addr: 0x7f5dc0000000
	 mmap addr : 0x7f5dc0000000
	 mmap size : 0x80000000
	 mmap align: 0x40000000
	 mmap off  : 0x0
VHOST_CONFIG: guest memory region 1, size: 0x180000000
	 guest physical addr: 0x100000000
	 guest virtual  addr: 0x7f0a40000000
	 host  virtual  addr: 0x7f5c40000000
	 mmap addr : 0x7f5bc0000000
	 mmap size : 0x200000000
	 mmap align: 0x40000000
	 mmap off  : 0x80000000
VHOST_CONFIG: read message VHOST_USER_SET_VRING_NUM
VHOST_CONFIG: read message VHOST_USER_SET_VRING_BASE
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ADDR
VHOST_CONFIG: read message VHOST_USER_SET_VRING_KICK
VHOST_CONFIG: vring kick idx:0 file:84
VHOST_CONFIG: read message VHOST_USER_SET_VRING_CALL
VHOST_CONFIG: vring call idx:0 file:85
VHOST_CONFIG: read message VHOST_USER_SET_VRING_NUM
VHOST_CONFIG: read message VHOST_USER_SET_VRING_BASE
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ADDR
VHOST_CONFIG: read message VHOST_USER_SET_VRING_KICK
VHOST_CONFIG: vring kick idx:1 file:72
VHOST_CONFIG: read message VHOST_USER_SET_VRING_CALL
VHOST_CONFIG: vring call idx:1 file:86
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ENABLE
VHOST_CONFIG: set queue enable: 1 to qp idx: 0
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ENABLE
VHOST_CONFIG: set queue enable: 1 to qp idx: 1
VHOST_CONFIG: read message VHOST_USER_SET_FEATURES
VHOST_CONFIG: negotiated Virtio features: 0x37060ff83
VHOST_CONFIG: read message VHOST_USER_SET_VRING_NUM
VHOST_CONFIG: read message VHOST_USER_SET_VRING_BASE
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ADDR
VHOST_CONFIG: read message VHOST_USER_SET_VRING_KICK
VHOST_CONFIG: vring kick idx:2 file:73
VHOST_CONFIG: read message VHOST_USER_SET_VRING_CALL
VHOST_CONFIG: vring call idx:2 file:87
VHOST_CONFIG: read message VHOST_USER_SET_VRING_NUM
VHOST_CONFIG: read message VHOST_USER_SET_VRING_BASE
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ADDR
VHOST_CONFIG: read message VHOST_USER_SET_VRING_KICK
VHOST_CONFIG: vring kick idx:3 file:74
VHOST_CONFIG: read message VHOST_USER_SET_VRING_CALL
VHOST_CONFIG: vring call idx:3 file:88
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ENABLE
VHOST_CONFIG: set queue enable: 1 to qp idx: 0
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ENABLE
VHOST_CONFIG: set queue enable: 1 to qp idx: 1
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ENABLE
VHOST_CONFIG: set queue enable: 1 to qp idx: 2
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ENABLE
VHOST_CONFIG: set queue enable: 1 to qp idx: 3

Port 3: queue state event

Port 3: queue state event

Port 3: queue state event
VHOST_CONFIG: virtio is now ready for processing.

Port 3: link state change event

Port 3: queue state event


4. Start VM with vhost-user

    <interface type='vhostuser'>
      <mac address='18:66:da:5f:dd:22'/>
      <source type='unix' path='/tmp/vhost-user1' mode='server'/>
      <model type='virtio'/>
      <driver name='vhost' queues='2' rx_queue_size='1024' iommu='on' ats='on'/>
      <alias name='net1'/>
      <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
    </interface>
    <interface type='vhostuser'>
      <mac address='18:66:da:5f:dd:23'/>
      <source type='unix' path='/tmp/vhost-user2' mode='server'/>
      <model type='virtio'/>
      <driver name='vhost' queues='2' rx_queue_size='1024' iommu='on' ats='on'/>
      <alias name='net2'/>
      <address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
    </interface>

5. In the VM, assign a temporary IP to the vhost-user NIC
# ifconfig enp6s0 192.168.1.2/24

6. On another host, connected back-to-back, assign a temporary IP to the NIC
# ifconfig enp6s0f0 192.168.1.1/24

7. Ping through the vhost-user NIC works well.
# ping 192.168.1.1
PING 192.168.1.1 (192.168.1.1) 56(84) bytes of data.
64 bytes from 192.168.1.1: icmp_seq=1 ttl=64 time=0.187 ms

Comment 9 Pei Zhang 2021-03-02 03:22:18 UTC
As shown in Comment 8, this issue cannot be reproduced with a physical vDPA NIC. Also, in the previous vDPA meeting we agreed that customers are unlikely to use the emulated vDPA NIC, so I am closing this bz as WONTFIX.

