Bug 1861244
| Field | Value |
|---|---|
| Summary: | Booting qemu with vhost-user enabling vIOMMU over virtio-net vDPA fails |
| Product: | Red Hat Enterprise Linux Advanced Virtualization |
| Component: | qemu-kvm |
| qemu-kvm sub component: | Networking |
| Status: | CLOSED WONTFIX |
| Severity: | high |
| Priority: | high |
| Version: | 8.3 |
| Target Milestone: | rc |
| Target Release: | 8.4 |
| Hardware: | Unspecified |
| OS: | Unspecified |
| Reporter: | Pei Zhang <pezhang> |
| Assignee: | lulu <lulu> |
| QA Contact: | Pei Zhang <pezhang> |
| CC: | aadam, chayang, jinzhao, juzhang, lulu, maxime.coquelin, virt-maint |
| Keywords: | Triaged |
| Flags: | pm-rhel: mirror+ |
| Doc Type: | If docs needed, set a value |
| Last Closed: | 2021-03-02 03:22:18 UTC |
| Type: | Bug |
| Bug Blocks: | 1897025 |
Description Pei Zhang 2020-07-28 07:04:31 UTC
Created attachment 1702614 [details]
L2 guest full XML
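For reference, the failing L2 guest setup corresponds roughly to QEMU options along the lines below. This is only a minimal sketch assuming typical vIOMMU plus vhost-user settings (split irqchip, intel-iommu with device IOTLB, virtio-net with iommu_platform and ats); the chardev id and socket path are illustrative, and the authoritative configuration is the attached L2 guest XML.

-machine q35,kernel-irqchip=split \
-device intel-iommu,intremap=on,caching-mode=on,device-iotlb=on \
-chardev socket,id=charnet1,path=/tmp/vhostuser0.sock,server=on \
-netdev vhost-user,chardev=charnet1,queues=1,id=hostnet1 \
-device virtio-net-pci,netdev=hostnet1,iommu_platform=on,ats=on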
Update: After removing iommu support from the L2 guest, this issue is gone and the L2 guest boots well.

This should not be a nested vIOMMU issue. It might be a vDPA driver + vhost-user + iommu issue.

Changing step 5 above to create the vhost-user sockets with iommu support enabled, as below, allows the L2 guest with iommu to boot successfully.

In the L1 guest, boot testpmd:

/usr/bin/testpmd \
-l 1,2,3 \
--socket-mem 1024 \
-n 4 \
-d /usr/lib64/librte_pmd_vhost.so \
--vdev 'net_vhost0,iface=/tmp/vhostuser0.sock,queues=1,client=1,iommu-support=1' \
--vdev 'net_vhost1,iface=/tmp/vhostuser1.sock,queues=1,client=1,iommu-support=1' \
-- \
-i \
--rxd=512 --txd=512 \
--rxq=1 --txq=1 \
--nb-cores=2 \
--forward-mode=io

In the L1 guest, boot the L2 guest (same as step 6); the L2 guest boots successfully.

(In reply to Pei Zhang from comment #3)
> Changing step 5 above to create the vhost-user sockets with iommu support
> enabled, as below, allows the L2 guest with iommu to boot successfully.
> [...]
> In the L1 guest, boot the L2 guest (same as step 6); the L2 guest boots
> successfully.

Hi Pei, I just want to confirm: in this part, do you run testpmd and boot the L2 guest at the same time? And do they both use the same sockets, /tmp/vhostuser1.sock and /tmp/vhostuser2.sock?

(In reply to lulu from comment #4)
> Hi Pei, I just want to confirm: in this part, do you run testpmd and boot
> the L2 guest at the same time?

Hi Cindy,

I run testpmd first, then boot the L2 guest. After the L2 guest boots up, the vhost-user sockets are connected between them.

> And do they both use the same sockets, /tmp/vhostuser1.sock and
> /tmp/vhostuser2.sock?

Yes, they are using the same sockets, /tmp/vhostuser1.sock and /tmp/vhostuser2.sock.

Thanks.

Best regards,
Pei

After discussing with Pei, we plan to move this to AV8.4.

Booting vhost-user with a physical vDPA NIC works well.
vDPA NIC:
3b:00.0 Ethernet controller: Mellanox Technologies MT2892 Family [ConnectX-6 Dx]
Versions:
4.18.0-291.el8.x86_64
qemu-kvm-5.2.0-9.module+el8.4.0+10182+4161bd91.x86_64
libvirt-7.0.0-6.module+el8.4.0+10144+c3d3c217.x86_64
Steps:
1. Check that the mlx5 modules are loaded
# lsmod | grep mlx5
mlx5_ib 372736 2
ib_uverbs 159744 9 i40iw,rdma_ucm,mlx5_ib
ib_core 385024 15 rdma_cm,ib_ipoib,rpcrdma,ib_srpt,iw_cm,ib_iser,ib_umad,ib_isert,i40iw,rdma_ucm,ib_uverbs,mlx5_ib,ib_cm
mlx5_core 1208320 1 mlx5_ib
mlxfw 28672 1 mlx5_core
pci_hyperv_intf 16384 1 mlx5_core
tls 102400 1 mlx5_core
2. The vDPA NIC should be using the mlx5_core driver
# dpdk-devbind.py --status
Network devices using kernel driver
===================================
0000:3b:00.0 'MT2892 Family [ConnectX-6 Dx] 101d' if=ens1f0 drv=mlx5_core unused=
0000:3b:00.1 'MT2892 Family [ConnectX-6 Dx] 101d' if=ens1f1 drv=mlx5_core unused=
3. Boot DPDK's testpmd
dpdk-testpmd \
-l 2,4,6,8,10,12,14,16,18 \
--socket-mem 1024,1024 \
-n 4 \
--vdev 'net_vhost0,iface=/tmp/vhost-user1,queues=2,client=1,iommu-support=1' \
--vdev 'net_vhost1,iface=/tmp/vhost-user2,queues=2,client=1,iommu-support=1' \
-d /usr/lib64/librte_net_vhost.so \
-- \
--portmask=f \
-i \
--rxd=512 --txd=512 \
--rxq=2 --txq=2 \
--nb-cores=8 \
--forward-mode=io
EAL: Detected 24 lcore(s)
EAL: Detected 2 NUMA nodes
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'VA'
EAL: No available hugepages reported in hugepages-2048kB
EAL: Probing VFIO support...
EAL: Probe PCI driver: mlx5_pci (15b3:101d) device: 0000:3b:00.0 (socket 0)
mlx5_pci: Size 0xFFFF is not power of 2, will be aligned to 0x10000.
EAL: Probe PCI driver: mlx5_pci (15b3:101d) device: 0000:3b:00.1 (socket 0)
mlx5_pci: Size 0xFFFF is not power of 2, will be aligned to 0x10000.
EAL: No legacy callbacks, legacy socket not created
Interactive-mode selected
Set io packet forwarding mode
testpmd: create a new mbuf pool <mb_pool_0>: n=211456, size=2176, socket=0
testpmd: preferred mempool ops selected: ring_mp_mc
Configuring Port 0 (socket 0)
Port 0: 0C:42:A1:D1:D0:24
Configuring Port 1 (socket 0)
Port 1: 0C:42:A1:D1:D0:25
Configuring Port 2 (socket 0)
VHOST_CONFIG: vhost-user client: socket created, fd: 62
VHOST_CONFIG: failed to connect to /tmp/vhost-user1: No such file or directory
VHOST_CONFIG: /tmp/vhost-user1: reconnecting...
Port 2: 56:48:4F:53:54:02
Configuring Port 3 (socket 0)
VHOST_CONFIG: vhost-user client: socket created, fd: 65
VHOST_CONFIG: failed to connect to /tmp/vhost-user2: No such file or directory
VHOST_CONFIG: /tmp/vhost-user2: reconnecting...
Port 3: 56:48:4F:53:54:03
Checking link statuses...
Done
testpmd> set portlist 0,2,1,3
testpmd> start
io packet forwarding - ports=4 - cores=8 - streams=8 - NUMA support enabled, MP allocation mode: native
Logical Core 4 (socket 0) forwards packets on 1 streams:
RX P=0/Q=0 (socket 0) -> TX P=2/Q=0 (socket 0) peer=02:00:00:00:00:02
Logical Core 6 (socket 0) forwards packets on 1 streams:
RX P=2/Q=0 (socket 0) -> TX P=0/Q=0 (socket 0) peer=02:00:00:00:00:00
Logical Core 8 (socket 0) forwards packets on 1 streams:
RX P=1/Q=0 (socket 0) -> TX P=3/Q=0 (socket 0) peer=02:00:00:00:00:03
Logical Core 10 (socket 0) forwards packets on 1 streams:
RX P=3/Q=0 (socket 0) -> TX P=1/Q=0 (socket 0) peer=02:00:00:00:00:01
Logical Core 12 (socket 0) forwards packets on 1 streams:
RX P=0/Q=1 (socket 0) -> TX P=2/Q=1 (socket 0) peer=02:00:00:00:00:02
Logical Core 14 (socket 0) forwards packets on 1 streams:
RX P=2/Q=1 (socket 0) -> TX P=0/Q=1 (socket 0) peer=02:00:00:00:00:00
Logical Core 16 (socket 0) forwards packets on 1 streams:
RX P=1/Q=1 (socket 0) -> TX P=3/Q=1 (socket 0) peer=02:00:00:00:00:03
Logical Core 18 (socket 0) forwards packets on 1 streams:
RX P=3/Q=1 (socket 0) -> TX P=1/Q=1 (socket 0) peer=02:00:00:00:00:01
io packet forwarding packets/burst=32
nb forwarding cores=8 - nb forwarding ports=4
port 0: RX queue number: 2 Tx queue number: 2
Rx offloads=0x0 Tx offloads=0x0
RX queue: 0
RX desc=512 - RX free threshold=64
RX threshold registers: pthresh=0 hthresh=0 wthresh=0
RX Offloads=0x0
TX queue: 0
TX desc=512 - TX free threshold=0
TX threshold registers: pthresh=0 hthresh=0 wthresh=0
TX offloads=0x0 - TX RS bit threshold=0
port 1: RX queue number: 2 Tx queue number: 2
Rx offloads=0x0 Tx offloads=0x0
RX queue: 0
RX desc=512 - RX free threshold=64
RX threshold registers: pthresh=0 hthresh=0 wthresh=0
RX Offloads=0x0
TX queue: 0
TX desc=512 - TX free threshold=0
TX threshold registers: pthresh=0 hthresh=0 wthresh=0
TX offloads=0x0 - TX RS bit threshold=0
port 2: RX queue number: 2 Tx queue number: 2
Rx offloads=0x0 Tx offloads=0x0
RX queue: 0
RX desc=512 - RX free threshold=0
RX threshold registers: pthresh=0 hthresh=0 wthresh=0
RX Offloads=0x0
TX queue: 0
TX desc=512 - TX free threshold=0
TX threshold registers: pthresh=0 hthresh=0 wthresh=0
TX offloads=0x0 - TX RS bit threshold=0
port 3: RX queue number: 2 Tx queue number: 2
Rx offloads=0x0 Tx offloads=0x0
RX queue: 0
RX desc=512 - RX free threshold=0
RX threshold registers: pthresh=0 hthresh=0 wthresh=0
RX Offloads=0x0
TX queue: 0
TX desc=512 - TX free threshold=0
TX threshold registers: pthresh=0 hthresh=0 wthresh=0
TX offloads=0x0 - TX RS bit threshold=0
testpmd> VHOST_CONFIG: /tmp/vhost-user1: connected
VHOST_CONFIG: new device, handle is 0
VHOST_CONFIG: /tmp/vhost-user2: connected
VHOST_CONFIG: new device, handle is 1
VHOST_CONFIG: read message VHOST_USER_GET_FEATURES
VHOST_CONFIG: read message VHOST_USER_GET_PROTOCOL_FEATURES
VHOST_CONFIG: read message VHOST_USER_SET_PROTOCOL_FEATURES
VHOST_CONFIG: negotiated Vhost-user protocol features: 0xcbf
VHOST_CONFIG: read message VHOST_USER_GET_QUEUE_NUM
VHOST_CONFIG: read message VHOST_USER_SET_SLAVE_REQ_FD
VHOST_CONFIG: read message VHOST_USER_SET_OWNER
VHOST_CONFIG: read message VHOST_USER_GET_FEATURES
VHOST_CONFIG: read message VHOST_USER_SET_VRING_CALL
VHOST_CONFIG: vring call idx:0 file:67
VHOST_CONFIG: read message VHOST_USER_SET_VRING_CALL
VHOST_CONFIG: vring call idx:1 file:68
VHOST_CONFIG: read message VHOST_USER_GET_FEATURES
VHOST_CONFIG: read message VHOST_USER_GET_PROTOCOL_FEATURES
VHOST_CONFIG: read message VHOST_USER_SET_PROTOCOL_FEATURES
VHOST_CONFIG: negotiated Vhost-user protocol features: 0xcbf
VHOST_CONFIG: read message VHOST_USER_GET_FEATURES
VHOST_CONFIG: read message VHOST_USER_SET_VRING_CALL
VHOST_CONFIG: vring call idx:2 file:69
VHOST_CONFIG: read message VHOST_USER_SET_VRING_CALL
VHOST_CONFIG: vring call idx:3 file:70
VHOST_CONFIG: read message VHOST_USER_GET_FEATURES
VHOST_CONFIG: read message VHOST_USER_GET_PROTOCOL_FEATURES
VHOST_CONFIG: read message VHOST_USER_SET_PROTOCOL_FEATURES
VHOST_CONFIG: negotiated Vhost-user protocol features: 0xcbf
VHOST_CONFIG: read message VHOST_USER_GET_QUEUE_NUM
VHOST_CONFIG: read message VHOST_USER_SET_SLAVE_REQ_FD
VHOST_CONFIG: read message VHOST_USER_SET_OWNER
VHOST_CONFIG: read message VHOST_USER_GET_FEATURES
VHOST_CONFIG: read message VHOST_USER_SET_VRING_CALL
VHOST_CONFIG: vring call idx:0 file:72
VHOST_CONFIG: read message VHOST_USER_SET_VRING_CALL
VHOST_CONFIG: vring call idx:1 file:73
VHOST_CONFIG: read message VHOST_USER_GET_FEATURES
VHOST_CONFIG: read message VHOST_USER_GET_PROTOCOL_FEATURES
VHOST_CONFIG: read message VHOST_USER_SET_PROTOCOL_FEATURES
VHOST_CONFIG: negotiated Vhost-user protocol features: 0xcbf
VHOST_CONFIG: read message VHOST_USER_GET_FEATURES
VHOST_CONFIG: read message VHOST_USER_SET_VRING_CALL
VHOST_CONFIG: vring call idx:2 file:74
VHOST_CONFIG: read message VHOST_USER_SET_VRING_CALL
VHOST_CONFIG: vring call idx:3 file:75
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ENABLE
VHOST_CONFIG: set queue enable: 1 to qp idx: 0
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ENABLE
VHOST_CONFIG: set queue enable: 1 to qp idx: 1
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ENABLE
VHOST_CONFIG: set queue enable: 0 to qp idx: 2
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ENABLE
VHOST_CONFIG: set queue enable: 0 to qp idx: 3
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ENABLE
VHOST_CONFIG: set queue enable: 1 to qp idx: 0
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ENABLE
VHOST_CONFIG: set queue enable: 1 to qp idx: 1
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ENABLE
VHOST_CONFIG: set queue enable: 0 to qp idx: 2
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ENABLE
VHOST_CONFIG: set queue enable: 0 to qp idx: 3
VHOST_CONFIG: read message VHOST_USER_SET_FEATURES
VHOST_CONFIG: negotiated Virtio features: 0x37060ff83
VHOST_CONFIG: read message VHOST_USER_SET_MEM_TABLE
VHOST_CONFIG: guest memory region 0, size: 0x80000000
guest physical addr: 0x0
guest virtual addr: 0x7f09c0000000
host virtual addr: 0x7f6040000000
mmap addr : 0x7f6040000000
mmap size : 0x80000000
mmap align: 0x40000000
mmap off : 0x0
VHOST_CONFIG: guest memory region 1, size: 0x180000000
guest physical addr: 0x100000000
guest virtual addr: 0x7f0a40000000
host virtual addr: 0x7f5ec0000000
mmap addr : 0x7f5e40000000
mmap size : 0x200000000
mmap align: 0x40000000
mmap off : 0x80000000
VHOST_CONFIG: read message VHOST_USER_SET_VRING_NUM
VHOST_CONFIG: read message VHOST_USER_SET_VRING_BASE
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ADDR
VHOST_CONFIG: read message VHOST_USER_SET_VRING_KICK
VHOST_CONFIG: vring kick idx:0 file:78
VHOST_CONFIG: read message VHOST_USER_SET_VRING_CALL
VHOST_CONFIG: vring call idx:0 file:79
VHOST_CONFIG: read message VHOST_USER_SET_VRING_NUM
VHOST_CONFIG: read message VHOST_USER_SET_VRING_BASE
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ADDR
VHOST_CONFIG: read message VHOST_USER_SET_VRING_KICK
VHOST_CONFIG: vring kick idx:1 file:67
VHOST_CONFIG: read message VHOST_USER_SET_VRING_CALL
VHOST_CONFIG: vring call idx:1 file:80
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ENABLE
VHOST_CONFIG: set queue enable: 1 to qp idx: 0
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ENABLE
VHOST_CONFIG: set queue enable: 1 to qp idx: 1
VHOST_CONFIG: read message VHOST_USER_SET_FEATURES
VHOST_CONFIG: negotiated Virtio features: 0x37060ff83
VHOST_CONFIG: read message VHOST_USER_SET_VRING_NUM
VHOST_CONFIG: read message VHOST_USER_SET_VRING_BASE
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ADDR
VHOST_CONFIG: read message VHOST_USER_SET_VRING_KICK
VHOST_CONFIG: vring kick idx:2 file:68
VHOST_CONFIG: read message VHOST_USER_SET_VRING_CALL
VHOST_CONFIG: vring call idx:2 file:81
VHOST_CONFIG: read message VHOST_USER_SET_VRING_NUM
VHOST_CONFIG: read message VHOST_USER_SET_VRING_BASE
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ADDR
VHOST_CONFIG: read message VHOST_USER_SET_VRING_KICK
VHOST_CONFIG: vring kick idx:3 file:69
VHOST_CONFIG: read message VHOST_USER_SET_VRING_CALL
VHOST_CONFIG: vring call idx:3 file:82
Port 2: queue state event
Port 2: queue state event
VHOST_CONFIG: virtio is now ready for processing.
Port 2: link state change event
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ENABLE
VHOST_CONFIG: set queue enable: 1 to qp idx: 0
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ENABLE
VHOST_CONFIG: set queue enable: 1 to qp idx: 1
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ENABLE
VHOST_CONFIG: set queue enable: 1 to qp idx: 2
Port 2: queue state event
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ENABLE
VHOST_CONFIG: set queue enable: 1 to qp idx: 3
Port 2: queue state event
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ENABLE
VHOST_CONFIG: set queue enable: 1 to qp idx: 0
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ENABLE
VHOST_CONFIG: set queue enable: 1 to qp idx: 1
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ENABLE
VHOST_CONFIG: set queue enable: 0 to qp idx: 2
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ENABLE
VHOST_CONFIG: set queue enable: 0 to qp idx: 3
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ENABLE
VHOST_CONFIG: set queue enable: 1 to qp idx: 0
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ENABLE
VHOST_CONFIG: set queue enable: 1 to qp idx: 1
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ENABLE
VHOST_CONFIG: set queue enable: 0 to qp idx: 2
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ENABLE
VHOST_CONFIG: set queue enable: 0 to qp idx: 3
VHOST_CONFIG: read message VHOST_USER_SET_FEATURES
VHOST_CONFIG: negotiated Virtio features: 0x37060ff83
VHOST_CONFIG: read message VHOST_USER_SET_MEM_TABLE
VHOST_CONFIG: guest memory region 0, size: 0x80000000
guest physical addr: 0x0
guest virtual addr: 0x7f09c0000000
host virtual addr: 0x7f5dc0000000
mmap addr : 0x7f5dc0000000
mmap size : 0x80000000
mmap align: 0x40000000
mmap off : 0x0
VHOST_CONFIG: guest memory region 1, size: 0x180000000
guest physical addr: 0x100000000
guest virtual addr: 0x7f0a40000000
host virtual addr: 0x7f5c40000000
mmap addr : 0x7f5bc0000000
mmap size : 0x200000000
mmap align: 0x40000000
mmap off : 0x80000000
VHOST_CONFIG: read message VHOST_USER_SET_VRING_NUM
VHOST_CONFIG: read message VHOST_USER_SET_VRING_BASE
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ADDR
VHOST_CONFIG: read message VHOST_USER_SET_VRING_KICK
VHOST_CONFIG: vring kick idx:0 file:84
VHOST_CONFIG: read message VHOST_USER_SET_VRING_CALL
VHOST_CONFIG: vring call idx:0 file:85
VHOST_CONFIG: read message VHOST_USER_SET_VRING_NUM
VHOST_CONFIG: read message VHOST_USER_SET_VRING_BASE
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ADDR
VHOST_CONFIG: read message VHOST_USER_SET_VRING_KICK
VHOST_CONFIG: vring kick idx:1 file:72
VHOST_CONFIG: read message VHOST_USER_SET_VRING_CALL
VHOST_CONFIG: vring call idx:1 file:86
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ENABLE
VHOST_CONFIG: set queue enable: 1 to qp idx: 0
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ENABLE
VHOST_CONFIG: set queue enable: 1 to qp idx: 1
VHOST_CONFIG: read message VHOST_USER_SET_FEATURES
VHOST_CONFIG: negotiated Virtio features: 0x37060ff83
VHOST_CONFIG: read message VHOST_USER_SET_VRING_NUM
VHOST_CONFIG: read message VHOST_USER_SET_VRING_BASE
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ADDR
VHOST_CONFIG: read message VHOST_USER_SET_VRING_KICK
VHOST_CONFIG: vring kick idx:2 file:73
VHOST_CONFIG: read message VHOST_USER_SET_VRING_CALL
VHOST_CONFIG: vring call idx:2 file:87
VHOST_CONFIG: read message VHOST_USER_SET_VRING_NUM
VHOST_CONFIG: read message VHOST_USER_SET_VRING_BASE
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ADDR
VHOST_CONFIG: read message VHOST_USER_SET_VRING_KICK
VHOST_CONFIG: vring kick idx:3 file:74
VHOST_CONFIG: read message VHOST_USER_SET_VRING_CALL
VHOST_CONFIG: vring call idx:3 file:88
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ENABLE
VHOST_CONFIG: set queue enable: 1 to qp idx: 0
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ENABLE
VHOST_CONFIG: set queue enable: 1 to qp idx: 1
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ENABLE
VHOST_CONFIG: set queue enable: 1 to qp idx: 2
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ENABLE
VHOST_CONFIG: set queue enable: 1 to qp idx: 3
Port 3: queue state event
Port 3: queue state event
Port 3: queue state event
VHOST_CONFIG: virtio is now ready for processing.
Port 3: link state change event
Port 3: queue state event
4. Start the VM with the vhost-user interfaces
<interface type='vhostuser'>
<mac address='18:66:da:5f:dd:22'/>
<source type='unix' path='/tmp/vhost-user1' mode='server'/>
<model type='virtio'/>
<driver name='vhost' queues='2' rx_queue_size='1024' iommu='on' ats='on'/>
<alias name='net1'/>
<address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
</interface>
<interface type='vhostuser'>
<mac address='18:66:da:5f:dd:23'/>
<source type='unix' path='/tmp/vhost-user2' mode='server'/>
<model type='virtio'/>
<driver name='vhost' queues='2' rx_queue_size='1024' iommu='on' ats='on'/>
<alias name='net2'/>
<address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
</interface>
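The iommu='on' and ats='on' driver attributes above assume the guest also defines a vIOMMU device. A minimal sketch of that definition, following the usual libvirt intel model with interrupt remapping and caching mode (not shown in this step, so treat it as an assumption rather than the exact domain XML used here):

<features>
  <ioapic driver='qemu'/>
</features>
...
<devices>
  <iommu model='intel'>
    <driver intremap='on' caching_mode='on'/>
  </iommu>
</devices>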
5. In the VM, set the vhost-user NIC with a temporary IP
# ifconfig enp6s0 192.168.1.2/24
6. On another host, connected back-to-back, set the NIC with a temporary IP
# ifconfig enp6s0f0 192.168.1.1/24
7. Ping through the vhost-user NIC works well.
# ping 192.168.1.1
PING 192.168.1.1 (192.168.1.1) 56(84) bytes of data.
64 bytes from 192.168.1.1: icmp_seq=1 ttl=64 time=0.187 ms