Bug 1803082
| Summary: | DPDK virtio_user lack of notifications make vhost_net+napi stops tx buffers | | |
|---|---|---|---|
| Product: | Red Hat Enterprise Linux Fast Datapath | Reporter: | Eugenio Pérez Martín <eperezma> |
| Component: | openvswitch2.13 | Assignee: | Eugenio Pérez Martín <eperezma> |
| Status: | CLOSED ERRATA | QA Contact: | Jean-Tsung Hsiao <jhsiao> |
| Severity: | unspecified | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | FDP 20.A | CC: | aadam, ctrautma, dmarchan, eperezma, jhsiao, kfida, maxime.coquelin, qding, ralongi, tredaelli |
| Target Milestone: | --- | | |
| Target Release: | --- | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | openvswitch2.13-2.13.0-18.el8fdp | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2020-05-26 11:23:51 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | 1824825 | | |
| Bug Blocks: | | | |
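The tx starvation reported in the comments below can be sketched with a toy model (illustrative only, not DPDK code): the guest can reuse a tx descriptor only after the host returns it, and the host batches returns in a shadow queue until a flush threshold is reached. When that threshold exceeds what the guest can keep outstanding, transmission stalls. The ring and burst sizes are taken from the report; the single-descriptor-per-packet assumption and the threshold arithmetic are simplifications.

```python
# Toy model of the tx stall (illustrative, not DPDK code).
# The guest frees a tx descriptor only when the host returns it;
# the host holds returned descriptors in a shadow queue until a
# flush threshold is reached.
def simulate(ring_size, flush_threshold, packets):
    free_descs = ring_size
    held_by_host = 0           # consumed but not-yet-returned descriptors
    sent = 0
    for _ in range(packets):
        if free_descs == 0:    # from here on, kernel NAPI would see EBUSY
            break
        free_descs -= 1
        held_by_host += 1
        sent += 1
        if held_by_host >= flush_threshold:
            free_descs += held_by_host   # host flushes the shadow queue
            held_by_host = 0
    return sent

MAX_PKT_BURST = 32             # DPDK burst size cited in the report
# Flush threshold below the ring size: traffic keeps flowing.
print(simulate(256, 256 - MAX_PKT_BURST, 1000))   # 1000
# Threshold the guest can never fill: tx stalls after one ring's worth.
print(simulate(256, 1024 - MAX_PKT_BURST, 1000))  # 256
```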
Indirectly included due to DPDK 19.11.1 rebase in bz1824825

(In reply to Eugenio Pérez Martín from comment #0)
> Description of problem:
>
> Version-Release number of selected component (if applicable):
> DPDK 19.11
>
> How reproducible:
> Very likely, but not always.
>
> Steps to Reproduce:
> Using the current testpmd vhost_user as:
>
> ./app/testpmd -l 6,7,8 --vdev='net_vhost1,iface=/tmp/vhost-user1' \
>   --vdev='net_vhost2,iface=/tmp/vhost-user2' -- -a -i --rxq=1 --txq=1 \
>   --txd=1024 --forward-mode=rxonly

Is this running on Host?

> And starting qemu using packed=on on the interface:
>
> -netdev vhost-user,chardev=charnet1,id=hostnet1 -device
> virtio-net-pci,rx_queue_size=256,...,packed=on
>
> And start to tx in the guest using:

Usually, I run the guest with an XML file. So, can you provide me an equivalent XML file, or give me a complete qemu command line?

Thanks!

Jean

> ./dpdk/build/app/testpmd -l 1,2 --vdev=eth_af_packet0,iface=eth0 -- \
>   --forward-mode=txonly --txq=1 --txd=256 --auto-start --txpkts 1500 \
>   --stats-period 1
>
> Actual results:
> After the first burst of packets (512 or a little more), sendto() will start
> to return EBUSY. Kernel NAPI refuses to send more packets to the virtio_net
> device until it frees old skbs.
>
> However, the virtio_net driver is unable to free old buffers since the host
> does not return them in `vhost_flush_dequeue_packed` until the shadow queue
> is full, except for MAX_PKT_BURST (32) packets.
>
> Sometimes we are lucky and reach this point, or packets are small enough to
> fill the queue and flush, but if the packets and the virtqueue are big
> enough, we will not be able to tx anymore.
>
> Expected results:
> Guest's testpmd is able to transmit.
>
> Additional info:
> DPDK Upstream bug: https://bugs.dpdk.org/show_bug.cgi?id=383

Hi Jean.

(In reply to Jean-Tsung Hsiao from comment #9)
> Is this running on Host ?

Yes. This creates the sockets (/tmp/vhost-user*) for qemu to connect.

> Usually, I run guest with an xml file.
> So, can you provide me an equivalent xml file ?
> Or, give me a complete qemu command line ?

Sure, sorry. You can find one complete XML on
https://bugzilla.redhat.com/show_bug.cgi?id=1754708,
and a similar environment on
https://bugzilla.redhat.com/show_bug.cgi?id=1601355#c34.

Thanks! Please let me know if you need more information.

(In reply to Eugenio Pérez Martín from comment #10)
> You can find one complete XML on
> https://bugzilla.redhat.com/show_bug.cgi?id=1754708,
> and a similar environment on
> https://bugzilla.redhat.com/show_bug.cgi?id=1601355#c34.

Hi,

Great! I'll study them. What's your IRC in case I need to talk to you for quick questions?

Thanks!

Jean

Hi Eugenio,
Before I put in the packed=on, I tried to bring up the guest, but got
"socket /tmp/vhost-user1: Permission denied" error.
NOTE: Before starting the guest, I had already edited /etc/libvirt/qemu.conf
to set user = "root", and libvirtd was restarted.
Please take a look to see what I have missed.
Thanks!
Jean
virsh # dumpxml guest-packed
<domain type='kvm'>
<name>guest-packed</name>
<uuid>f693e08d-3ed8-49e8-89a7-c99b37ca0aa0</uuid>
<metadata>
<libosinfo:libosinfo
xmlns:libosinfo="http://libosinfo.org/xmlns/libvirt/domain/1.0">
<libosinfo:os id="http://redhat.com/rhel/8.2"/>
</libosinfo:libosinfo>
</metadata>
<memory unit='KiB'>8388608</memory>
<currentMemory unit='KiB'>8388608</currentMemory>
<memoryBacking>
<hugepages>
<page size='1048576' unit='KiB'/>
</hugepages>
</memoryBacking>
<vcpu placement='static'>5</vcpu>
<cputune>
<vcpupin vcpu='0' cpuset='0'/>
<vcpupin vcpu='1' cpuset='1'/>
<vcpupin vcpu='2' cpuset='9'/>
<vcpupin vcpu='3' cpuset='3'/>
<vcpupin vcpu='4' cpuset='11'/>
<emulatorpin cpuset='4,12'/>
</cputune>
<resource>
<partition>/machine</partition>
</resource>
<os>
<type arch='x86_64' machine='pc-q35-rhel7.6.0'>hvm</type>
<boot dev='hd'/>
</os>
<features>
<acpi/>
<apic/>
</features>
<cpu mode='custom' match='exact' check='full'>
<model fallback='forbid'>Haswell-noTSX-IBRS</model>
<vendor>Intel</vendor>
<feature policy='require' name='vme'/>
<feature policy='require' name='ss'/>
<feature policy='require' name='vmx'/>
<feature policy='require' name='f16c'/>
<feature policy='require' name='rdrand'/>
<feature policy='require' name='hypervisor'/>
<feature policy='require' name='arat'/>
<feature policy='require' name='tsc_adjust'/>
<feature policy='require' name='umip'/>
<feature policy='require' name='md-clear'/>
<feature policy='require' name='stibp'/>
<feature policy='require' name='arch-capabilities'/>
<feature policy='require' name='ssbd'/>
<feature policy='require' name='xsaveopt'/>
<feature policy='require' name='pdpe1gb'/>
<feature policy='require' name='abm'/>
<feature policy='require' name='skip-l1dfl-vmentry'/>
<numa>
<cell id='0' cpus='0-4' memory='8388608' unit='KiB'
memAccess='shared'/>
</numa>
</cpu>
<clock offset='utc'>
<timer name='rtc' tickpolicy='catchup'/>
<timer name='pit' tickpolicy='delay'/>
<timer name='hpet' present='no'/>
</clock>
<on_poweroff>destroy</on_poweroff>
<on_reboot>restart</on_reboot>
<on_crash>destroy</on_crash>
<pm>
<suspend-to-mem enabled='no'/>
<suspend-to-disk enabled='no'/>
</pm>
<devices>
<emulator>/usr/libexec/qemu-kvm</emulator>
<disk type='file' device='disk'>
<driver name='qemu' type='qcow2'/>
<source file='/var/lib/libvirt/images/master.qcow2'/>
<target dev='vda' bus='virtio'/>
<address type='pci' domain='0x0000' bus='0x04' slot='0x00'
function='0x0'/>
</disk>
<controller type='usb' index='0' model='qemu-xhci' ports='15'>
<address type='pci' domain='0x0000' bus='0x02' slot='0x00'
function='0x0'/>
</controller>
<controller type='sata' index='0'>
<address type='pci' domain='0x0000' bus='0x00' slot='0x1f'
function='0x2'/>
</controller>
<controller type='pci' index='0' model='pcie-root'/>
<controller type='pci' index='1' model='pcie-root-port'>
<model name='pcie-root-port'/>
<target chassis='1' port='0x8'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x01'
function='0x0' multifunction='on'/>
</controller>
<controller type='pci' index='2' model='pcie-root-port'>
<model name='pcie-root-port'/>
<target chassis='2' port='0x9'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x01'
function='0x1'/>
</controller>
<controller type='pci' index='3' model='pcie-root-port'>
<model name='pcie-root-port'/>
<target chassis='3' port='0xa'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x01'
function='0x2'/>
</controller>
<controller type='pci' index='4' model='pcie-root-port'>
<model name='pcie-root-port'/>
<target chassis='4' port='0xb'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x01'
function='0x3'/>
</controller>
<controller type='pci' index='5' model='pcie-root-port'>
<model name='pcie-root-port'/>
<target chassis='5' port='0xc'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x01'
function='0x4'/>
</controller>
<controller type='pci' index='6' model='pcie-root-port'>
<model name='pcie-root-port'/>
<target chassis='6' port='0xd'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x01'
function='0x5'/>
</controller>
<controller type='pci' index='7' model='pcie-root-port'>
<model name='pcie-root-port'/>
<target chassis='7' port='0xe'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x01'
function='0x6'/>
</controller>
<controller type='virtio-serial' index='0'>
<address type='pci' domain='0x0000' bus='0x03' slot='0x00'
function='0x0'/>
</controller>
<interface type='vhostuser'>
<mac address='52:54:00:83:b5:89'/>
<source type='unix' path='/tmp/vhost-user1' mode='server'/>
<target dev='vhost0'/>
<model type='virtio'/>
<driver name='vhost' queues='2'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x03'
function='0x0'/>
</interface>
<interface type='vhostuser'>
<mac address='52:54:00:24:ca:f4'/>
<source type='unix' path='/tmp/vhost-user2' mode='server'/>
<target dev='vhost1'/>
<model type='virtio'/>
<driver name='vhost' queues='2'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x09'
function='0x0'/>
</interface>
<interface type='bridge'>
<mac address='52:54:00:25:80:18'/>
<source bridge='virbr0'/>
<model type='virtio'/>
<address type='pci' domain='0x0000' bus='0x01' slot='0x00'
function='0x0'/>
</interface>
<serial type='pty'>
<target type='isa-serial' port='0'>
<model name='isa-serial'/>
</target>
</serial>
<serial type='file'>
<source path='/tmp/master.console'/>
<target type='isa-serial' port='1'>
<model name='isa-serial'/>
</target>
</serial>
<console type='pty'>
<target type='serial' port='0'/>
</console>
<channel type='unix'>
<target type='virtio' name='org.qemu.guest_agent.0'/>
<address type='virtio-serial' controller='0' bus='0' port='1'/>
</channel>
<input type='mouse' bus='ps2'/>
<input type='keyboard' bus='ps2'/>
<memballoon model='virtio'>
<address type='pci' domain='0x0000' bus='0x05' slot='0x00'
function='0x0'/>
</memballoon>
<rng model='virtio'>
<backend model='random'>/dev/urandom</backend>
<address type='pci' domain='0x0000' bus='0x06' slot='0x00'
function='0x0'/>
</rng>
</devices>
<seclabel type='dynamic' model='selinux' relabel='yes'/>
<seclabel type='dynamic' model='dac' relabel='yes'/>
</domain>
virsh # start guest-packed
error: Failed to start domain guest-packed
error: internal error: process exited while connecting to monitor:
2020-05-18T20:19:25.839148Z qemu-kvm: -chardev
socket,id=charnet0,path=/tmp/vhost-user1,server: Failed to unlink socket
/tmp/vhost-user1: Permission denied
virsh #
[root@netqe30 images]# ll /tmp/vh*
srwxrwxr-x. 1 root root 0 May 18 16:58 /tmp/vhost-user1
srwxr-xr-x. 1 root root 0 May 18 16:54 /tmp/vhost-user2
[root@netqe30 images]#
(In reply to Jean-Tsung Hsiao from comment #13)
> Before I put in the packed=on, I tried to bring up the guest, but got
> "socket /tmp/vhost-user1: Permission denied" error.

Hi Jean.

Can you try disabling selinux with `setenforce 0`?

Thanks!

(In reply to Eugenio Pérez Martín from comment #14)
> Can you try disabling selinux with `setenforce 0`?

I already did. Please check the following command log:

[root@netqe30 ~]# getenforce
Permissive
[root@netqe30 ~]# virsh start guest-packed
error: Failed to start domain guest-packed
error: internal error: process exited while connecting to monitor: -sandbox on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny -msg timestamp=on
2020-05-19 10:50:36.862+0000: Domain id=8 is tainted: high-privileges
2020-05-19T10:50:36.915066Z qemu-kvm: -chardev socket,id=charnet0,path=/tmp/vhost-user1: Failed to connect socket /tmp/vhost-user1: Connection refused
[root@netqe30 ~]#

Hi Eugenio,
I tried my old script used in the "testpmd as a switch" project. Now, the ping test on the Guest is working.
Getting 6000+ pps --- not bad.
Please check the ping log from the Guest and the testpmd log from the Host attached below.
Thanks!
Jean
==========================
*** Guest ping ***
[root@localhost ~]# ping 10.0.0.2 -c 3
PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.070 ms
64 bytes from 10.0.0.2: icmp_seq=2 ttl=64 time=0.061 ms
64 bytes from 10.0.0.2: icmp_seq=3 ttl=64 time=0.075 ms
--- 10.0.0.2 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 46ms
rtt min/avg/max/mdev = 0.061/0.068/0.075/0.011 ms
[root@localhost ~]# ip netns exec server ping 10.0.0.1 -c 3
PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.101 ms
64 bytes from 10.0.0.1: icmp_seq=2 ttl=64 time=0.104 ms
64 bytes from 10.0.0.1: icmp_seq=3 ttl=64 time=0.073 ms
--- 10.0.0.1 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 42ms
rtt min/avg/max/mdev = 0.073/0.092/0.104/0.017 ms
[root@localhost ~]# ping -f 10.0.0.2
PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
.
--- 10.0.0.2 ping statistics ---
19949505 packets transmitted, 19949505 received, 0% packet loss, time 3876ms
rtt min/avg/max/mdev = 0.013/0.073/0.616/0.012 ms, ipg/ewma 0.159/0.083 ms
[root@localhost ~]#
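The "6000+ pps" figure quoted above is consistent with the flood-ping statistics: `ping -f` reports the inter-packet gap (ipg) in milliseconds, and the packet rate is simply its inverse.

```python
# Cross-check of the ~6000 pps figure from the flood ping above.
ipg_ms = 0.159                 # from "ipg/ewma 0.159/0.083 ms" in the log
pps = 1000 / ipg_ms            # packets per second
print(round(pps))              # 6289
```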
*** Host testpmd ***
[root@netqe30 ~]# /bin/testpmd -l 0,5,13,7,15 --socket-mem=4096,4096 -n 4 \
    --vdev 'net_vhost0,iface=/tmp/vhost-user1' \
    --vdev 'net_vhost1,iface=/tmp/vhost-user2' -- \
    --portmask=f -i --rxq=1 --txq=1 --nb-cores=4 --forward-mode=io
EAL: Detected 56 lcore(s)
EAL: Detected 2 NUMA nodes
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'VA'
EAL: Probing VFIO support...
EAL: PCI device 0000:19:00.0 on NUMA socket 0
EAL: probe driver: 8086:1521 net_e1000_igb
EAL: PCI device 0000:19:00.1 on NUMA socket 0
EAL: probe driver: 8086:1521 net_e1000_igb
EAL: PCI device 0000:19:00.2 on NUMA socket 0
EAL: probe driver: 8086:1521 net_e1000_igb
EAL: PCI device 0000:19:00.3 on NUMA socket 0
EAL: probe driver: 8086:1521 net_e1000_igb
EAL: PCI device 0000:3b:00.0 on NUMA socket 0
EAL: probe driver: 1077:8070 net_qede
EAL: PCI device 0000:3b:00.1 on NUMA socket 0
EAL: probe driver: 1077:8070 net_qede
EAL: PCI device 0000:5e:00.0 on NUMA socket 0
EAL: probe driver: 19ee:4000 net_nfp_pf
EAL: PCI device 0000:5f:00.0 on NUMA socket 0
EAL: probe driver: 15b3:1013 net_mlx5
EAL: PCI device 0000:5f:00.1 on NUMA socket 0
EAL: probe driver: 15b3:1013 net_mlx5
VHOST_CONFIG: vhost-user server: socket created, fd: 43
VHOST_CONFIG: bind to /tmp/vhost-user1
VHOST_CONFIG: vhost-user server: socket created, fd: 54
VHOST_CONFIG: bind to /tmp/vhost-user2
Interactive-mode selected
Set io packet forwarding mode
testpmd: create a new mbuf pool <mbuf_pool_socket_0>: n=179456,
size=2176, socket=0
testpmd: preferred mempool ops selected: ring_mp_mc
testpmd: create a new mbuf pool <mbuf_pool_socket_1>: n=179456,
size=2176, socket=1
testpmd: preferred mempool ops selected: ring_mp_mc
Configuring Port 0 (socket 0)
Port 0: 24:8A:07:87:22:CE
Configuring Port 1 (socket 0)
Port 1: 24:8A:07:87:22:CF
Configuring Port 2 (socket 0)
Port 2: 56:48:4F:53:54:02
Configuring Port 3 (socket 0)
Port 3: 56:48:4F:53:54:03
Checking link statuses...
Done
Error during enabling promiscuous mode for port 2: Operation not
supported - ignore
Error during enabling promiscuous mode for port 3: Operation not
supported - ignore
testpmd> start
io packet forwarding - ports=4 - cores=4 - streams=4 - NUMA support
enabled, MP allocation mode: native
Logical Core 5 (socket 1) forwards packets on 1 streams:
RX P=0/Q=0 (socket 0) -> TX P=1/Q=0 (socket 0) peer=02:00:00:00:00:01
Logical Core 7 (socket 1) forwards packets on 1 streams:
RX P=1/Q=0 (socket 0) -> TX P=0/Q=0 (socket 0) peer=02:00:00:00:00:00
Logical Core 13 (socket 1) forwards packets on 1 streams:
RX P=2/Q=0 (socket 0) -> TX P=3/Q=0 (socket 0) peer=02:00:00:00:00:03
Logical Core 15 (socket 1) forwards packets on 1 streams:
RX P=3/Q=0 (socket 0) -> TX P=2/Q=0 (socket 0) peer=02:00:00:00:00:02
io packet forwarding packets/burst=32
nb forwarding cores=4 - nb forwarding ports=4
port 0: RX queue number: 1 Tx queue number: 1
Rx offloads=0x0 Tx offloads=0x0
RX queue: 0
RX desc=0 - RX free threshold=0
RX threshold registers: pthresh=0 hthresh=0 wthresh=0
RX Offloads=0x0
TX queue: 0
TX desc=0 - TX free threshold=0
TX threshold registers: pthresh=0 hthresh=0 wthresh=0
TX offloads=0x0 - TX RS bit threshold=0
port 1: RX queue number: 1 Tx queue number: 1
Rx offloads=0x0 Tx offloads=0x0
RX queue: 0
RX desc=0 - RX free threshold=0
RX threshold registers: pthresh=0 hthresh=0 wthresh=0
RX Offloads=0x0
TX queue: 0
TX desc=0 - TX free threshold=0
TX threshold registers: pthresh=0 hthresh=0 wthresh=0
TX offloads=0x0 - TX RS bit threshold=0
port 2: RX queue number: 1 Tx queue number: 1
Rx offloads=0x0 Tx offloads=0x0
RX queue: 0
RX desc=0 - RX free threshold=0
RX threshold registers: pthresh=0 hthresh=0 wthresh=0
RX Offloads=0x0
TX queue: 0
TX desc=0 - TX free threshold=0
TX threshold registers: pthresh=0 hthresh=0 wthresh=0
TX offloads=0x0 - TX RS bit threshold=0
port 3: RX queue number: 1 Tx queue number: 1
Rx offloads=0x0 Tx offloads=0x0
RX queue: 0
RX desc=0 - RX free threshold=0
RX threshold registers: pthresh=0 hthresh=0 wthresh=0
RX Offloads=0x0
TX queue: 0
TX desc=0 - TX free threshold=0
TX threshold registers: pthresh=0 hthresh=0 wthresh=0
TX offloads=0x0 - TX RS bit threshold=0
testpmd> VHOST_CONFIG: new vhost user connection is 55
VHOST_CONFIG: new device, handle is 0
VHOST_CONFIG: new vhost user connection is 56
VHOST_CONFIG: new device, handle is 1
VHOST_CONFIG: read message VHOST_USER_GET_FEATURES
VHOST_CONFIG: read message VHOST_USER_GET_PROTOCOL_FEATURES
VHOST_CONFIG: read message VHOST_USER_SET_PROTOCOL_FEATURES
VHOST_CONFIG: negotiated Vhost-user protocol features: 0xcb7
VHOST_CONFIG: read message VHOST_USER_GET_QUEUE_NUM
VHOST_CONFIG: read message VHOST_USER_SET_SLAVE_REQ_FD
VHOST_CONFIG: read message VHOST_USER_SET_OWNER
VHOST_CONFIG: read message VHOST_USER_GET_FEATURES
VHOST_CONFIG: read message VHOST_USER_SET_VRING_CALL
VHOST_CONFIG: vring call idx:0 file:58
VHOST_CONFIG: read message VHOST_USER_SET_VRING_CALL
VHOST_CONFIG: vring call idx:1 file:59
VHOST_CONFIG: read message VHOST_USER_GET_FEATURES
VHOST_CONFIG: read message VHOST_USER_GET_PROTOCOL_FEATURES
VHOST_CONFIG: read message VHOST_USER_SET_PROTOCOL_FEATURES
VHOST_CONFIG: negotiated Vhost-user protocol features: 0xcb7
VHOST_CONFIG: read message VHOST_USER_GET_FEATURES
VHOST_CONFIG: read message VHOST_USER_SET_VRING_CALL
VHOST_CONFIG: vring call idx:2 file:60
VHOST_CONFIG: read message VHOST_USER_SET_VRING_CALL
VHOST_CONFIG: vring call idx:3 file:61
VHOST_CONFIG: read message VHOST_USER_GET_FEATURES
VHOST_CONFIG: read message VHOST_USER_GET_PROTOCOL_FEATURES
VHOST_CONFIG: read message VHOST_USER_SET_PROTOCOL_FEATURES
VHOST_CONFIG: negotiated Vhost-user protocol features: 0xcb7
VHOST_CONFIG: read message VHOST_USER_GET_QUEUE_NUM
VHOST_CONFIG: read message VHOST_USER_SET_SLAVE_REQ_FD
VHOST_CONFIG: read message VHOST_USER_SET_OWNER
VHOST_CONFIG: read message VHOST_USER_GET_FEATURES
VHOST_CONFIG: read message VHOST_USER_SET_VRING_CALL
VHOST_CONFIG: vring call idx:0 file:63
VHOST_CONFIG: read message VHOST_USER_SET_VRING_CALL
VHOST_CONFIG: vring call idx:1 file:64
VHOST_CONFIG: read message VHOST_USER_GET_FEATURES
VHOST_CONFIG: read message VHOST_USER_GET_PROTOCOL_FEATURES
VHOST_CONFIG: read message VHOST_USER_SET_PROTOCOL_FEATURES
VHOST_CONFIG: negotiated Vhost-user protocol features: 0xcb7
VHOST_CONFIG: read message VHOST_USER_GET_FEATURES
VHOST_CONFIG: read message VHOST_USER_SET_VRING_CALL
VHOST_CONFIG: vring call idx:2 file:65
VHOST_CONFIG: read message VHOST_USER_SET_VRING_CALL
VHOST_CONFIG: vring call idx:3 file:66
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ENABLE
VHOST_CONFIG: set queue enable: 1 to qp idx: 0
Port 2: queue state event
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ENABLE
VHOST_CONFIG: set queue enable: 1 to qp idx: 1
Port 2: queue state event
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ENABLE
VHOST_CONFIG: set queue enable: 0 to qp idx: 2
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ENABLE
VHOST_CONFIG: set queue enable: 0 to qp idx: 3
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ENABLE
VHOST_CONFIG: set queue enable: 1 to qp idx: 0
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ENABLE
VHOST_CONFIG: set queue enable: 1 to qp idx: 1
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ENABLE
VHOST_CONFIG: set queue enable: 0 to qp idx: 2
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ENABLE
VHOST_CONFIG: set queue enable: 0 to qp idx: 3
VHOST_CONFIG: read message VHOST_USER_SET_FEATURES
VHOST_CONFIG: negotiated Virtio features: 0x57060ff83
VHOST_CONFIG: read message VHOST_USER_SET_MEM_TABLE
VHOST_CONFIG: guest memory region 0, size: 0x80000000
guest physical addr: 0x0
guest virtual addr: 0x7fcf80000000
host virtual addr: 0x7f8380000000
mmap addr : 0x7f8380000000
mmap size : 0x80000000
mmap align: 0x40000000
mmap off : 0x0
VHOST_CONFIG: guest memory region 1, size: 0x180000000
guest physical addr: 0x100000000
guest virtual addr: 0x7fd000000000
host virtual addr: 0x7f8200000000
mmap addr : 0x7f8180000000
mmap size : 0x200000000
mmap align: 0x40000000
mmap off : 0x80000000
VHOST_CONFIG: read message VHOST_USER_SET_VRING_NUM
VHOST_CONFIG: read message VHOST_USER_SET_VRING_BASE
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ADDR
VHOST_CONFIG: read message VHOST_USER_SET_VRING_KICK
VHOST_CONFIG: vring kick idx:0 file:69
VHOST_CONFIG: read message VHOST_USER_SET_VRING_CALL
VHOST_CONFIG: vring call idx:0 file:70
VHOST_CONFIG: read message VHOST_USER_SET_VRING_NUM
VHOST_CONFIG: read message VHOST_USER_SET_VRING_BASE
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ADDR
VHOST_CONFIG: read message VHOST_USER_SET_VRING_KICK
VHOST_CONFIG: vring kick idx:1 file:58
VHOST_CONFIG: read message VHOST_USER_SET_VRING_CALL
VHOST_CONFIG: vring call idx:1 file:71
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ENABLE
VHOST_CONFIG: set queue enable: 1 to qp idx: 0
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ENABLE
VHOST_CONFIG: set queue enable: 1 to qp idx: 1
VHOST_CONFIG: read message VHOST_USER_SET_FEATURES
VHOST_CONFIG: negotiated Virtio features: 0x57060ff83
VHOST_CONFIG: read message VHOST_USER_SET_VRING_NUM
VHOST_CONFIG: read message VHOST_USER_SET_VRING_BASE
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ADDR
VHOST_CONFIG: read message VHOST_USER_SET_VRING_KICK
VHOST_CONFIG: vring kick idx:2 file:59
VHOST_CONFIG: read message VHOST_USER_SET_VRING_CALL
VHOST_CONFIG: vring call idx:2 file:72
VHOST_CONFIG: read message VHOST_USER_SET_VRING_NUM
VHOST_CONFIG: read message VHOST_USER_SET_VRING_BASE
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ADDR
VHOST_CONFIG: read message VHOST_USER_SET_VRING_KICK
VHOST_CONFIG: vring kick idx:3 file:60
VHOST_CONFIG: virtio is now ready for processing.
Port 2: link state change event
VHOST_CONFIG: read message VHOST_USER_SET_VRING_CALL
VHOST_CONFIG: vring call idx:3 file:73
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ENABLE
VHOST_CONFIG: set queue enable: 1 to qp idx: 0
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ENABLE
VHOST_CONFIG: set queue enable: 1 to qp idx: 1
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ENABLE
VHOST_CONFIG: set queue enable: 1 to qp idx: 2
Port 2: queue state event
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ENABLE
VHOST_CONFIG: set queue enable: 1 to qp idx: 3
Port 2: queue state event
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ENABLE
VHOST_CONFIG: set queue enable: 1 to qp idx: 0
Port 3: queue state event
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ENABLE
VHOST_CONFIG: set queue enable: 1 to qp idx: 1
Port 3: queue state event
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ENABLE
VHOST_CONFIG: set queue enable: 0 to qp idx: 2
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ENABLE
VHOST_CONFIG: set queue enable: 0 to qp idx: 3
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ENABLE
VHOST_CONFIG: set queue enable: 1 to qp idx: 0
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ENABLE
VHOST_CONFIG: set queue enable: 1 to qp idx: 1
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ENABLE
VHOST_CONFIG: set queue enable: 0 to qp idx: 2
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ENABLE
VHOST_CONFIG: set queue enable: 0 to qp idx: 3
VHOST_CONFIG: read message VHOST_USER_SET_FEATURES
VHOST_CONFIG: negotiated Virtio features: 0x57060ff83
VHOST_CONFIG: read message VHOST_USER_SET_MEM_TABLE
VHOST_CONFIG: guest memory region 0, size: 0x80000000
guest physical addr: 0x0
guest virtual addr: 0x7fcf80000000
host virtual addr: 0x7f8100000000
mmap addr : 0x7f8100000000
mmap size : 0x80000000
mmap align: 0x40000000
mmap off : 0x0
VHOST_CONFIG: guest memory region 1, size: 0x180000000
guest physical addr: 0x100000000
guest virtual addr: 0x7fd000000000
host virtual addr: 0x7f7f80000000
mmap addr : 0x7f7f00000000
mmap size : 0x200000000
mmap align: 0x40000000
mmap off : 0x80000000
VHOST_CONFIG: read message VHOST_USER_SET_VRING_NUM
VHOST_CONFIG: read message VHOST_USER_SET_VRING_BASE
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ADDR
VHOST_CONFIG: read message VHOST_USER_SET_VRING_KICK
VHOST_CONFIG: vring kick idx:0 file:75
VHOST_CONFIG: read message VHOST_USER_SET_VRING_CALL
VHOST_CONFIG: vring call idx:0 file:76
VHOST_CONFIG: read message VHOST_USER_SET_VRING_NUM
VHOST_CONFIG: read message VHOST_USER_SET_VRING_BASE
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ADDR
VHOST_CONFIG: read message VHOST_USER_SET_VRING_KICK
VHOST_CONFIG: vring kick idx:1 file:63
VHOST_CONFIG: read message VHOST_USER_SET_VRING_CALL
VHOST_CONFIG: vring call idx:1 file:77
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ENABLE
VHOST_CONFIG: set queue enable: 1 to qp idx: 0
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ENABLE
VHOST_CONFIG: set queue enable: 1 to qp idx: 1
VHOST_CONFIG: read message VHOST_USER_SET_FEATURES
VHOST_CONFIG: negotiated Virtio features: 0x57060ff83
VHOST_CONFIG: read message VHOST_USER_SET_VRING_NUM
VHOST_CONFIG: read message VHOST_USER_SET_VRING_BASE
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ADDR
VHOST_CONFIG: read message VHOST_USER_SET_VRING_KICK
VHOST_CONFIG: vring kick idx:2 file:64
VHOST_CONFIG: read message VHOST_USER_SET_VRING_CALL
VHOST_CONFIG: vring call idx:2 file:78
VHOST_CONFIG: read message VHOST_USER_SET_VRING_NUM
VHOST_CONFIG: read message VHOST_USER_SET_VRING_BASE
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ADDR
VHOST_CONFIG: read message VHOST_USER_SET_VRING_KICK
VHOST_CONFIG: vring kick idx:3 file:65
VHOST_CONFIG: virtio is now ready for processing.
Port 3: link state change event
VHOST_CONFIG: read message VHOST_USER_SET_VRING_CALL
VHOST_CONFIG: vring call idx:3 file:79
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ENABLE
VHOST_CONFIG: set queue enable: 1 to qp idx: 0
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ENABLE
VHOST_CONFIG: set queue enable: 1 to qp idx: 1
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ENABLE
VHOST_CONFIG: set queue enable: 1 to qp idx: 2
Port 3: queue state event
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ENABLE
VHOST_CONFIG: set queue enable: 1 to qp idx: 3
Port 3: queue state event
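The handshakes above complete with negotiated Virtio features 0x57060ff83. As a side note, that mask can be decoded to confirm that the packed virtqueue layout was in fact negotiated (the precondition for reproducing this bug); a minimal sketch, with bit numbers taken from the virtio 1.1 specification:

```python
# Decode the "negotiated Virtio features" mask printed by vhost-user above.
# Feature bit numbers per the virtio 1.1 specification.
VIRTIO_F_VERSION_1   = 1 << 32
VIRTIO_F_RING_PACKED = 1 << 34

features = 0x57060FF83  # value from the VHOST_CONFIG log lines

version_1   = bool(features & VIRTIO_F_VERSION_1)    # True
ring_packed = bool(features & VIRTIO_F_RING_PACKED)  # True
print(f"VERSION_1={version_1} RING_PACKED={ring_packed}")
```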
testpmd> show port stats all
######################## NIC statistics for port 0  ########################
RX-packets: 0 RX-missed: 0 RX-bytes: 0
RX-errors: 0
RX-nombuf: 0
TX-packets: 0 TX-errors: 0 TX-bytes: 0
Throughput (since last show)
Rx-pps: 0 Rx-bps: 0
Tx-pps: 0 Tx-bps: 0
############################################################################
######################## NIC statistics for port 1  ########################
RX-packets: 0 RX-missed: 0 RX-bytes: 0
RX-errors: 0
RX-nombuf: 0
TX-packets: 0 TX-errors: 0 TX-bytes: 0
Throughput (since last show)
Rx-pps: 0 Rx-bps: 0
Tx-pps: 0 Tx-bps: 0
############################################################################
######################## NIC statistics for port 2  ########################
RX-packets: 141730 RX-missed: 0 RX-bytes: 13893420
RX-errors: 0
RX-nombuf: 0
TX-packets: 141692 TX-errors: 0 TX-bytes: 13886360
Throughput (since last show)
Rx-pps: 0 Rx-bps: 0
Tx-pps: 0 Tx-bps: 0
############################################################################
######################## NIC statistics for port 3  ########################
RX-packets: 141692 RX-missed: 0 RX-bytes: 13886360
RX-errors: 0
RX-nombuf: 0
TX-packets: 141730 TX-errors: 0 TX-bytes: 13893420
Throughput (since last show)
Rx-pps: 0 Rx-bps: 0
Tx-pps: 0 Tx-bps: 0
############################################################################
testpmd> show port stats all
######################## NIC statistics for port 0  ########################
RX-packets: 0 RX-missed: 0 RX-bytes: 0
RX-errors: 0
RX-nombuf: 0
TX-packets: 0 TX-errors: 0 TX-bytes: 0
Throughput (since last show)
Rx-pps: 0 Rx-bps: 0
Tx-pps: 0 Tx-bps: 0
############################################################################
######################## NIC statistics for port 1  ########################
RX-packets: 0 RX-missed: 0 RX-bytes: 0
RX-errors: 0
RX-nombuf: 0
TX-packets: 0 TX-errors: 0 TX-bytes: 0
Throughput (since last show)
Rx-pps: 0 Rx-bps: 0
Tx-pps: 0 Tx-bps: 0
############################################################################
######################## NIC statistics for port 2  ########################
RX-packets: 161957 RX-missed: 0 RX-bytes: 15875666
RX-errors: 0
RX-nombuf: 0
TX-packets: 161919 TX-errors: 0 TX-bytes: 15868606
Throughput (since last show)
Rx-pps: 6290 Rx-bps: 4931968
Tx-pps: 6290 Tx-bps: 4931968
############################################################################
######################## NIC statistics for port 3  ########################
RX-packets: 161919 RX-missed: 0 RX-bytes: 15868606
RX-errors: 0
RX-nombuf: 0
TX-packets: 161957 TX-errors: 0 TX-bytes: 15875666
Throughput (since last show)
Rx-pps: 6290 Rx-bps: 4932016
Tx-pps: 6290 Tx-bps: 4932016
############################################################################
testpmd> show port stats all
######################## NIC statistics for port 0  ########################
RX-packets: 0 RX-missed: 0 RX-bytes: 0
RX-errors: 0
RX-nombuf: 0
TX-packets: 0 TX-errors: 0 TX-bytes: 0
Throughput (since last show)
Rx-pps: 0 Rx-bps: 0
Tx-pps: 0 Tx-bps: 0
############################################################################
######################## NIC statistics for port 1  ########################
RX-packets: 0 RX-missed: 0 RX-bytes: 0
RX-errors: 0
RX-nombuf: 0
TX-packets: 0 TX-errors: 0 TX-bytes: 0
Throughput (since last show)
Rx-pps: 0 Rx-bps: 0
Tx-pps: 0 Tx-bps: 0
############################################################################
######################## NIC statistics for port 2  ########################
RX-packets: 178849 RX-missed: 0 RX-bytes: 17531082
RX-errors: 0
RX-nombuf: 0
TX-packets: 178811 TX-errors: 0 TX-bytes: 17524022
Throughput (since last show)
Rx-pps: 6294 Rx-bps: 4934904
Tx-pps: 6294 Tx-bps: 4934904
############################################################################
######################## NIC statistics for port 3  ########################
RX-packets: 178811 RX-missed: 0 RX-bytes: 17524022
RX-errors: 0
RX-nombuf: 0
TX-packets: 178849 TX-errors: 0 TX-bytes: 17531082
Throughput (since last show)
Rx-pps: 6294 Rx-bps: 4934880
Tx-pps: 6294 Tx-bps: 4934880
############################################################################
testpmd> show port stats all
######################## NIC statistics for port 0  ########################
RX-packets: 0 RX-missed: 0 RX-bytes: 0
RX-errors: 0
RX-nombuf: 0
TX-packets: 0 TX-errors: 0 TX-bytes: 0
Throughput (since last show)
Rx-pps: 0 Rx-bps: 0
Tx-pps: 0 Tx-bps: 0
############################################################################
######################## NIC statistics for port 1  ########################
RX-packets: 0 RX-missed: 0 RX-bytes: 0
RX-errors: 0
RX-nombuf: 0
TX-packets: 0 TX-errors: 0 TX-bytes: 0
Throughput (since last show)
Rx-pps: 0 Rx-bps: 0
Tx-pps: 0 Tx-bps: 0
############################################################################
######################## NIC statistics for port 2  ########################
RX-packets: 223514 RX-missed: 0 RX-bytes: 21908196
RX-errors: 0
RX-nombuf: 0
TX-packets: 223476 TX-errors: 0 TX-bytes: 21901136
Throughput (since last show)
Rx-pps: 6291 Rx-bps: 4932584
Tx-pps: 6291 Tx-bps: 4932584
############################################################################
######################## NIC statistics for port 3  ########################
RX-packets: 223476 RX-missed: 0 RX-bytes: 21901136
RX-errors: 0
RX-nombuf: 0
TX-packets: 223514 TX-errors: 0 TX-bytes: 21908196
Throughput (since last show)
Rx-pps: 6291 Rx-bps: 4932600
Tx-pps: 6291 Tx-bps: 4932600
############################################################################
testpmd> show port stats all
######################## NIC statistics for port 0  ########################
RX-packets: 0 RX-missed: 0 RX-bytes: 0
RX-errors: 0
RX-nombuf: 0
TX-packets: 0 TX-errors: 0 TX-bytes: 0
Throughput (since last show)
Rx-pps: 0 Rx-bps: 0
Tx-pps: 0 Tx-bps: 0
############################################################################
######################## NIC statistics for port 1  ########################
RX-packets: 0 RX-missed: 0 RX-bytes: 0
RX-errors: 0
RX-nombuf: 0
TX-packets: 0 TX-errors: 0 TX-bytes: 0
Throughput (since last show)
Rx-pps: 0 Rx-bps: 0
Tx-pps: 0 Tx-bps: 0
############################################################################
######################## NIC statistics for port 2  ########################
RX-packets: 2075449 RX-missed: 0 RX-bytes: 203396762
RX-errors: 0
RX-nombuf: 0
TX-packets: 2075411 TX-errors: 0 TX-bytes: 203389702
Throughput (since last show)
Rx-pps: 6301 Rx-bps: 4940648
Tx-pps: 6301 Tx-bps: 4940648
############################################################################
######################## NIC statistics for port 3  ########################
RX-packets: 2075411 RX-missed: 0 RX-bytes: 203389702
RX-errors: 0
RX-nombuf: 0
TX-packets: 2075449 TX-errors: 0 TX-bytes: 203396762
Throughput (since last show)
Rx-pps: 6301 Rx-bps: 4940648
Tx-pps: 6301 Tx-bps: 4940648
############################################################################
testpmd> show port stats all
######################## NIC statistics for port 0  ########################
RX-packets: 0 RX-missed: 0 RX-bytes: 0
RX-errors: 0
RX-nombuf: 0
TX-packets: 0 TX-errors: 0 TX-bytes: 0
Throughput (since last show)
Rx-pps: 0 Rx-bps: 0
Tx-pps: 0 Tx-bps: 0
############################################################################
######################## NIC statistics for port 1  ########################
RX-packets: 0 RX-missed: 0 RX-bytes: 0
RX-errors: 0
RX-nombuf: 0
TX-packets: 0 TX-errors: 0 TX-bytes: 0
Throughput (since last show)
Rx-pps: 0 Rx-bps: 0
Tx-pps: 0 Tx-bps: 0
############################################################################
######################## NIC statistics for port 2  ########################
RX-packets: 2087621 RX-missed: 0 RX-bytes: 204589618
RX-errors: 0
RX-nombuf: 0
TX-packets: 2087583 TX-errors: 0 TX-bytes: 204582558
Throughput (since last show)
Rx-pps: 6292 Rx-bps: 4933264
Tx-pps: 6292 Tx-bps: 4933264
############################################################################
######################## NIC statistics for port 3  ########################
RX-packets: 2087583 RX-missed: 0 RX-bytes: 204582558
RX-errors: 0
RX-nombuf: 0
TX-packets: 2087621 TX-errors: 0 TX-bytes: 204589618
Throughput (since last show)
Rx-pps: 6292 Rx-bps: 4933256
Tx-pps: 6292 Tx-bps: 4933256
############################################################################
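As a rough consistency check on the rates above (illustrative arithmetic, not part of the original report): Rx-bps divided by Rx-pps gives the average wire frame size, which comes out near the 98-byte Ethernet frames of a default ICMP echo flood (56 B payload + 8 B ICMP + 20 B IP + 14 B Ethernet):

```python
# Average frame size implied by the testpmd rate counters in the last sample
# (port 2: Rx-pps 6292, Rx-bps 4933264).
rx_bps = 4933264
rx_pps = 6292

avg_frame_bytes = rx_bps / rx_pps / 8
print(round(avg_frame_bytes))  # ~98 bytes per frame
```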
Hi Eugenio,
I forgot to mention that "ping -f" ran for about an hour before I killed it.
Attached below are the configurations on the guest and the host.
Anything else to be tested before setting the status to VERIFIED?
Thanks!
Jean
*** Guest ***
[root@localhost ~]# uname -r
4.18.0-199.el8.x86_64
[root@localhost ~]# rpm -q dpdk
dpdk-19.11-4.el8.x86_64
[root@localhost ~]#
[root@localhost ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: enp0s3: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN group default qlen 1000
link/ether 52:54:00:83:b5:89 brd ff:ff:ff:ff:ff:ff
inet 10.0.0.1/24 scope global enp0s3
valid_lft forever preferred_lft forever
4: enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether 52:54:00:25:80:18 brd ff:ff:ff:ff:ff:ff
inet 192.168.122.202/24 brd 192.168.122.255 scope global dynamic noprefixroute enp1s0
valid_lft 3428sec preferred_lft 3428sec
inet6 fe80::5054:ff:fe25:8018/64 scope link noprefixroute
valid_lft forever preferred_lft forever
[root@localhost ~]# ip netns exec server ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
3: enp0s9: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN group default qlen 1000
link/ether 52:54:00:24:ca:f4 brd ff:ff:ff:ff:ff:ff
inet 10.0.0.2/24 scope global enp0s9
valid_lft forever preferred_lft forever
inet6 fe80::5054:ff:fe24:caf4/64 scope link
valid_lft forever preferred_lft forever
[root@localhost ~]#
*** Host ***
[root@netqe30 ~]# uname -r
4.18.0-193.el8.x86_64
[root@netqe30 ~]# rpm -q dpdk
dpdk-19.11-4.el8.x86_64
[root@netqe30 ~]#
[root@netqe30 ~]# virsh dumpxml guest-packed
<domain type='kvm' id='6' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
<name>guest-packed</name>
<uuid>f693e08d-3ed8-49e8-89a7-c99b37ca0aa0</uuid>
<metadata>
<libosinfo:libosinfo xmlns:libosinfo="http://libosinfo.org/xmlns/libvirt/domain/1.0">
<libosinfo:os id="http://redhat.com/rhel/8-unknown"/>
</libosinfo:libosinfo>
</metadata>
<memory unit='KiB'>8388608</memory>
<currentMemory unit='KiB'>8388608</currentMemory>
<memoryBacking>
<hugepages>
<page size='1048576' unit='KiB'/>
</hugepages>
</memoryBacking>
<vcpu placement='static'>5</vcpu>
<cputune>
<vcpupin vcpu='0' cpuset='0'/>
<vcpupin vcpu='1' cpuset='1'/>
<vcpupin vcpu='2' cpuset='9'/>
<vcpupin vcpu='3' cpuset='3'/>
<vcpupin vcpu='4' cpuset='11'/>
<emulatorpin cpuset='4,12'/>
</cputune>
<resource>
<partition>/machine</partition>
</resource>
<os>
<type arch='x86_64' machine='pc-q35-5.0'>hvm</type>
<boot dev='hd'/>
</os>
<features>
<acpi/>
<apic/>
</features>
<cpu mode='custom' match='exact' check='full'>
<model fallback='forbid'>Haswell-noTSX-IBRS</model>
<vendor>Intel</vendor>
<feature policy='require' name='vme'/>
<feature policy='require' name='ss'/>
<feature policy='require' name='vmx'/>
<feature policy='require' name='f16c'/>
<feature policy='require' name='rdrand'/>
<feature policy='require' name='hypervisor'/>
<feature policy='require' name='arat'/>
<feature policy='require' name='tsc_adjust'/>
<feature policy='require' name='umip'/>
<feature policy='require' name='md-clear'/>
<feature policy='require' name='stibp'/>
<feature policy='require' name='arch-capabilities'/>
<feature policy='require' name='ssbd'/>
<feature policy='require' name='xsaveopt'/>
<feature policy='require' name='pdpe1gb'/>
<feature policy='require' name='abm'/>
<feature policy='require' name='skip-l1dfl-vmentry'/>
<numa>
<cell id='0' cpus='0-4' memory='8388608' unit='KiB' memAccess='shared'/>
</numa>
</cpu>
<clock offset='utc'>
<timer name='rtc' tickpolicy='catchup'/>
<timer name='pit' tickpolicy='delay'/>
<timer name='hpet' present='no'/>
</clock>
<on_poweroff>destroy</on_poweroff>
<on_reboot>restart</on_reboot>
<on_crash>destroy</on_crash>
<pm>
<suspend-to-mem enabled='no'/>
<suspend-to-disk enabled='no'/>
</pm>
<devices>
<emulator>/root/qemu/x86_64-softmmu/qemu-system-x86_64</emulator>
<disk type='file' device='disk'>
<driver name='qemu' type='qcow2'/>
<source file='/var/lib/libvirt/images/master.qcow2'/>
<backingStore/>
<target dev='vda' bus='virtio'/>
<alias name='virtio-disk0'/>
<address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
</disk>
<controller type='usb' index='0' model='qemu-xhci' ports='15'>
<alias name='usb'/>
<address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
</controller>
<controller type='sata' index='0'>
<alias name='ide'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
</controller>
<controller type='pci' index='0' model='pcie-root'>
<alias name='pcie.0'/>
</controller>
<controller type='pci' index='1' model='pcie-root-port'>
<model name='pcie-root-port'/>
<target chassis='1' port='0x8'/>
<alias name='pci.1'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0' multifunction='on'/>
</controller>
<controller type='pci' index='2' model='pcie-root-port'>
<model name='pcie-root-port'/>
<target chassis='2' port='0x9'/>
<alias name='pci.2'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
</controller>
<controller type='pci' index='3' model='pcie-root-port'>
<model name='pcie-root-port'/>
<target chassis='3' port='0xa'/>
<alias name='pci.3'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
</controller>
<controller type='pci' index='4' model='pcie-root-port'>
<model name='pcie-root-port'/>
<target chassis='4' port='0xb'/>
<alias name='pci.4'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x3'/>
</controller>
<controller type='pci' index='5' model='pcie-root-port'>
<model name='pcie-root-port'/>
<target chassis='5' port='0xc'/>
<alias name='pci.5'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x4'/>
</controller>
<controller type='pci' index='6' model='pcie-root-port'>
<model name='pcie-root-port'/>
<target chassis='6' port='0xd'/>
<alias name='pci.6'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x5'/>
</controller>
<controller type='pci' index='7' model='pcie-root-port'>
<model name='pcie-root-port'/>
<target chassis='7' port='0xe'/>
<alias name='pci.7'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x6'/>
</controller>
<controller type='virtio-serial' index='0'>
<alias name='virtio-serial0'/>
<address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
</controller>
<interface type='vhostuser'>
<mac address='52:54:00:83:b5:89'/>
<source type='unix' path='/tmp/vhost-user1' mode='client'/>
<model type='virtio'/>
<driver name='vhost' queues='2'/>
<alias name='net0'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
</interface>
<interface type='vhostuser'>
<mac address='52:54:00:24:ca:f4'/>
<source type='unix' path='/tmp/vhost-user2' mode='client'/>
<target dev='vhost1'/>
<model type='virtio'/>
<driver name='vhost' queues='2'/>
<alias name='net1'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x09' function='0x0'/>
</interface>
<interface type='bridge'>
<mac address='52:54:00:25:80:18'/>
<source bridge='virbr0'/>
<target dev='vnet0'/>
<model type='virtio'/>
<alias name='net2'/>
<address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
</interface>
<serial type='pty'>
<source path='/dev/pts/3'/>
<target type='isa-serial' port='0'>
<model name='isa-serial'/>
</target>
<alias name='serial0'/>
</serial>
<serial type='file'>
<source path='/tmp/master.console'/>
<target type='isa-serial' port='1'>
<model name='isa-serial'/>
</target>
<alias name='serial1'/>
</serial>
<console type='pty' tty='/dev/pts/3'>
<source path='/dev/pts/3'/>
<target type='serial' port='0'/>
<alias name='serial0'/>
</console>
<channel type='unix'>
<source mode='bind' path='/var/lib/libvirt/qemu/channel/target/domain-6-guest-packed/org.qemu.guest_agent.0'/>
<target type='virtio' name='org.qemu.guest_agent.0' state='connected'/>
<alias name='channel0'/>
<address type='virtio-serial' controller='0' bus='0' port='1'/>
</channel>
<input type='mouse' bus='ps2'>
<alias name='input0'/>
</input>
<input type='keyboard' bus='ps2'>
<alias name='input1'/>
</input>
<memballoon model='virtio'>
<alias name='balloon0'/>
<address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
</memballoon>
<rng model='virtio'>
<backend model='random'>/dev/urandom</backend>
<alias name='rng0'/>
<address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
</rng>
</devices>
<seclabel type='dynamic' model='selinux' relabel='yes'>
<label>system_u:system_r:svirt_t:s0:c145,c971</label>
<imagelabel>system_u:object_r:svirt_image_t:s0:c145,c971</imagelabel>
</seclabel>
<seclabel type='dynamic' model='dac' relabel='yes'>
<label>+0:+0</label>
<imagelabel>+0:+0</imagelabel>
</seclabel>
<qemu:commandline>
<qemu:arg value='-set'/>
<qemu:arg value='device.net0.packed=on'/>
<qemu:arg value='-set'/>
<qemu:arg value='device.net1.packed=on'/>
</qemu:commandline>
</domain>
[root@netqe30 ~]#
(In reply to Jean-Tsung Hsiao from comment #17) > Hi Eugenio, > I forgot to mention that "ping -f" ran about 1-hour plus before I killed it. > Attached below are configurations on Guest and Host. > Anything else to be tested before setting the status to VERIFIED. > Thanks! > Jean > Sounds great! Thank you very much! > *** Guest *** > [root@localhost ~]# uname -r > 4.18.0-199.el8.x86_64 > [root@localhost ~]# rpm -q dpdk > dpdk-19.11-4.el8.x86_64 > [root@localhost ~]# > > [root@localhost ~]# ip a > 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group > default qlen 1000 > link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 > inet 127.0.0.1/8 scope host lo > valid_lft forever preferred_lft forever > inet6 ::1/128 scope host > valid_lft forever preferred_lft forever > 2: enp0s3: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN > group default qlen 1000 > link/ether 52:54:00:83:b5:89 brd ff:ff:ff:ff:ff:ff > inet 10.0.0.1/24 scope global enp0s3 > valid_lft forever preferred_lft forever > 4: enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state > UP group default qlen 1000 > link/ether 52:54:00:25:80:18 brd ff:ff:ff:ff:ff:ff > inet 192.168.122.202/24 brd 192.168.122.255 scope global dynamic > noprefixroute enp1s0 > valid_lft 3428sec preferred_lft 3428sec > inet6 fe80::5054:ff:fe25:8018/64 scope link noprefixroute > valid_lft forever preferred_lft forever > [root@localhost ~]# ip netns exec server ip a > 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group > default qlen 1000 > link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 > inet 127.0.0.1/8 scope host lo > valid_lft forever preferred_lft forever > inet6 ::1/128 scope host > valid_lft forever preferred_lft forever > 3: enp0s9: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN > group default qlen 1000 > link/ether 52:54:00:24:ca:f4 brd ff:ff:ff:ff:ff:ff > inet 10.0.0.2/24 scope global enp0s9 > valid_lft forever 
preferred_lft forever > inet6 fe80::5054:ff:fe24:caf4/64 scope link > valid_lft forever preferred_lft forever > [root@localhost ~]# > > *** Host *** > > [root@netqe30 ~]# uname -r > 4.18.0-193.el8.x86_64 > [root@netqe30 ~]# rpm -q dpdk > dpdk-19.11-4.el8.x86_64 > [root@netqe30 ~]# > > [root@netqe30 ~]# virsh dumpxml guest-packed > <domain type='kvm' id='6' > xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'> > <name>guest-packed</name> > <uuid>f693e08d-3ed8-49e8-89a7-c99b37ca0aa0</uuid> > <metadata> > <libosinfo:libosinfo > xmlns:libosinfo="http://libosinfo.org/xmlns/libvirt/domain/1.0"> > <libosinfo:os id="http://redhat.com/rhel/8-unknown"/> > </libosinfo:libosinfo> > </metadata> > <memory unit='KiB'>8388608</memory> > <currentMemory unit='KiB'>8388608</currentMemory> > <memoryBacking> > <hugepages> > <page size='1048576' unit='KiB'/> > </hugepages> > </memoryBacking> > <vcpu placement='static'>5</vcpu> > <cputune> > <vcpupin vcpu='0' cpuset='0'/> > <vcpupin vcpu='1' cpuset='1'/> > <vcpupin vcpu='2' cpuset='9'/> > <vcpupin vcpu='3' cpuset='3'/> > <vcpupin vcpu='4' cpuset='11'/> > <emulatorpin cpuset='4,12'/> > </cputune> > <resource> > <partition>/machine</partition> > </resource> > <os> > <type arch='x86_64' machine='pc-q35-5.0'>hvm</type> > <boot dev='hd'/> > </os> > <features> > <acpi/> > <apic/> > </features> > <cpu mode='custom' match='exact' check='full'> > <model fallback='forbid'>Haswell-noTSX-IBRS</model> > <vendor>Intel</vendor> > <feature policy='require' name='vme'/> > <feature policy='require' name='ss'/> > <feature policy='require' name='vmx'/> > <feature policy='require' name='f16c'/> > <feature policy='require' name='rdrand'/> > <feature policy='require' name='hypervisor'/> > <feature policy='require' name='arat'/> > <feature policy='require' name='tsc_adjust'/> > <feature policy='require' name='umip'/> > <feature policy='require' name='md-clear'/> > <feature policy='require' name='stibp'/> > <feature policy='require' 
name='arch-capabilities'/> > <feature policy='require' name='ssbd'/> > <feature policy='require' name='xsaveopt'/> > <feature policy='require' name='pdpe1gb'/> > <feature policy='require' name='abm'/> > <feature policy='require' name='skip-l1dfl-vmentry'/> > <numa> > <cell id='0' cpus='0-4' memory='8388608' unit='KiB' > memAccess='shared'/> > </numa> > </cpu> > <clock offset='utc'> > <timer name='rtc' tickpolicy='catchup'/> > <timer name='pit' tickpolicy='delay'/> > <timer name='hpet' present='no'/> > </clock> > <on_poweroff>destroy</on_poweroff> > <on_reboot>restart</on_reboot> > <on_crash>destroy</on_crash> > <pm> > <suspend-to-mem enabled='no'/> > <suspend-to-disk enabled='no'/> > </pm> > <devices> > <emulator>/root/qemu/x86_64-softmmu/qemu-system-x86_64</emulator> Just for the record, needed to use upstream qemu 5.0.0 version to accept device.net0.packed=on argument. > <disk type='file' device='disk'> > <driver name='qemu' type='qcow2'/> > <source file='/var/lib/libvirt/images/master.qcow2'/> > <backingStore/> > <target dev='vda' bus='virtio'/> > <alias name='virtio-disk0'/> > <address type='pci' domain='0x0000' bus='0x04' slot='0x00' > function='0x0'/> > </disk> > <controller type='usb' index='0' model='qemu-xhci' ports='15'> > <alias name='usb'/> > <address type='pci' domain='0x0000' bus='0x02' slot='0x00' > function='0x0'/> > </controller> > <controller type='sata' index='0'> > <alias name='ide'/> > <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' > function='0x2'/> > </controller> > <controller type='pci' index='0' model='pcie-root'> > <alias name='pcie.0'/> > </controller> > <controller type='pci' index='1' model='pcie-root-port'> > <model name='pcie-root-port'/> > <target chassis='1' port='0x8'/> > <alias name='pci.1'/> > <address type='pci' domain='0x0000' bus='0x00' slot='0x01' > function='0x0' multifunction='on'/> > </controller> > <controller type='pci' index='2' model='pcie-root-port'> > <model name='pcie-root-port'/> > <target chassis='2' 
> port='0x9'/>
> <alias name='pci.2'/>
> <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
> </controller>
> <controller type='pci' index='3' model='pcie-root-port'>
> <model name='pcie-root-port'/>
> <target chassis='3' port='0xa'/>
> <alias name='pci.3'/>
> <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
> </controller>
> <controller type='pci' index='4' model='pcie-root-port'>
> <model name='pcie-root-port'/>
> <target chassis='4' port='0xb'/>
> <alias name='pci.4'/>
> <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x3'/>
> </controller>
> <controller type='pci' index='5' model='pcie-root-port'>
> <model name='pcie-root-port'/>
> <target chassis='5' port='0xc'/>
> <alias name='pci.5'/>
> <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x4'/>
> </controller>
> <controller type='pci' index='6' model='pcie-root-port'>
> <model name='pcie-root-port'/>
> <target chassis='6' port='0xd'/>
> <alias name='pci.6'/>
> <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x5'/>
> </controller>
> <controller type='pci' index='7' model='pcie-root-port'>
> <model name='pcie-root-port'/>
> <target chassis='7' port='0xe'/>
> <alias name='pci.7'/>
> <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x6'/>
> </controller>
> <controller type='virtio-serial' index='0'>
> <alias name='virtio-serial0'/>
> <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
> </controller>
> <interface type='vhostuser'>
> <mac address='52:54:00:83:b5:89'/>
> <source type='unix' path='/tmp/vhost-user1' mode='client'/>
> <model type='virtio'/>
> <driver name='vhost' queues='2'/>
> <alias name='net0'/>
> <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
> </interface>
> <interface type='vhostuser'>
> <mac address='52:54:00:24:ca:f4'/>
> <source type='unix' path='/tmp/vhost-user2' mode='client'/>
> <target dev='vhost1'/>
> <model type='virtio'/>
> <driver name='vhost' queues='2'/>
> <alias name='net1'/>
> <address type='pci' domain='0x0000' bus='0x00' slot='0x09' function='0x0'/>
> </interface>
> <interface type='bridge'>
> <mac address='52:54:00:25:80:18'/>
> <source bridge='virbr0'/>
> <target dev='vnet0'/>
> <model type='virtio'/>
> <alias name='net2'/>
> <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
> </interface>
> <serial type='pty'>
> <source path='/dev/pts/3'/>
> <target type='isa-serial' port='0'>
> <model name='isa-serial'/>
> </target>
> <alias name='serial0'/>
> </serial>
> <serial type='file'>
> <source path='/tmp/master.console'/>
> <target type='isa-serial' port='1'>
> <model name='isa-serial'/>
> </target>
> <alias name='serial1'/>
> </serial>
> <console type='pty' tty='/dev/pts/3'>
> <source path='/dev/pts/3'/>
> <target type='serial' port='0'/>
> <alias name='serial0'/>
> </console>
> <channel type='unix'>
> <source mode='bind' path='/var/lib/libvirt/qemu/channel/target/domain-6-guest-packed/org.qemu.guest_agent.0'/>
> <target type='virtio' name='org.qemu.guest_agent.0' state='connected'/>
> <alias name='channel0'/>
> <address type='virtio-serial' controller='0' bus='0' port='1'/>
> </channel>
> <input type='mouse' bus='ps2'>
> <alias name='input0'/>
> </input>
> <input type='keyboard' bus='ps2'>
> <alias name='input1'/>
> </input>
> <memballoon model='virtio'>
> <alias name='balloon0'/>
> <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
> </memballoon>
> <rng model='virtio'>
> <backend model='random'>/dev/urandom</backend>
> <alias name='rng0'/>
> <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
> </rng>
> </devices>
> <seclabel type='dynamic' model='selinux' relabel='yes'>
> <label>system_u:system_r:svirt_t:s0:c145,c971</label>
> <imagelabel>system_u:object_r:svirt_image_t:s0:c145,c971</imagelabel>
> </seclabel>
> <seclabel type='dynamic' model='dac' relabel='yes'>
> <label>+0:+0</label>
> <imagelabel>+0:+0</imagelabel>
> </seclabel>
> <qemu:commandline>
> <qemu:arg value='-set'/>
> <qemu:arg value='device.net0.packed=on'/>
> <qemu:arg value='-set'/>
> <qemu:arg value='device.net1.packed=on'/>
> </qemu:commandline>
> </domain>
>
> [root@netqe30 ~]#

Another sanity test: namespace traffic going through the OVS-dpdk bridge --- 9.3 Gb of netperf/UDP_STREAM throughput.
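For reference, the `server` namespace used by this sanity test could have been set up along these lines. This is a sketch under assumptions: the namespace name `server` and interface names `enp0s9`/`enp0s3` match the `ip a` output in this comment, but the exact commands and the netserver invocation were not recorded and are guesses (all of this requires root):

```shell
# Assumed setup for the namespace sanity test (not from the original report).
ip netns add server                                # namespace seen in "ip netns exec server ip a"
ip link set enp0s9 netns server                    # move one vhost-user-backed NIC into it
ip netns exec server ip addr add 10.0.0.2/24 dev enp0s9
ip netns exec server ip link set enp0s9 up
ip netns exec server netserver -p 12865            # netperf server side (port from the client command)
ip addr add 10.0.0.1/24 dev enp0s3                 # client side stays in the root namespace
```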
[root@localhost ~]# ip addr add 10.0.0.1/24 dev enp0s3; netperf -H 10.0.0.2 -p 12865 -t UDP_STREAM -l 60
MIGRATED UDP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 10.0.0.2 () port 0 AF_INET
Socket Message Elapsed Messages
Size Size Time Okay Errors Throughput
bytes bytes secs # # 10^6bits/sec
212992 65507 60.00 1063568 0 9289.48
212992 60.00 1062987 9284.40
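As a quick cross-check, the send-side line of the netperf output above is internally consistent: messages times message size times 8 bits over the 60 s run reproduces the reported throughput.

```python
# Verify the UDP_STREAM send-side numbers: 1063568 messages of 65507 bytes
# in 60 s should come out near the reported 9289.48 * 10^6 bits/sec.
msgs, msg_bytes, secs = 1_063_568, 65_507, 60.0
mbps = msgs * msg_bytes * 8 / secs / 1e6
print(round(mbps, 2))
```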
[root@localhost ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether 52:54:00:f4:a4:26 brd ff:ff:ff:ff:ff:ff
inet 10.0.0.1/24 scope global enp0s3
valid_lft forever preferred_lft forever
4: enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether 52:54:00:0b:51:d6 brd ff:ff:ff:ff:ff:ff
inet 192.168.122.202/24 brd 192.168.122.255 scope global dynamic noprefixroute enp1s0
valid_lft 3246sec preferred_lft 3246sec
inet6 fe80::5054:ff:fe0b:51d6/64 scope link noprefixroute
valid_lft forever preferred_lft forever
[root@localhost ~]# ip netns exec server ip a
1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
3: enp0s9: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether 52:54:00:89:04:34 brd ff:ff:ff:ff:ff:ff
inet 10.0.0.2/24 scope global enp0s9
valid_lft forever preferred_lft forever
inet6 fe80::5054:ff:fe89:434/64 scope link
valid_lft forever preferred_lft forever
[root@localhost ~]#
[root@localhost ~]# uname -r
4.18.0-202.el8.x86_64
[root@localhost ~]#
[root@netqe7 jhsiao]# ovs-vsctl show
1c1c3b83-f393-4823-ba5a-945391a64b31
    Bridge ovsbr0
        datapath_type: netdev
        Port vhost-user1
            Interface vhost-user1
                type: dpdkvhostuserclient
                options: {n_rxq="1", vhost-server-path="/tmp/vhost-user1"}
        Port ovsbr0
            Interface ovsbr0
                type: internal
        Port vhost-user2
            Interface vhost-user2
                type: dpdkvhostuserclient
                options: {n_rxq="1", vhost-server-path="/tmp/vhost-user2"}
    ovs_version: "2.13.0"
[root@netqe7 jhsiao]#
[root@netqe7 jhsiao]# rpm -q openvswitch2.13
openvswitch2.13-2.13.0-25.el8fdp.x86_64
[root@netqe7 jhsiao]# rpm -q qemu-kvm
qemu-kvm-4.2.0-19.module+el8.3.0+6473+93e27135.x86_64
[root@netqe7 jhsiao]# uname -r
4.18.0-200.el8.x86_64
[root@netqe7 jhsiao]#
Attached below are the guest XML file and the OVS-dpdk script.
*** Guest xml file ***
<domain type='kvm' id='4'>
<name>guest-packed</name>
<uuid>a3a9ea64-0fc9-46f3-af59-94ac0e01c423</uuid>
<metadata>
<libosinfo:libosinfo xmlns:libosinfo="http://libosinfo.org/xmlns/libvirt/domain/1.0">
<libosinfo:os id="http://redhat.com/rhel/8-unknown"/>
</libosinfo:libosinfo>
</metadata>
<memory unit='KiB'>8388608</memory>
<currentMemory unit='KiB'>8388608</currentMemory>
<memoryBacking>
<hugepages>
<page size='1048576' unit='KiB'/>
</hugepages>
</memoryBacking>
<vcpu placement='static'>5</vcpu>
<cputune>
<vcpupin vcpu='0' cpuset='0'/>
<vcpupin vcpu='1' cpuset='1'/>
<vcpupin vcpu='2' cpuset='3'/>
<vcpupin vcpu='3' cpuset='5'/>
<vcpupin vcpu='4' cpuset='7'/>
<emulatorpin cpuset='8,10'/>
</cputune>
<resource>
<partition>/machine</partition>
</resource>
<os>
<type arch='x86_64' machine='pc-q35-rhel8.2.0'>hvm</type>
<boot dev='hd'/>
</os>
<features>
<acpi/>
<apic/>
</features>
<cpu mode='custom' match='exact' check='full'>
<model fallback='forbid'>Haswell-noTSX-IBRS</model>
<vendor>Intel</vendor>
<feature policy='require' name='vme'/>
<feature policy='require' name='ss'/>
<feature policy='require' name='vmx'/>
<feature policy='require' name='f16c'/>
<feature policy='require' name='rdrand'/>
<feature policy='require' name='hypervisor'/>
<feature policy='require' name='arat'/>
<feature policy='require' name='tsc_adjust'/>
<feature policy='require' name='umip'/>
<feature policy='require' name='md-clear'/>
<feature policy='require' name='stibp'/>
<feature policy='require' name='arch-capabilities'/>
<feature policy='require' name='ssbd'/>
<feature policy='require' name='xsaveopt'/>
<feature policy='require' name='pdpe1gb'/>
<feature policy='require' name='abm'/>
<feature policy='require' name='skip-l1dfl-vmentry'/>
<numa>
<cell id='0' cpus='0-4' memory='8388608' unit='KiB' memAccess='shared'/>
</numa>
</cpu>
<clock offset='utc'>
<timer name='rtc' tickpolicy='catchup'/>
<timer name='pit' tickpolicy='delay'/>
<timer name='hpet' present='no'/>
</clock>
<on_poweroff>destroy</on_poweroff>
<on_reboot>restart</on_reboot>
<on_crash>destroy</on_crash>
<pm>
<suspend-to-mem enabled='no'/>
<suspend-to-disk enabled='no'/>
</pm>
<devices>
<emulator>/usr/libexec/qemu-kvm</emulator>
<disk type='file' device='disk'>
<driver name='qemu' type='qcow2'/>
<source file='/home/images/master.qcow2' index='1'/>
<backingStore/>
<target dev='vda' bus='virtio'/>
<alias name='virtio-disk0'/>
<address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
</disk>
<controller type='usb' index='0' model='qemu-xhci' ports='15'>
<alias name='usb'/>
<address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
</controller>
<controller type='sata' index='0'>
<alias name='ide'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
</controller>
<controller type='pci' index='0' model='pcie-root'>
<alias name='pcie.0'/>
</controller>
<controller type='pci' index='1' model='pcie-root-port'>
<model name='pcie-root-port'/>
<target chassis='1' port='0x8'/>
<alias name='pci.1'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0' multifunction='on'/>
</controller>
<controller type='pci' index='2' model='pcie-root-port'>
<model name='pcie-root-port'/>
<target chassis='2' port='0x9'/>
<alias name='pci.2'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
</controller>
<controller type='pci' index='3' model='pcie-root-port'>
<model name='pcie-root-port'/>
<target chassis='3' port='0xa'/>
<alias name='pci.3'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
</controller>
<controller type='pci' index='4' model='pcie-root-port'>
<model name='pcie-root-port'/>
<target chassis='4' port='0xb'/>
<alias name='pci.4'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x3'/>
</controller>
<controller type='pci' index='5' model='pcie-root-port'>
<model name='pcie-root-port'/>
<target chassis='5' port='0xc'/>
<alias name='pci.5'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x4'/>
</controller>
<controller type='pci' index='6' model='pcie-root-port'>
<model name='pcie-root-port'/>
<target chassis='6' port='0xd'/>
<alias name='pci.6'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x5'/>
</controller>
<controller type='pci' index='7' model='pcie-root-port'>
<model name='pcie-root-port'/>
<target chassis='7' port='0xe'/>
<alias name='pci.7'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x6'/>
</controller>
<controller type='virtio-serial' index='0'>
<alias name='virtio-serial0'/>
<address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
</controller>
<interface type='vhostuser'>
<mac address='52:54:00:f4:a4:26'/>
<source type='unix' path='/tmp/vhost-user1' mode='server'/>
<target dev='vhost-user1'/>
<model type='virtio'/>
<driver name='vhost'/>
<alias name='net0'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
</interface>
<interface type='vhostuser'>
<mac address='52:54:00:89:04:34'/>
<source type='unix' path='/tmp/vhost-user2' mode='server'/>
<target dev='vhost-user2'/>
<model type='virtio'/>
<driver name='vhost'/>
<alias name='net1'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x09' function='0x0'/>
</interface>
<interface type='bridge'>
<mac address='52:54:00:0b:51:d6'/>
<source bridge='virbr0'/>
<target dev='vnet0'/>
<model type='virtio'/>
<alias name='net2'/>
<address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
</interface>
<serial type='pty'>
<source path='/dev/pts/2'/>
<target type='isa-serial' port='0'>
<model name='isa-serial'/>
</target>
<alias name='serial0'/>
</serial>
<serial type='file'>
<source path='/tmp/master.console'/>
<target type='isa-serial' port='1'>
<model name='isa-serial'/>
</target>
<alias name='serial1'/>
</serial>
<console type='pty' tty='/dev/pts/2'>
<source path='/dev/pts/2'/>
<target type='serial' port='0'/>
<alias name='serial0'/>
</console>
<channel type='unix'>
<source mode='bind' path='/var/lib/libvirt/qemu/channel/target/domain-4-guest-packed/org.qemu.guest_agent.0'/>
<target type='virtio' name='org.qemu.guest_agent.0' state='connected'/>
<alias name='channel0'/>
<address type='virtio-serial' controller='0' bus='0' port='1'/>
</channel>
<input type='mouse' bus='ps2'>
<alias name='input0'/>
</input>
<input type='keyboard' bus='ps2'>
<alias name='input1'/>
</input>
<memballoon model='virtio'>
<alias name='balloon0'/>
<address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
</memballoon>
<rng model='virtio'>
<backend model='random'>/dev/urandom</backend>
<alias name='rng0'/>
<address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
</rng>
</devices>
<seclabel type='dynamic' model='selinux' relabel='yes'>
<label>system_u:system_r:svirt_t:s0:c544,c625</label>
<imagelabel>system_u:object_r:svirt_image_t:s0:c544,c625</imagelabel>
</seclabel>
<seclabel type='dynamic' model='dac' relabel='yes'>
<label>+107:+1001</label>
<imagelabel>+107:+1001</imagelabel>
</seclabel>
</domain>
[root@netqe7 jhsiao]#
*** OVS-dpdk script ***
[root@netqe7 jhsiao]# cat ovs_dpdk_dpdkhostuserclient.sh
ovs-vsctl set Open_vSwitch . other_config={}
ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-lcore-mask=0x0004
ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-socket-mem="4096,4096"
ovs-vsctl --no-wait set Open_vSwitch . other_config:pmd-cpu-mask=0x4040
ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-init=true
# config ovs-dpdk bridge with dpdk0, dpdk1, vhost-user1 and vhost-user2
ovs-vsctl --if-exists del-br ovsbr0
ovs-vsctl add-br ovsbr0 -- set bridge ovsbr0 datapath_type=netdev
ovs-vsctl add-port ovsbr0 vhost-user1 -- set Interface vhost-user1 type=dpdkvhostuserclient options:vhost-server-path=/tmp/vhost-user1 ofport_request=20 options:n_rxq=1
ovs-vsctl add-port ovsbr0 vhost-user2 -- set Interface vhost-user2 type=dpdkvhostuserclient options:vhost-server-path=/tmp/vhost-user2 ofport_request=21 options:n_rxq=1
ovs-ofctl del-flows ovsbr0
ovs-ofctl add-flow ovsbr0 in_port=20,actions=output:21
ovs-ofctl add-flow ovsbr0 in_port=21,actions=output:20
ovs-ofctl dump-flows ovsbr0
[root@netqe7 jhsiao]#
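After running the script above, the resulting datapath can be sanity-checked with standard OVS commands (diagnostic only; the expected output varies per host, so none is shown):

```shell
# Confirm dpdk-init took effect and both vhost-user client ports exist.
ovs-vsctl get Open_vSwitch . other_config
ovs-vsctl show
# Check that the PMD threads pinned by pmd-cpu-mask are polling.
ovs-appctl dpif-netdev/pmd-stats-show
# Verify the two cross-connect flows (port 20 <-> port 21) are installed
# and their packet counters increase while traffic runs.
ovs-ofctl dump-flows ovsbr0
```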
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2020:2295

The needinfo request[s] on this closed bug have been removed as they have been unresolved for 1000 days
Description of problem:

Version-Release number of selected component (if applicable):
DPDK 19.11

How reproducible:
Very likely, but not always.

Steps to Reproduce:
Run the current testpmd vhost_user as:

./app/testpmd -l 6,7,8 --vdev='net_vhost1,iface=/tmp/vhost-user1' --vdev='net_vhost2,iface=/tmp/vhost-user2' -- -a -i --rxq=1 --txq=1 --txd=1024 --forward-mode=rxonly

Start qemu using packed=on on the interface:

-netdev vhost-user,chardev=charnet1,id=hostnet1 -device virtio-net-pci,rx_queue_size=256,...,packed=on

And start to tx in the guest using:

./dpdk/build/app/testpmd -l 1,2 --vdev=eth_af_packet0,iface=eth0 -- \
  --forward-mode=txonly --txq=1 --txd=256 --auto-start --txpkts 1500 \
  --stats-period 1

Actual results:
After the first burst of packets (512 or a little more), sendto() starts to return EBUSY: kernel NAPI refuses to send more packets to the virtio_net device until it frees the old skbs. However, the virtio_net driver is unable to free the old buffers, since the host does not return them in `vhost_flush_dequeue_packed` until the shadow queue is full except for MAX_PKT_BURST (32) packets. Sometimes we are lucky and reach this point, or the packets are small enough to fill the queue and trigger a flush, but if the packets and the virtqueue are big enough, we will not be able to tx anymore.

Expected results:
Guest's testpmd is able to transmit.

Additional info:
DPDK Upstream bug: https://bugs.dpdk.org/show_bug.cgi?id=383
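The stall condition described above can be illustrated with a toy model. This is a deliberate simplification, not the actual vhost code: it only captures the idea that `vhost_flush_dequeue_packed` returns used buffers to the guest once the shadow queue holds roughly the ring size minus MAX_PKT_BURST entries, while the guest can never have more than its own ring depth in flight; the function name and the exact threshold arithmetic are assumptions.

```python
MAX_PKT_BURST = 32  # burst size mentioned in the description above

def tx_stalls(guest_txd: int, host_txd: int) -> bool:
    """Toy model of the reported bug: the host flushes used descriptors
    only once its shadow queue reaches about host_txd - MAX_PKT_BURST
    entries. If the guest ring (guest_txd packets in flight at most)
    can never reach that threshold, no buffer is ever returned, so
    NAPI keeps failing sendto() with EBUSY."""
    flush_threshold = host_txd - MAX_PKT_BURST
    return guest_txd < flush_threshold

# The reported setup: guest --txd=256 against host --txd=1024 stalls.
print(tx_stalls(256, 1024))
# A guest ring as deep as the host's crosses the threshold and keeps going.
print(tx_stalls(1024, 1024))
```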