Description of problem:

We have our application running on a FreeBSD guest with our own user-space virtio drivers (not DPDK). In an L3 throughput test we observe 3 to 4 Gbps with a 1500-byte MTU and TSO/LRO disabled. Plenty of CPU headroom remains for both the vhost-net thread and our application. We used proper NUMA bindings to place the vhost thread, the application vCPUs, and the NIC interrupt affinity.

Given this, we would like to understand: what is the maximum expected performance with vhost-net on Intel 82599 NICs?

Version-Release number of selected component (if applicable):

How reproducible:

Steps to Reproduce:
1.
2.
3.

Actual results:

Expected results:

Additional info:
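For context on what 3 to 4 Gbps at 1500 MTU without TSO/LRO implies, a small arithmetic sketch (my own back-of-the-envelope numbers, not from the report) converts the observed L3 throughput into packets per second and compares it with the 10GbE wire limit of the 82599. Without TSO, every one of these packets crosses the virtio ring individually, so the per-packet rate, not the byte rate, is usually the bottleneck:

```python
MTU = 1500                      # IP packet size in bytes (no TSO/LRO, full-MTU packets)
ETH_OVERHEAD = 14 + 4 + 8 + 12  # Ethernet header + FCS + preamble + inter-frame gap

def l3_gbps_to_pps(gbps):
    """Packets/sec needed to carry a given L3 throughput with full-MTU packets."""
    return int(gbps * 1e9 / (MTU * 8))

def wire_rate_pps(link_gbps=10):
    """Max full-MTU packets/sec the wire can carry, including framing overhead."""
    return int(link_gbps * 1e9 / ((MTU + ETH_OVERHEAD) * 8))

print(l3_gbps_to_pps(4))  # packets/sec at the observed 4 Gbps
print(wire_rate_pps())    # 10GbE line-rate limit for 1500-byte packets
```

So the observed 4 Gbps corresponds to roughly 333 kpps, against a wire limit of roughly 813 kpps on 10GbE; the gap is per-packet processing cost in the virtio/vhost-net path, which is exactly what TSO/LRO would amortize.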
(In reply to ramanjaneyulu.talla from comment #0)
> Description of problem:
>
> We have our application running on FreeBSD OS with our user space Virt IO
> drivers (not DPDK). When we did L3 throughput test, we are seeing 3 to 4Gbps
> of throughput is observed with 1500 MTU with out TSO, LRO. We have seen
> plenty of CPU is left for vHost-net thread and also our applications. We
> have used proper NUMA bindings to schedule vHost thread, application vCPU
> and NIC affinity.
>
> With this, we would like to understand, what will be maximum performance
> with vHost-net an Intel 82599 NICs?
>
> Version-Release number of selected component (if applicable):

Please indicate what versions of software are running on the host: kernel, qemu, and libvirt, as well as the kernel version in the guest. Also include the libvirt XML and the qemu-kvm command line used.

> How reproducible:

Did you try the same L3 throughput test with DPDK and/or virtio-net in a RHEL guest? What results did you get?

> Steps to Reproduce:
> 1.
> 2.
> 3.
>
> Actual results:
>
> Expected results:
>
> Additional info:
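A comparison run in a RHEL guest, as asked above, might look like the following sketch. The interface name `eth0` and host/guest roles are assumptions, not from the report; the commands require root and real hardware, so this is illustrative only:

```shell
# Guest: confirm the test matches the reported conditions
ip link set dev eth0 mtu 1500
ethtool -K eth0 tso off gso off lro off   # disable segmentation/receive offloads
ethtool -k eth0 | grep -E 'tcp-segmentation|large-receive'  # verify state

# Peer machine: run an iperf3 server
iperf3 -s

# Guest: L3 throughput test toward the peer (address is a placeholder)
iperf3 -c 192.0.2.1 -t 60 -P 4
```

Repeating the same run with the offloads left on, and again with a DPDK-based datapath, would separate the per-packet vhost-net cost from any guest-driver effects.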
No response in over a month, so I am closing this bug. I'll forward this BZ to Red Hat's performance team so they know someone was curious about the answer to this question. Cheers!