Description of problem:

Multiple CPU-bound microbenchmarks show a significant performance advantage when using a "sockets" vCPU topology instead of a "cores" topology. For a single 2-vCPU guest on RHEL KVM, a topology of "sockets=2,cores=1,threads=1" had the following workload advantages over a 2-vCPU guest with a topology of "sockets=1,cores=2,threads=1":

* factor: +10%
* hackbench 20: +11%
* hackbench 40: +6%
* lat_ctx: +50-75%
* lat_proc fork: -5%
* lat_proc exec: +5%
* lat_proc shell: +35%
* ramsmp 64K & up: +20-28%
* SPECjbb2005 (WH1): +10%
* netperf loopback: +50-150%

It appears that KubeVirt does not recognize any "sockets=X" parameter in the VM yaml. I am still on CNV 1.1 for some customer work, but according to the current documentation this option is not available in any newer CNV version either. A KubeVirt feature to define a vCPU topology and pass it to libvirt would be helpful for performance-sensitive environments.

It is also important to note that RHEL KVM/libvirt currently defaults to incrementing the "sockets=X" parameter with the number of vCPUs when no topology is specified, whereas the current CNV default increments the "cores=X" value, since that is the only parameter recognized in the yaml. I am not sure whether this difference in default behavior is intentional and/or favorable for some reason, but at minimum a "sockets" option in the VM yaml would let users set a desired guest topology, which can affect the scheduler in the guest (I can share more details on this as the performance investigation continues).

Version-Release number of selected component (if applicable):
The comparison above was run on RHEL 7.5 KVM as an example.

Desired CNV feature details:
Recognize a sockets parameter in the VM yaml, e.g.:

  spec:
    domain:
      cpu:
        cores: 1
        sockets: 2

and translate it to libvirt XML:

  <topology sockets='2' cores='1' threads='1'/>

which libvirt then translates to the qemu command line:

  -smp 2,sockets=2,cores=1,threads=1
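As a concrete illustration of the requested feature, here is a minimal sketch of where the field would sit in a full VirtualMachineInstance manifest. The apiVersion, name, and memory request are placeholders chosen for this example only (disks and volumes are omitted for brevity); the relevant part is the spec.domain.cpu block:

  apiVersion: kubevirt.io/v1alpha2    # placeholder; use the API version your CNV release ships
  kind: VirtualMachineInstance
  metadata:
    name: vmi-topology-example        # placeholder name
  spec:
    domain:
      cpu:
        sockets: 2                    # desired guest topology: 2 sockets
        cores: 1                      # 1 core per socket
        threads: 1                    # 1 thread per core
      resources:
        requests:
          memory: 2Gi                 # placeholder memory request

With a topology like this, the guest should see 2 sockets with 1 core each (e.g. via lscpu), matching the "sockets=2,cores=1,threads=1" case measured above.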
Karel's PR to allow vCPU topology settings and use sockets by default has been approved and is waiting to be merged: https://github.com/kubevirt/kubevirt/pull/1839
The patches were merged into kubevirt-0.12-alpha.3.
Tests passed
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2019:0417