Bug 1653453 - "sockets" vcpu topology option needed for best performance
Summary: "sockets" vcpu topology option needed for best performance
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Container Native Virtualization (CNV)
Classification: Red Hat
Component: Virtualization
Version: 1.1
Hardware: All
OS: Linux
Priority: high
Severity: high
Target Milestone: ---
Target Release: 1.4
Assignee: Martin Sivák
QA Contact: Xenia Lisovskaia
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2018-11-26 21:59 UTC by Jenifer Abrams
Modified: 2019-02-26 13:24 UTC
CC: 6 users

Fixed In Version: kubevirt-0.12-alpha.3
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2019-02-26 13:24:16 UTC
Target Upstream Version:
Embargoed:


Attachments


Links
Red Hat Product Errata RHEA-2019:0417 (last updated 2019-02-26 13:24:20 UTC)

Description Jenifer Abrams 2018-11-26 21:59:35 UTC
Description of problem:
Multiple cpu-bound microbenchmarks show a significant performance advantage when using "sockets" vcpu topology instead of "cores" topology. 

For a single 2vcpu guest on RHEL KVM, a topology of "sockets=2,cores=1,threads=1" had the following workload advantages over a 2vcpu guest with a topology of "sockets=1,cores=2,threads=1":
  * factor: +10%
  * hackbench 20: +11%
  * hackbench 40: +6%
  * lat_ctx: +50-75%
  * lat_proc fork: -5% 
  * lat_proc exec: +5%
  * lat_proc shell: +35%
  * ramsmp 64K&up: +20-28%
  * specjbb2005 (WH1): +10%
  * netperf loopback: +50-150%


It appears that KubeVirt does not recognize any "sockets=X" param in the VM yaml. I am still on CNV 1.1 for some customer work, but according to the current docs I don't believe this option exists in any newer CNV version either.

Having a KubeVirt feature to define and pass vcpu topology to libvirt would be helpful for performance-sensitive environments. It is also important to note that RHEL KVM/libvirt currently defaults to incrementing the "sockets=X" param to match the number of vcpus when no topology is specified, while the current CNV default differs and increments the "cores=X" value, since that is the only param recognized in the yaml. I am not sure whether this difference in default behavior is intentional and/or favorable for some reason, but at least having a 'sockets' option in the VM yaml will allow users to set a desired guest topology, which can affect the scheduler in the guest (I can share more details on this as the perf investigation continues).
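
To make the defaults difference concrete, here is a rough sketch for a 2-vcpu guest (the XML lines are illustrative of what each default produces, per the description above):

  # Current CNV VM yaml -- only "cores" is recognized:
  spec:
    domain:
      cpu:
        cores: 2
  # resulting CNV topology:
  #   <topology sockets='1' cores='2' threads='1'/>
  # RHEL KVM/libvirt default when no topology is specified:
  #   <topology sockets='2' cores='1' threads='1'/>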


Version-Release number of selected component (if applicable):
The comparison above was run on RHEL7.5 KVM as an example. 


Desired CNV feature details:

Recognize a sockets param in the VM yaml, ex:
spec:
  domain:
    cpu:
      cores: 1
      sockets: 2

and translate to libvirt xml:
  <topology sockets='2' cores='1' threads='1'/>
(which libvirt then translates to qemu):
  -smp 2,sockets=2,cores=1,threads=1
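
For reference, a rough sketch of how this could look in a full VMI manifest (the apiVersion, name, and memory values below are illustrative placeholders, and the rest of the spec is omitted for brevity):

  apiVersion: kubevirt.io/v1alpha2   # illustrative; use whatever API version your CNV release serves
  kind: VirtualMachineInstance
  metadata:
    name: vmi-two-sockets            # placeholder name
  spec:
    domain:
      cpu:
        sockets: 2                   # desired guest topology: 2 sockets x 1 core x 1 thread
        cores: 1
        threads: 1
      resources:
        requests:
          memory: 1Gi                # placeholder; disks/volumes omitted for brevity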

Comment 5 Jenifer Abrams 2019-01-04 17:34:48 UTC
Karel's PR to allow vcpu topology settings and to use sockets by default has been approved and is waiting to be merged:
https://github.com/kubevirt/kubevirt/pull/1839

Comment 6 Martin Sivák 2019-01-09 15:41:54 UTC
The patches were merged to kubevirt-0.12-alpha.3

Comment 8 Xenia Lisovskaia 2019-01-30 13:28:13 UTC
Tests passed

Comment 13 errata-xmlrpc 2019-02-26 13:24:16 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2019:0417

