Bug 1462183 - [RFE] warn the user if trying to use hyperthreading on unsupported architecture
Status: NEW
Product: ovirt-engine
Classification: oVirt
Component: BLL.Virt
Version: 4.1.3.2
Hardware: x86_64 Linux
Priority: medium  Severity: medium
Assigned To: Michal Skrivanek
meital avital
Keywords: FutureFeature
Depends On:
Blocks:
 
Reported: 2017-06-16 07:50 EDT by jiyan
Modified: 2017-09-28 04:54 EDT
CC List: 7 users

See Also:
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed:
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: Virt
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
rule-engine: planning_ack?
rule-engine: devel_ack?
rule-engine: testing_ack?


Attachments
Logs for the needinfo (11.21 MB, application/x-tar)
2017-06-19 00:46 EDT, jiyan

Description jiyan 2017-06-16 07:50:28 EDT
Description of problem:
The CPU topology reported inside the guest by the 'lscpu' command differs from the configuration in RHV-M and from the libvirt dumpxml on the registered host.

Version-Release number of selected component (if applicable):
RHV-M server:
rhevm-4.1.3.2-0.1.el7.noarch
ovirt-engine-setup-plugin-ovirt-engine-4.1.3.2-0.1.el7.noarch

RHV-M registered host:
qemu-kvm-rhev-2.9.0-9.el7.x86_64
libvirt-3.2.0-9.el7.x86_64
kernel-3.10.0-679.el7.x86_64
vdsm-4.19.18-1.el7ev.x86_64

How reproducible:
100%

Steps to Reproduce:
1. In the RHV-M GUI, remove the 'CPU' filter from the 'none' scheduling policy, and set the cluster to use the 'none' scheduling policy.

2. In the RHV-M GUI, configure the data center with hosts and storage, then create a new VM called vm1 and confirm that it starts successfully.

3. Configure the 'System' settings of the VM as follows, and check that the VM still starts normally:
  Total Virtual CPUs: 100
  Virtual Sockets: 5
  Cores per Virtual Socket: 4
  Threads per Core: 5

4. After step 3, check the libvirt dumpxml on the registered host and run 'lscpu' in the guest (the arithmetic behind these numbers is sketched after the outputs below):
On the host, the libvirt dumpxml shows:
<vcpu placement='static' current='100'>160</vcpu>
  <cpu mode='custom' match='exact' check='full'>
    <topology sockets='8' cores='4' threads='5'/>
  </cpu>

In the VM/guest:
#lscpu
CPU(s):                100
On-line CPU(s) list:   0-99
Thread(s) per core:    1
Core(s) per socket:    20
Socket(s):             5
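
For reference, the numbers are consistent on the RHV-M side: 5 sockets x 4 cores x 5 threads = 100 vCPUs, and the 160 maxvcpus in the dumpxml corresponds to the 8 sockets x 4 cores x 5 threads topology. Below is a minimal sketch of that arithmetic and of the guest-visible layout, with the values copied from the outputs above (plain Java, for illustration only; not ovirt-engine code):

class TopologyArithmetic {
    public static void main(String[] args) {
        // Topology configured in the RHV-M GUI (step 3)
        int sockets = 5, cores = 4, threads = 5;
        int configured = sockets * cores * threads;            // 5 * 4 * 5 = 100 current vCPUs

        // Topology written to the domain XML (step 4); sockets are raised to 8
        int xmlSockets = 8;
        int maxVcpus = xmlSockets * cores * threads;           // 8 * 4 * 5 = 160 = <vcpu>160</vcpu>

        // Topology the guest actually reports ('lscpu' output above)
        int guestSockets = 5, guestCores = 20, guestThreads = 1;
        int guestTotal = guestSockets * guestCores * guestThreads;   // 5 * 20 * 1 = 100

        System.out.printf("configured=%d maxvcpus=%d guest-visible=%d%n",
                configured, maxVcpus, guestTotal);
    }
}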


Actual results:
As shown in step 4: the guest reports 5 sockets, 20 cores per socket and 1 thread per core instead of the configured 5 sockets, 4 cores per socket and 5 threads per core.

Expected results:
The CPU topology reported in the guest by 'lscpu', the configuration in RHV-M, and the libvirt dumpxml on the registered host should all be consistent.

Additional info:
Comment 2 jiyan 2017-06-19 00:46 EDT
Created attachment 1289009 [details]
Logs for the needinfo

The attachment includes the following files:
log/RHV-server-engine.log
log/RHV-host-vdsm.log
log/RHV-host-qemu-vm1.log
log/RHV-host-libvirtd.log
log/RHV-guest-vm1-cpudmesg.log
log/RHV-guest-vm1-lscpu.log
Comment 3 Tomas Jelinek 2017-06-21 05:13:21 EDT
so, the qemu cmdline looks like:
-smp 100,maxcpus=160,sockets=8,cores=4,threads=5

(the sockets = 8 is there to allow hotplug: maxcpus=160 divided by 4 cores x 5 threads gives 8 sockets)

The problem here is that the architecture is AMD, which does not support hyperthreading, so qemu changes "Thread(s) per core" to 1 and adjusts "Core(s) per socket" and "Socket(s)" to match the configured number of CPUs.

Turning this into an RFE to warn the user when they try to use hyperthreading on an unsupported architecture.
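
A rough sketch of the kind of check this RFE asks for is below; the class and method names are hypothetical and only illustrate detecting the combination of "threads per core > 1" with a host that does not expose SMT. This is not existing ovirt-engine code:

class SmtTopologyWarning {

    /**
     * Hypothetical validation: returns a warning string when the VM asks for
     * more than one thread per core but the host CPU does not support
     * SMT/hyperthreading, or null when the topology is fine. How
     * hostSupportsSmt would be determined (host capabilities, CPU
     * vendor/model) is deliberately left open in this sketch.
     */
    static String validate(int threadsPerCore, boolean hostSupportsSmt) {
        if (threadsPerCore > 1 && !hostSupportsSmt) {
            return "The VM requests " + threadsPerCore + " threads per core, but the host "
                 + "CPU does not support hyperthreading; the guest will see 1 thread per "
                 + "core and a different core/socket layout than configured.";
        }
        return null;
    }

    public static void main(String[] args) {
        // The scenario from this bug: 5 threads per core on a non-SMT AMD host.
        System.out.println(validate(5, false));
    }
}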
