Bug 1462183 - [RFE] warn the user if trying to use hyperthreading on unsupported architecture
Summary: [RFE] warn the user if trying to use hyperthreading on unsupported architecture
Keywords:
Status: CLOSED WORKSFORME
Alias: None
Product: ovirt-engine
Classification: oVirt
Component: BLL.Virt
Version: 4.1.3.2
Hardware: x86_64
OS: Linux
Priority: medium
Severity: medium
Target Milestone: ---
: ---
Assignee: Michal Skrivanek
QA Contact: meital avital
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2017-06-16 11:50 UTC by jiyan
Modified: 2018-07-06 16:17 UTC (History)
8 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2018-07-06 16:17:59 UTC
oVirt Team: Virt
Embargoed:
rule-engine: planning_ack?
rule-engine: devel_ack?
rule-engine: testing_ack?


Attachments (Terms of Use)
Logs for the needinfo (11.21 MB, application/x-tar)
2017-06-19 04:46 UTC, jiyan
no flags Details

Description jiyan 2017-06-16 11:50:28 UTC
Description of problem:
The CPU topology shown in the guest by the 'lscpu' command differs from the configuration in RHV-M and from the libvirt dumpxml on the registered host.

Version-Release number of selected component (if applicable):
RHV-M server:
rhevm-4.1.3.2-0.1.el7.noarch
ovirt-engine-setup-plugin-ovirt-engine-4.1.3.2-0.1.el7.noarch

RHV-M registered host:
qemu-kvm-rhev-2.9.0-9.el7.x86_64
libvirt-3.2.0-9.el7.x86_64
kernel-3.10.0-679.el7.x86_64
vdsm-4.19.18-1.el7ev.x86_64

How reproducible:
100%

Steps to Reproduce:
1. In the RHV-M GUI, remove the 'CPU' filter from the 'none' scheduling policy, and set the cluster to use the 'none' scheduling policy.

2. In the RHV-M GUI, configure the data center with hosts and storage, then create a new VM called vm1 and confirm it starts successfully.

3. Configure the VM's 'System' settings as follows, and check that the VM still starts normally:
  Total Virtual CPUs: 100
  Virtual Sockets: 5
  Cores per Virtual Socket: 4
  Threads per Core: 5

4. After step 3, check the libvirt dumpxml on the registered host and run 'lscpu' in the guest:
On the host, libvirt dumpxml shows:
<vcpu placement='static' current='100'>160</vcpu>
  <cpu mode='custom' match='exact' check='full'>
    <topology sockets='8' cores='4' threads='5'/>
  </cpu>

In the VM/guest:
#lscpu
CPU(s):                100
On-line CPU(s) list:   0-99
Thread(s) per core:    1
Core(s) per socket:    20
Socket(s):             5
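The mismatch above can be summarized arithmetically: each layer reports a different sockets x cores x threads split, but the total CPU count stays consistent (the libvirt value of 8 sockets corresponds to the hotplug maximum of 160). A minimal sketch, using only the numbers from this report:

```python
# Total CPUs implied by each layer's reported topology (sockets x cores x threads).
# All values are taken from the report above.
def total_cpus(sockets, cores, threads):
    return sockets * cores * threads

rhvm_config = total_cpus(5, 4, 5)    # as configured in RHV-M
libvirt_max = total_cpus(8, 4, 5)    # dumpxml topology (maxvcpus, for hotplug)
guest_lscpu = total_cpus(5, 20, 1)   # what the guest actually reports

print(rhvm_config, libvirt_max, guest_lscpu)  # 100 160 100
```

So the guest still sees 100 online CPUs; only the sockets/cores/threads breakdown differs from what was configured.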


Actual results:
As shown in step 4.

Expected results:
The CPU topology shown in the guest by 'lscpu', the configuration in RHV-M, and the libvirt dumpxml on the registered host should all be consistent.

Additional info:

Comment 2 jiyan 2017-06-19 04:46:20 UTC
Created attachment 1289009 [details]
Logs for the needinfo

The attachment includes the following files:
log/RHV-server-engine.log
log/RHV-host-vdsm.log
log/RHV-host-qemu-vm1.log
log/RHV-host-libvirtd.log
log/RHV-guest-vm1-cpudmesg.log
log/RHV-guest-vm1-lscpu.log

Comment 3 Tomas Jelinek 2017-06-21 09:13:21 UTC
so, the qemu cmdline looks like:
-smp 100,maxcpus=160,sockets=8,cores=4,threads=5

(the sockets = 8 is there to allow hotplug)

The problem here is that the architecture is AMD, which does not support hyperthreading, so qemu changes "Thread(s) per core" to 1 and adjusts "Core(s) per socket" and "Socket(s)" to match the configured number of CPUs.

Turning this into an RFE to warn the user when they try to use hyperthreading on an unsupported architecture.
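The reshaping described in this comment can be sketched as follows. This is a hypothetical illustration of the behavior observed in the bug (the function name and the simple "fold threads into cores" rule are assumptions, not qemu's actual code):

```python
# Hypothetical sketch: when SMT/hyperthreading is unsupported, the thread
# count is folded into the core count so the total CPU count is preserved,
# which matches the guest lscpu output seen in this bug.
def normalize_topology(sockets, cores, threads, smt_supported):
    if smt_supported:
        return sockets, cores, threads
    return sockets, cores * threads, 1

# Configured 5x4x5 on a host without SMT -> guest sees 5 sockets,
# 20 cores per socket, 1 thread per core.
print(normalize_topology(5, 4, 5, smt_supported=False))  # (5, 20, 1)
```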

Comment 4 Martin Tessun 2018-07-06 16:17:59 UTC
Just did a test with 2 sockets, 4 cores, 5 threads and got the same topology displayed in the VM with lscpu.

While it is not sensible to have that topology when there is no hyperthreading, it is still set up correctly in the VM.

