Description of problem:
RHEV-M displays and uses the same values for hypervisor cores regardless of the cluster setting "Count Threads as Cores".

Version-Release number of selected component (if applicable):
RHEV-M 3.3

How reproducible:
Very

Steps to Reproduce:
1. Create a cluster of hypervisors with hyper-threaded CPUs
2. Change the cluster setting "Count Threads as Cores"
3. Observe the value shown for cores

Actual results:
The same values are shown for cores regardless of the cluster setting.

Expected results:
The values for cores are updated according to the cluster setting.

Additional info:
The same values are shown because the host view reflects the actual hardware configuration (HT is enabled in the BIOS). As far as I can tell, the checkbox in the Cluster settings should only affect whether you are allowed to run the VM, so it is probably a scheduling issue. It may help if you can attach the output of "virsh capabilities" from the host.
Created attachment 882289: output of virsh capabilities for each host
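For reference, the relevant part of a "virsh capabilities" output is the <topology> element in the host CPU section. The following is only an illustrative excerpt consistent with the values discussed below; the exact output is in the attachment:

  <host>
    <cpu>
      <arch>x86_64</arch>
      <model>Nehalem</model>
      <topology sockets='2' cores='4' threads='2'/>
    </cpu>
  </host>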
It seems to be correctly reporting 2 sockets, 4 cores per socket, and 2 threads per core, so "vdsClient getVdsCaps" should return cpuSockets 2, cpuCores 8, cpuThreads 16. Right? If that's the case, then scheduling should take the setting into account. Doron?
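To make the arithmetic explicit, here is a minimal standalone sketch (a hypothetical helper, not vdsm or engine code) of how the getVdsCaps totals relate to the per-socket topology:

  // Sketch only: how cpuCores/cpuThreads are derived from the topology.
  public class TopologyMath {
      public static void main(String[] args) {
          int sockets = 2;        // cpuSockets
          int coresPerSocket = 4; // from the virsh capabilities topology
          int threadsPerCore = 2; // HT enabled

          int totalCores = sockets * coresPerSocket;      // 8  -> cpuCores
          int totalThreads = totalCores * threadsPerCore; // 16 -> cpuThreads

          System.out.println("cpuSockets=" + sockets
                  + " cpuCores=" + totalCores
                  + " cpuThreads=" + totalThreads);
      }
  }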
Do you still have "report_host_threads_as_cores=true" set in vdsm.conf? We have logic in place to support older configurations, and in such a case we revert to SMT off. Also, can you please provide the vdsm output for getVdsCaps?
Thanks. So yes, it is correctly reported:

cpuCores = '8'
cpuFlags = 'fpu,vme,de,pse,tsc,msr,pae,mce,cx8,apic,sep,mtrr,pge,mca,cmov,pat,pse36,clflush,dts,acpi,mmx,fxsr,sse,sse2,ss,ht,tm,pbe,syscall,nx,rdtscp,lm,constant_tsc,arch_perfmon,pebs,bts,rep_good,xtopology,nonstop_tsc,aperfmperf,pni,dtes64,monitor,ds_cpl,vmx,est,tm2,ssse3,cx16,xtpr,pdcm,dca,sse4_1,sse4_2,popcnt,lahf_lm,ida,dts,tpr_shadow,vnmi,flexpriority,ept,vpid,model_Nehalem,model_Conroe,model_coreduo,model_core2duo,model_Penryn,model_n270'
cpuModel = 'Intel(R) Xeon(R) CPU X5570 @ 2.93GHz'
cpuSockets = '2'
cpuSpeed = '2933.395'
cpuThreads = '16'

This corresponds to 2 sockets, 4 cores/socket, HT enabled, as shown in the UI. I'd suspect the scheduling side of things...
Can we get ovirt-engine logs from the engine that refuses to start the VM even though "Count Threads as Cores" is activated?
I also noticed that it's reported against RHEV-M 3.3. Would it be possible to test it with 3.4? I'm not able to reproduce it in 3.4 or 3.5. Also, setting report_host_threads_as_cores = true in vdsm.conf might work as a workaround for this issue in 3.3: it forces vdsm to report the CPU core count as equal to the thread count, which should bypass the policy that prevents running the VM (see the sketch below).
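A minimal sketch of that workaround, assuming the option belongs in the [vars] section of /etc/vdsm/vdsm.conf (the exact section and path may differ per version); vdsmd would need a restart afterwards:

  [vars]
  # Make vdsm report the core count equal to the thread count
  report_host_threads_as_cores = true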
Considering that we have at least 2 open cases about this cosmetic issue, customers expect this value to reflect the checkbox. I like the idea of "4 (8)", where "(8)" is displayed only when the checkbox is on.
This bug is proposed to be cloned to 3.4.z, but it missed the 3.4.4 builds. Moving to 3.4.5 - please clone once ready.
I have a host with 8 logical CPUs, and it reports: 2 sockets, 2 cores per socket, and two threads per core. With "Count Threads as Cores" enabled I see "CPU Cores per Socket: 2 (8)" in the UI, which is not correct, because 8 is the total number of threads, not the number per socket. So just change in the code:

fieldValue = ConstantsManager.getInstance().getMessages()
        .threadsAsCoresPerSocket(coresPerSocket, vds.getCpuThreads());

to

fieldValue = ConstantsManager.getInstance().getMessages()
        .threadsAsCoresPerSocket(coresPerSocket, vds.getCpuThreads() / vds.getCpuSockets());

Checked on rhevm-3.5.0-0.23.beta.el6ev.noarch
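A quick worked example of the difference for the host above (a standalone sketch, not the actual frontend code):

  // Sketch only: what the UI label shows before and after the fix.
  public class CoresPerSocketDisplay {
      public static void main(String[] args) {
          int cpuSockets = 2;
          int coresPerSocket = 2;
          int cpuThreads = 8; // total across all sockets

          // Before: the total thread count is shown next to the per-socket cores.
          System.out.println("CPU Cores per Socket: " + coresPerSocket
                  + " (" + cpuThreads + ")");              // prints "2 (8)" - wrong

          // After: threads are divided by the socket count.
          System.out.println("CPU Cores per Socket: " + coresPerSocket
                  + " (" + cpuThreads / cpuSockets + ")"); // prints "2 (4)" - correct
      }
  }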
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://rhn.redhat.com/errata/RHSA-2015-0158.html
Verified on rhevm-3.5.1-0.2.el6ev.noarch
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://rhn.redhat.com/errata/RHSA-2015-0888.html