Bug 1610461 - High Host CPU load for Windows 10 Guests (Update 1803) when idle
Summary: High Host CPU load for Windows 10 Guests (Update 1803) when idle
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: qemu-kvm-rhev
Version: 7.7
Hardware: x86_64
OS: Linux
Priority: urgent
Severity: medium
Target Milestone: rc
Target Release: ---
Assignee: Vitaly Kuznetsov
QA Contact: liunana
URL:
Whiteboard:
Duplicates: 1623690 1628411 (view as bug list)
Depends On: 1631439
Blocks: 1624786 1638835 1644693 1651787 1690641
 
Reported: 2018-07-31 16:48 UTC by izyk
Modified: 2019-08-22 09:20 UTC (History)
CC List: 34 users

Fixed In Version: qemu-kvm-rhev-2.12.0-19.el7
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Clones: 1638835 1644693 1737702 (view as bug list)
Environment:
Last Closed: 2019-08-22 09:18:48 UTC
Target Upstream Version:


Attachments (Terms of Use)
Machine info (12.48 KB, text/plain)
2018-08-06 23:03 UTC, izyk
no flags Details
rhel-7.5 (229.84 KB, image/png)
2018-08-13 21:04 UTC, izyk
no flags Details
simple utility to check QPC performance (8.50 KB, application/x-ms-dos-executable)
2018-09-24 08:06 UTC, Vadim Rozenfeld
no flags Details
This is test result of xperf and iometer. (14.51 KB, application/x-gzip)
2018-10-16 11:05 UTC, FuXiangChun
no flags Details


Links
System ID Priority Status Summary Last Updated
Red Hat Product Errata RHSA-2019:2553 None None None 2019-08-22 09:20:10 UTC

Description izyk 2018-07-31 16:48:49 UTC
Description of problem:

High CPU load for Windows 10 (Update 1803) Guests when idle

Version-Release number of selected component (if applicable):

All 'qemu-kvm' and 'qemu-kvm-ev' (qemu-kvm-ev-2.10.0-21.el7_5.4.1)

How reproducible:

Install the latest Windows 10 on a KVM virtual machine with one CPU.

Steps to Reproduce:
1. Install the latest Windows 10 on a KVM virtual machine with one CPU.
2. Wait for idle time inside Windows (CPU load 0%).
3. Observe that on the Linux host the virtual machine's load is 20-30%.

Actual results:
The virtual machine's load on the Linux host is 20-30%

Expected results:
The virtual machine's load on the Linux host is 0-5%


Additional info:
The following configuration needs to be possible:
<hyperv>
    <synic state='on'/>
    <stimer state='on'/>
</hyperv>

It was disabled here:
https://bugzilla.redhat.com/show_bug.cgi?id=1336517

This is the original post:
https://forum.proxmox.com/threads/high-cpu-load-for-windows-10-guests-when-idle.44531/
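
A fuller sketch of the libvirt `<features>` syntax being requested (element names as in the libvirt domain XML documentation; whether the installed qemu-kvm actually accepts the resulting hv-* CPU properties is exactly what this bug is about):

```xml
<features>
  <acpi/>
  <apic/>
  <hyperv>
    <relaxed state='on'/>
    <vapic state='on'/>
    <spinlocks state='on' retries='8191'/>
    <vpindex state='on'/>
    <synic state='on'/>
    <stimer state='on'/>
  </hyperv>
</features>
```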

Comment 2 Vadim Rozenfeld 2018-08-06 22:01:34 UTC
Thank you for reporting this issue.

Can you please share the entire qemu command line?
What are the host platform and the host kernel version in your case?

Regards,
Vadim.

Comment 3 izyk 2018-08-06 23:03:09 UTC
Created attachment 1473791 [details]
Machine info

I have different machines with the same result; this is the testing machine. Please see the attachment.

Comment 4 izyk 2018-08-06 23:09:54 UTC
I see a lot of I/O to ports 0x70,0x71 (RTC).

perf kvm --host stat live --event=ioport

Analyze events for all VMs, all VCPUs:

      IO Port Access    Samples  Samples%     Time%    Min Time    Max Time         Avg time

           0x70:POUT       4335    47.66%    79.41%      2.59us     84.33us      5.93us ( +-   1.12% )
            0x71:PIN       4332    47.63%    18.00%      0.98us     10.12us      1.34us ( +-   0.75% )
           0x608:PIN        266     2.92%     1.09%      1.00us      2.02us      1.32us ( +-   1.01% )
         0xc070:POUT         41     0.45%     0.37%      1.79us     12.49us      2.92us ( +-   9.54% )
          0x1f0:POUT         24     0.26%     0.26%      2.79us      6.17us      3.51us ( +-   6.09% )
         0xc0b0:POUT         14     0.15%     0.26%      2.79us     18.21us      5.98us ( +-  21.62% )
           0x1f7:PIN         12     0.13%     0.07%      1.00us      9.25us      1.82us ( +-  37.11% )
            0x64:PIN         10     0.11%     0.04%      1.08us      1.36us      1.17us ( +-   2.08% )
           0x3f6:PIN          6     0.07%     0.02%      1.09us      1.20us      1.12us ( +-   1.48% )
           0x1f2:PIN          5     0.05%     0.02%      1.01us      1.08us      1.06us ( +-   1.42% )

Comment 5 izyk 2018-08-06 23:12:20 UTC
And inside windows10 (1803) there are more than 2000 interrupts per second.
Inside windows10 (1709) and other Windows versions there are only 140-200 interrupts per second in the idle state.

Comment 6 Vadim Rozenfeld 2018-08-07 03:18:33 UTC
Can QE please reproduce and confirm this issue on rhel 7.5 and rhel 7.6?

Thanks,
Vadim.

Comment 7 juzhang 2018-08-07 03:29:54 UTC
Hi Zhiyi,

Could you have a try?

Best Regards,
Junyi

Comment 8 Guo, Zhiyi 2018-08-13 03:21:05 UTC
Hmm, I cannot reproduce this issue using rhel7.5 and rhel7.6.

libvirt xml used from https://bugzilla.redhat.com/show_bug.cgi?id=1610461#c3:
    <controller type='pci' index='0' model='pci-root'/>
    <controller type='ide' index='0'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
    </controller>
    <controller type='virtio-serial' index='0'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    </controller>
    <controller type='usb' index='0' model='nec-xhci'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
    </controller>
    <interface type='bridge'>
      <mac address='52:54:00:00:a0:a3'/>
      <source bridge='br0'/>
      <model type='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </interface>
    <channel type='unix'>
      <target type='virtio' name='org.qemu.guest_agent.0'/>
      <address type='virtio-serial' controller='0' bus='0' port='1'/>
    </channel>
    <channel type='spicevmc'>
      <target type='virtio' name='com.redhat.spice.0'/>
      <address type='virtio-serial' controller='0' bus='0' port='2'/>
    </channel>
    <input type='mouse' bus='ps2'/>
    <input type='keyboard' bus='ps2'/>
    <input type='tablet' bus='usb'>
      <address type='usb' bus='0' port='1'/>
    </input>
    <graphics type='vnc' port='5900' autoport='no' listen='0.0.0.0'>
      <listen type='address' address='0.0.0.0'/>
    </graphics>
    <video>
      <model type='qxl' ram='65536' vram='65536' vgamem='16384' heads='1' primary='yes'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
    </video>
    <memballoon model='virtio'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
    </memballoon>
  </devices>
</domain>

My host info:
# lscpu
Architecture:          x86_64
CPU op-mode(s):        32-bit, 64-bit
Byte Order:            Little Endian
CPU(s):                8
On-line CPU(s) list:   0-7
Thread(s) per core:    2
Core(s) per socket:    4
Socket(s):             1
NUMA node(s):          1
Vendor ID:             GenuineIntel
CPU family:            6
Model:                 158
Model name:            Intel(R) Xeon(R) CPU E3-1275 v6 @ 3.80GHz
Stepping:              9
CPU MHz:               799.938
CPU max MHz:           4200.0000
CPU min MHz:           800.0000
BogoMIPS:              7584.00
Virtualization:        VT-x
L1d cache:             32K
L1i cache:             32K
L2 cache:              256K
L3 cache:              8192K
NUMA node0 CPU(s):     0-7
Flags:                 fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf eagerfpu pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch epb intel_pt tpr_shadow vnmi flexpriority ept vpid fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm mpx rdseed adx smap clflushopt xsaveopt xsavec xgetbv1 dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp

Comment 9 Guo, Zhiyi 2018-08-13 10:29:43 UTC
Windows 10 version used:

win10 1709 x64: 
en_windows_10_multi-edition_version_1709_updated_sept_2017_x64_dvd_100090817.iso
md5 5e8bdef20c4b468f868f1f579197f7cf

win10 1803 x64:
en_windows_10_business_editions_version_1803_updated_march_2018_x64_dvd_12063333.iso
md5 28681742fe850aa4bfc7075811c5244b61d462cf

Windows guest configuration:
1)clean install
2)windows update disabled after clean install
3)local account login
4)set never sleep in power option

Test against rhel7.5 environment, packages:
qemu-kvm-rhev-2.10.0-21.el7_5.5.x86_64
3.10.0-862.10.1.el7.x86_64

Testing against Windows 10 1803, after waiting for the guest to enter the idle state (5 minutes), I see the cpu usage jump between 0 - 5%; the same behavior can be observed with win10 1709 as well.

Test against rhel7.6 environment, packages:
qemu-kvm-rhev-2.12.0-10.el7.x86_64
3.10.0-931.el7.x86_64

Behavior on rhel7.6 is similar to rhel7.5

Comment 10 Guo, Zhiyi 2018-08-13 10:32:25 UTC
The steps used for checking cpu usage:
1)Boot a configured windows 10 guest
2)Wait for guest entering idle state(wait 5 minutes)
3)Check cpu usage from task manager, record the cpu usage for 5 minutes

Comment 12 izyk 2018-08-13 21:04:04 UTC
Created attachment 1475672 [details]
rhel-7.5

I have installed rhel7.5 with the same issue.
But I have only:
qemu-kvm-rhev-2.10.0-21.el7_5.4.x86_64
3.10.0-862.9.1.el7.x86_64

Where can I get the following for testing:
qemu-kvm-rhev-2.10.0-21.el7_5.5.x86_64
3.10.0-862.10.1.el7.x86_64 ?

Comment 14 Nerijus Baliūnas 2018-08-14 07:49:39 UTC
(In reply to Guo, Zhiyi from comment #10)
> The steps used for checking cpu usage:
> 1)Boot a configured windows 10 guest
> 2)Wait for guest entering idle state(wait 5 minutes)
> 3)Check cpu usage from task manager, record the cpu usage for 5 minutes

You have to check host cpu usage, not guest.

Comment 15 Vadim Rozenfeld 2018-08-16 07:58:47 UTC
Just a quick update: I can reproduce this issue (high msr and ioport access rate) on my development system (F28, kernel 4.17.0-rc2, qemu 2.12.91v3.0.0.0-rc1-10).
The interesting thing is that the HvFlags are a bit different on 1709 and 1803.

Comment 17 Nerijus Baliūnas 2018-08-16 08:44:47 UTC
Please rename this bug report to "High host CPU load for Windows 10 Guests (Update 1803) when idle" (add host).

Comment 18 Vitaly Kuznetsov 2018-08-16 08:54:31 UTC
I would also suggest to re-test with "hv_time,hv_relaxed" CPU flags added.

Comment 19 izyk 2018-08-16 10:28:15 UTC
(In reply to Vitaly Kuznetsov from comment #18)
> I would also suggest to re-test with "hv_time,hv_relaxed" CPU flags added.

The flags "hv_time,hv_relaxed" were "on" when I was testing.
Then I added the flags "hv_vpindex,hv_runtime,hv_synic,hv_stimer,hv_reset" and behavior returned to the normal state. I did it with Fedora28, because RHEL7 doesn't support these flags.

Comment 20 Vitaly Kuznetsov 2018-08-16 10:43:25 UTC
(In reply to izyk from comment #19)
> (In reply to Vitaly Kuznetsov from comment #18)
> > I would also suggest to re-test with "hv_time,hv_relaxed" CPU flags added.
> 
> This flags "hv_time,hv_relaxed" was "on" when i had testing.
> And after i added the flags
> "hv_vpindex,hv_runtime,hv_synic,hv_stimer,hv_reset ". Behavior return in
> normal state. I did it with Fedora28, because RHEL7 haven't support of this
> flags.

Could you please help us identify what's really needed? In particular, what is the minimal subset of enlightenments which solve the issue? 

In particular, I'm pretty sure 'hv_reset' and 'hv_runtime' are not needed. 'hv_synic' may or may not be related, hv_stimer looks like a good candidate.

Comment 21 izyk 2018-08-16 11:37:43 UTC
(In reply to Vitaly Kuznetsov from comment #20)
> 
> Could you please help us identify what's really needed? In particular, what
> is the minimal subset of enlightenments which solve the issue? 
> 
> In particular, I'm pretty sure 'hv_reset' and 'hv_runtime' are not needed.
> 'hv_synic' may or may not be related, hv_stimer looks like a good candidate.

Only the flags 'hv_synic,hv_stimer' together resolve the problem.
Without either of them, Windows generates more than 2000 interrupts per second.
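
For reference, on a QEMU build that exposes these properties, the minimal pair plus its usual companions can be written on the command line roughly like this (a sketch only; on upstream QEMU, hv_stimer additionally depends on hv_time, and later versions also require hv_vpindex for hv_synic):

```shell
/usr/libexec/qemu-kvm -enable-kvm -M pc \
    -cpu host,hv_relaxed,hv_time,hv_vpindex,hv_synic,hv_stimer \
    ... ...
```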

Comment 22 Vitaly Kuznetsov 2018-08-16 11:52:46 UTC
(In reply to izyk from comment #21)
> (In reply to Vitaly Kuznetsov from comment #20)
> > 
> > Could you please help us identify what's really needed? In particular, what
> > is the minimal subset of enlightenments which solve the issue? 
> > 
> > In particular, I'm pretty sure 'hv_reset' and 'hv_runtime' are not needed.
> > 'hv_synic' may or may not be related, hv_stimer looks like a good candidate.
> 
> Only together the flags 'hv_synic,hv_stimer' resolve the problem.
> Without one of both inside Windows is generated more than 2000 interrupts
> per second.

Thank you; it seems Windows needs a periodic timer, and without a synthetic one it is forced to program a one-shot timer; the frequency has changed and now it's too expensive.

Could you also try what was suggested in Comment#16, remove "-no-hpet" from your qemu command line (or, if you're using libvirt change hpet settings to "<timer name='hpet' present='yes'/>")? Does this also resolve the issue?
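
In libvirt terms, the HPET variant above is expressed in the `<clock>` element; a minimal sketch (the rtc and hypervclock lines are common companions in Windows guest configs, not requirements):

```xml
<clock offset='localtime'>
  <timer name='rtc' tickpolicy='catchup'/>
  <timer name='hpet' present='yes'/>
  <timer name='hypervclock' present='yes'/>
</clock>
```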

Comment 23 izyk 2018-08-16 12:44:07 UTC
(In reply to Vitaly Kuznetsov from comment #22)
> Thank you, it seems Windows need a periodic timer and without a synthetic
> one they're forced to program one-shot timer, the frequency has changed and
> now it's too expensive.
> 
> Could you also try what was suggested in Comment#16, remove "-no-hpet" from
> your qemu command line (or, if you're using libvirt change hpet settings to
> "<timer name='hpet' present='yes'/>")? Does this also resolve the issue?

Yes, it resolves the issue, but only on Fedora28:
Name         : qemu-system-x86-core
Epoch        : 2
Version      : 2.11.2
Release      : 1.fc28
Arch         : x86_64
Size         : 23 M
Source       : qemu-2.11.2-1.fc28.src.rpm


On RHEL7 and CentOS7 "hpet" doesn't resolve it:
Name        : qemu-kvm-ev
Arch        : x86_64
Epoch       : 10
Version     : 2.10.0
Release     : 21.el7_5.4.1
Size        : 11 M
Repo        : installed
From repo   : centos-qemu-ev

Comment 24 Vadim Rozenfeld 2018-08-16 14:02:28 UTC
(In reply to izyk from comment #23)
> (In reply to Vitaly Kuznetsov from comment #22)
> > Thank you, it seems Windows need a periodic timer and without a synthetic
> > one they're forced to program one-shot timer, the frequency has changed and
> > now it's too expensive.
> > 
> > Could you also try what was suggested in Comment#16, remove "-no-hpet" from
> > your qemu command line (or, if you're using libvirt change hpet settings to
> > "<timer name='hpet' present='yes'/>")? Does this also resolve the issue?
> 
> Yes, it's resolve but only on Fedora28:
> Name         : qemu-system-x86-core
> Epoch        : 2
> Version      : 2.11.2
> Release      : 1.fc28
> Arch         : x86_64
> Size         : 23 M
> Source       : qemu-2.11.2-1.fc28.src.rpm
> 
> 
> On Rhel7 and Centos7 "hpet" doesn't resolve:
> Name        : qemu-kvm-ev
> Arch        : x86_64
> Epoch       : 10
> Version     : 2.10.0
> Release     : 21.el7_5.4.1
> Size        : 11 M
> Repo        : installed
> From repo   : centos-qemu-ev

Couple of notes:
RHEL (and CentOS) used to ship with HPET disabled at the SeaBIOS (ACPI) level (you can check whether it is present as a System Device in Windows Device Manager).
HPET used to be a real performance killer because, when enabled, it will be chosen by the HAL as the preferred system time stamp source.
Starting from Win8 the hv_relaxed flag is not required as long as the (v)CPU hypervisor
flag is turned on.

Comment 25 izyk 2018-08-16 15:05:28 UTC
(In reply to Vadim Rozenfeld from comment #24)
> 
> Couple of notes:
> RHEL (and CentOS) used to be shipped HPET disabled at SeaBios (ACPI) level
> (You can check if it present as a System Device in Windows Device Manager)
> HPET used to be a real performance killer because when enabled it will be
> chosen by HAL as a preferable system time stamp source.
> Starting form Win8 hv_ralaxed flag is not required as long as (v)CPU
> hypervisor
> flag is turned on.

Yes, RHEL guests don't have the "hpet" timer with or without "-no-hpet",
though Fedora guests honor this flag.

Comment 28 Lyas Spiehler 2018-09-19 02:35:37 UTC
I can reproduce this issue in "CentOS Linux release 7.5.1804 (Core)" running Windows 10 1803 as a guest. I've tried configuring hpet like so:
<timer name='hpet' present='yes'/>
and 
<timer name='hpet' present='no'/>
with no change in behavior.

If I add
<synic state='on'/>
or
<stimer state='on'/>
hyper-v enlightenments, I get the following errors respectively:

error: Failed to start domain win10
error: internal error: process exited while connecting to monitor: 2018-09-19T02:30:00.690756Z qemu-kvm: can't apply global Westmere-x86_64-cpu.hv-synic=on: Property '.hv-synic' not found

error: Failed to start domain win10
error: internal error: process exited while connecting to monitor: 2018-09-19T02:32:08.104681Z qemu-kvm: can't apply global Westmere-x86_64-cpu.hv-stimer=on: Property '.hv-stimer' not found

/usr/libexec/qemu-kvm -version
QEMU emulator version 2.10.0 (qemu-kvm-ev-2.10.0-21.el7_5.4.1)
Copyright (c) 2003-2017 Fabrice Bellard and the QEMU Project developers

Is there any news on this issue?

Comment 29 Vadim Rozenfeld 2018-09-19 13:20:34 UTC
(In reply to Lyas Spiehler from comment #28)
> I can reproduce this issue in "CentOS Linux release 7.5.1804 (Core)" running
> Windows 10 1803 as a guest. I've tried configuring hpet like so:
> <timer name='hpet' present='yes'/>
> and 
> <timer name='hpet' present='no'/>
> with no change in behavior.
> 
> If I add
> <synic state='on'/>
> or
> <stimer state='on'/>
> hyper-v enlightenments, I get the following errors respectively:
> 
> error: Failed to start domain win10
> error: internal error: process exited while connecting to monitor:
> 2018-09-19T02:30:00.690756Z qemu-kvm: can't apply global
> Westmere-x86_64-cpu.hv-synic=on: Property '.hv-synic' not found
> 
> error: Failed to start domain win10
> error: internal error: process exited while connecting to monitor:
> 2018-09-19T02:32:08.104681Z qemu-kvm: can't apply global
> Westmere-x86_64-cpu.hv-stimer=on: Property '.hv-stimer' not found
> 
> /usr/libexec/qemu-kvm -version                                              
> QEMU emulator version 2.10.0(qemu-kvm-ev-2.10.0-21.el7_5.4.1)
> Copyright (c) 2003-2017 Fabrice Bellard and the QEMU Project developers
> 
> Is there any news on this issue?

Unfortunately 7.5 (as well as 7.6) doesn't provide support for the ".hv-stimer" feature. It is planned to be backported in 8.0: https://bugzilla.redhat.com/show_bug.cgi?id=1620588#c0

Another sad thing is that MS no longer provides dynamic frequency divider reprogramming for the RTC periodic interrupt (at least on 1803 and RS5), which makes the RTC kind of useless for Windows VMs due to the very high default (hardcoded) RTC periodic timer interrupt rate.

Comment 30 izyk 2018-09-19 14:08:50 UTC
(In reply to Vadim Rozenfeld from comment #29)
> Unfortunately 7.5 (as well as 7.6) doesn't provide support for ".hv-stimer"
> feature. It is planned to be backported in 8.0
> https://bugzilla.redhat.com/show_bug.cgi?id=1620588#c0
> 
> Another sad thing is that MS doesn't provide dynamic frequency divider
> reprogramming for RTC periodic interrupt anymore (at least on 1803 and RS5),
> which makes RTC a kind of useless for Windows VMs due to very high default
> (hardcoded) RTC periodic timer interrupts rate.

Will hpet timer support be returned?

Comment 31 Vadim Rozenfeld 2018-09-19 22:17:45 UTC
(In reply to izyk from comment #30)
> (In reply to Vadim Rozenfeld from comment #29)
> > Unfortunately 7.5 (as well as 7.6) doesn't provide support for ".hv-stimer"
> > feature. It is planned to be backported in 8.0
> > https://bugzilla.redhat.com/show_bug.cgi?id=1620588#c0
> > 
> > Another sad thing is that MS doesn't provide dynamic frequency divider
> > reprogramming for RTC periodic interrupt anymore (at least on 1803 and RS5),
> > which makes RTC a kind of useless for Windows VMs due to very high default
> > (hardcoded) RTC periodic timer interrupts rate.
> 
> Is the hpet timer support will be returned?

I don't know. But even if it comes back, HPET is just another performance killer
for Windows VMs. When it is active, Windows prefers it over any other system time
stamp source, including "hv_time".

Comment 32 izyk 2018-09-21 10:52:58 UTC
(In reply to Vadim Rozenfeld from comment #29)
> Unfortunately 7.5 (as well as 7.6) doesn't provide support for ".hv-stimer"
> feature. It is planned to be backported in 8.0
> https://bugzilla.redhat.com/show_bug.cgi?id=1620588#c0
> 
> Another sad thing is that MS doesn't provide dynamic frequency divider
> reprogramming for RTC periodic interrupt anymore (at least on 1803 and RS5),
> which makes RTC a kind of useless for Windows VMs due to very high default
> (hardcoded) RTC periodic timer interrupts rate.

To bring back ".hv-stimer" support I need only a one-line patch: change 0 -> 1 here:
#if 0 /* Disabled for Red Hat Enterprise Linux */
    DEFINE_PROP_BOOL("hv-reset", X86CPU, hyperv_reset, false),
    DEFINE_PROP_BOOL("hv-vpindex", X86CPU, hyperv_vpindex, false),
    DEFINE_PROP_BOOL("hv-runtime", X86CPU, hyperv_runtime, false),
    DEFINE_PROP_BOOL("hv-synic", X86CPU, hyperv_synic, false),
    DEFINE_PROP_BOOL("hv-stimer", X86CPU, hyperv_stimer, false),
#endif
Is it right?
https://bugzilla.redhat.com/show_bug.cgi?id=1336517

Comment 33 izyk 2018-09-21 11:02:25 UTC
(In reply to Vadim Rozenfeld from comment #31)
> 
> I don't know. But even if it is back, HPET is just another performance
> killer 
> for Windows VMs. When it is active, Windows prefers it over any other system
> time
> stamp sources, including "hv_time".

I use hpet with Fedora28 and the latest windows10 as a KVM VM without any problem.
It resolves the current issue.
What do you mean by "performance killer"?

Comment 34 Vitaly Kuznetsov 2018-09-21 12:53:04 UTC
(In reply to izyk from comment #32)
> (In reply to Vadim Rozenfeld from comment #29)
> > Unfortunately 7.5 (as well as 7.6) doesn't provide support for ".hv-stimer"
> > feature. It is planned to be backported in 8.0
> > https://bugzilla.redhat.com/show_bug.cgi?id=1620588#c0
> > 
> > Another sad thing is that MS doesn't provide dynamic frequency divider
> > reprogramming for RTC periodic interrupt anymore (at least on 1803 and RS5),
> > which makes RTC a kind of useless for Windows VMs due to very high default
> > (hardcoded) RTC periodic timer interrupts rate.
> 
> For return ".hv-stimer" support I need only one string patch - change 0 -> 1
> here:
> #if 0 /* Disabled for Red Hat Enterprise Linux */
>     DEFINE_PROP_BOOL("hv-reset", X86CPU, hyperv_reset, false),
>     DEFINE_PROP_BOOL("hv-vpindex", X86CPU, hyperv_vpindex, false),
>     DEFINE_PROP_BOOL("hv-runtime", X86CPU, hyperv_runtime, false),
>     DEFINE_PROP_BOOL("hv-synic", X86CPU, hyperv_synic, false),
>     DEFINE_PROP_BOOL("hv-stimer", X86CPU, hyperv_stimer, false),
> #endif
> Is it right?

This patch won't help you if you're running RHEL7 kernel: neither synic nor stimer is supported by it (currently).

Comment 35 Vadim Rozenfeld 2018-09-24 08:03:30 UTC
(In reply to izyk from comment #33)
> (In reply to Vadim Rozenfeld from comment #31)
> > 
> > I don't know. But even if it is back, HPET is just another performance
> > killer 
> > for Windows VMs. When it is active, Windows prefers it over any other system
> > time
> > stamp sources, including "hv_time".
> 
> I use the hpet with Fedora28 and last windows10 as KVM VM without any
> problem. 
> It resolves the current issue.
> What do you mean - "performance killer"?

HPET is an emulated device, which means it is slow compared to paravirtualized devices. When HPET is enabled it can be used as the system time stamp source when calling the kernel-mode KeQueryPerformanceCounter or user-mode QueryPerformanceCounter functions. Those functions are the main vehicle
for measuring and profiling timing all around the hal, kernel and user space.
Apart from HPET, Windows can use PM-Timer or hv_time as the system time stamp source. On my Win10 1803 system a single call of the QueryPerformanceCounter function takes:
0.005112 ms for HPET,
0.005939 ms for PM-Timer,
0.000055 ms for hv_time.

Different versions of Windows have different algorithms for choosing the time stamp source. Some of them can pick HPET even if hv_time is enabled; some will rely on the
"useplatformclock" ( bcdedit /set useplatformclock true ) parameter and prefer PM-Timer over hv_time.

I would suggest running qpcchk.exe, which I'm going to post next, to see and compare the results depending on your current configuration.

If QueryPerformanceFrequency reports:
3579545 Hz, then it is PM-Timer;
100000000 Hz, then it is HPET;
10000000 Hz, then it is hv_time.

Vadim.
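
The frequency-to-source mapping above can be wrapped in a tiny helper when collecting qpcchk.exe results from several guests (a convenience sketch; the constants are exactly the three values listed, nothing more):

```shell
# Map a QueryPerformanceFrequency value (Hz), as reported by qpcchk.exe,
# to the time stamp source it implies.
qpc_source() {
    case "$1" in
        3579545)   echo "PM-Timer" ;;
        100000000) echo "HPET" ;;
        10000000)  echo "hv_time" ;;
        *)         echo "unknown" ;;
    esac
}

qpc_source 10000000   # prints: hv_time
```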

Comment 36 Vadim Rozenfeld 2018-09-24 08:06:35 UTC
Created attachment 1486331 [details]
simple utility to check QPC performance

Run this utility to check the system time stamp source on your system, as well as
measure the QueryPerformanceCounter (QPC) call cost.

Comment 37 izyk 2018-09-24 09:55:14 UTC
(In reply to Vadim Rozenfeld from comment #35)
> Different versions of Windows have differnt algorithms choosing the time
> stamp source. Some of them can pick up HPET even if hv_time enabled, some
> will rely on
> "useplatformclock" ( bcdedit /set useplatformclock true ) parameter and
> prefer PM-Timer over hv_time.
> 
> I would suggest running qpcchk.exe which I'm going to post next to see and
> compare the results depending on your current configuration.
> 
> If QueryPerformanceFrequency reports 
> 3579545 Hz, then it is PM-Timer
> 100000000 Hz, then it is HPET
> 10000000 Hz - it's hv_time.
> 
> Vadim.


Thank you for the detailed answer. But, if I understand correctly,
in the case of hv_time and PM-Timer, windows10(1803) uses the RTC with a constant rate of 2000 interrupts per second.

In the case of the HPET timer (if it is available), windows10(1803) emulates (or something similar) the RTC through the HPET and can generate a dynamic RTC interrupt rate.
And that's more effective than 2000 RTC interrupts per second, IMHO.

Comment 39 Vadim Rozenfeld 2018-09-26 04:13:16 UTC
(In reply to izyk from comment #37)
> (In reply to Vadim Rozenfeld from comment #35)
> > Different versions of Windows have differnt algorithms choosing the time
> > stamp source. Some of them can pick up HPET even if hv_time enabled, some
> > will rely on
> > "useplatformclock" ( bcdedit /set useplatformclock true ) parameter and
> > prefer PM-Timer over hv_time.
> > 
> > I would suggest running qpcchk.exe which I'm going to post next to see and
> > compare the results depending on your current configuration.
> > 
> > If QueryPerformanceFrequency reports 
> > 3579545 Hz, then it is PM-Timer
> > 100000000 Hz, then it is HPET
> > 10000000 Hz - it's hv_time.
> > 
> > Vadim.
> 
> 
> Thank you for the detailed answer. But, if I understand correctly, 
> in the case of hv_time and PM_Timer, windows10(1803) uses RTC with a
> constant rate of 2000 interrupts per second.
> 
> In the case of the HPET timer(if it available), windows10(1803), emulate(or
> something similar) the RTC through the HPET and can generate a dynamic RTC
> interrupt rate.
> And it's more effectively than 2000 RTC's interrupts per second. IMHO.

If you are running upstream kvm/qemu, then hv_stimer + hv_time seems to be the best combination for 1803 and RS5. But if you are using 7.5 or 7.6, then neither
hv_stimer nor HPET is an option.

Your conclusion about RTC is absolutely right.

1709
Has a dedicated HalpRtcSetDivisor routine, reprogramming the RTC timer
very frequently in a range from 25h (2048 ticks per second) to
2ah (64 ticks per second)

hal!HalpRtcSetDivisor+0x2a:
816513b1 0c20            or      al,20h
816513b3 6a01            push    1
816513b5 88450f          mov     byte ptr [ebp+0Fh],al
816513b8 8d450f          lea     eax,[ebp+0Fh]
816513bb 50              push    eax           <-- ticks rate 
816513bc 6a0a            push    0Ah
816513be 6a00            push    0
816513c0 ff15f4bb6581    call    dword ptr [hal!HalpTimerRtcApi+0x4
(8165bbf4)]
816513c6 5e              pop     esi
816513c7 5d              pop     ebp
816513c8 c20800          ret     8

0: kd> dd 8165bbf4
8165bbf4  81634008 00000000 00000000 00000000
8165bc04  00000000 00000000 00000000 00000000

0: kd> ln 81634008 
(81634008)   hal!HalpSetCmosData   |  (81634070)   hal!HalpReadCmosTime

hal!HalpSetCmosData+0x2e:
81634036 8ac2            mov     al,dl
81634038 e670            out     70h,al
8163403a 8a07            mov     al,byte ptr [edi]
8163403c e671            out     71h,al
8163403e 42              inc     edx
8163403f 47              inc     edi
81634040 49              dec     ecx


1803 
Has the same HalpRtcSetDivisor routine, but calls it only once
during RTC initialization

_HalpRtcInitialize@4:
  800475C1: E8 05 00 00 00     call        _HalpRtcSetDivisor@8
  800475C6: 33 C0              xor         eax,eax
  800475C8: C2 04 00           ret         4


The divider value is hardcoded in this case

_HalpRtcSetDivisor@8:
  800475CB: 8B FF              mov         edi,edi
  800475CD: 55                 push        ebp
  800475CE: 8B EC              mov         ebp,esp
  800475D0: 51                 push        ecx
  800475D1: 6A 01              push        1
  800475D3: 8D 45 FF           lea         eax,[ebp-1]
  800475D6: C6 45 FF 25        mov         byte ptr [ebp-1],25h <-- 2048 ticks 
  800475DA: 50                 push        eax
  800475DB: 6A 0A              push        0Ah
  800475DD: 6A 00              push        0
  800475DF: FF 15 34 31 05 80  call        dword ptr ds:[80053134h]
  800475E5: 8B E5              mov         esp,ebp
  800475E7: 5D                 pop         ebp
  800475E8: C3                 ret

RS5
Doesn't even have a dedicated function to reprogram the RTC timer frequency
dynamically. It programs the divider once at RTC initialization with 2048
ticks per second and never readjusts this value

HalpRtcInitialize:
  00000001C0041F80: 48 83 EC 38        sub         rsp,38h
  00000001C0041F84: 48 8B 05 75 61 02  mov         rax,qword ptr
[1C0068100h]
                    00
  00000001C0041F8B: 4C 8D 44 24 48     lea         r8,[rsp+48h]
  00000001C0041F90: 41 B9 01 00 00 00  mov         r9d,1
  00000001C0041F96: C6 44 24 48 25     mov         byte ptr[rsp+48h],25h <-- 2048 ticks

  00000001C0041F9B: 33 C9              xor         ecx,ecx
  00000001C0041F9D: 41 8D 51 09        lea         edx,[r9+9]
  00000001C0041FA1: FF 15 11 E9 02 00  call        qword ptr
[__guard_dispatch_icall_fptr]
  00000001C0041FA7: 33 C0              xor         eax,eax
  00000001C0041FA9: 48 83 C4 38        add         rsp,38h
  00000001C0041FAD: C3                 ret


Comment 42 Paul Gozart 2018-09-28 20:09:36 UTC
*** Bug 1628411 has been marked as a duplicate of this bug. ***

Comment 61 izyk 2018-10-12 14:06:15 UTC
(In reply to Vitaly Kuznetsov 2018-10-12 02:40:56 EDT 
> Keywords: FutureFeature)

Will it be in the next version, rhel 7.6?

Comment 64 Amnon Ilan 2018-10-15 09:50:18 UTC
*** Bug 1623690 has been marked as a duplicate of this bug. ***

Comment 68 FuXiangChun 2018-10-16 11:05:58 UTC
Created attachment 1494337 [details]
This is test result of xperf and iometer.

Comment 78 Michael 2018-12-03 06:39:10 UTC
Hi 

Using the fixed qemu-kvm-rhev version, the issue is gone. The reproduction steps are as follows:

[1] Boot a windows guest without any flags;
/usr/libexec/qemu-kvm -enable-kvm -M pc \
-cpu Opteron_G4
... ...


[2] use top command to monitor host cpu usage and save to a file;
e.g #top -p 6358 -n 1800 -d 2 -b >unfixed-top-pc-result (about 1 hour)

[3] calculate average value with this file;
# cat unfixed-top-pc-result |grep qemu-kvm|awk -F ' ' '{print $9;}'|awk '{sum+=$1} END {print "Average = ", sum/NR}'

Average =  35.6219  =====> Bug reproduced 


[4] Boot the guest with flags "hv_stimer","hv_synic","hv_time","hv_relaxed";
/usr/libexec/qemu-kvm -enable-kvm -M pc \
-cpu Opteron_G4,+kvm_pv_unhalt,hv_stimer,hv_synic,hv_time,hv_relaxed
... ...

[5] use the top command to monitor host CPU usage and save it to a file;
e.g. # top -p 6358 -n 1800 -d 2 -b > fixed-top-pc-result (about 1 hour)

[6] calculate the average value from this file;
# grep qemu-kvm fixed-top-pc-result | awk '{sum+=$9} END {print "Average = ", sum/NR}'

Average =  4.27028  ======> Bug fixed. 


Thus, marking this bug as verified.

Version-Release numbers I used:
[1] kernel 3.10.0-970.el7.x86_64
[2] qemu-kvm-rhev-2.12.0-19.el7.x86_64
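The averaging in steps [2]–[6] above can be wrapped in a small helper; this is a sketch, and the pid 6358 and file names are just the examples from the steps above:

```shell
# avg_cpu: read `top -b` batch output on stdin and print the average of
# the %CPU column (field 9) across all qemu-kvm process lines.
avg_cpu() {
    grep qemu-kvm | awk '{sum += $9} END {print "Average =", sum/NR}'
}

# Typical use, matching steps [2]-[3] (pid and sample count are examples;
# 1800 samples at 2 s intervals is roughly 1 hour):
#   top -p 6358 -n 1800 -d 2 -b > top-pc-result
#   avg_cpu < top-pc-result
```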

Comment 79 Gerrit Slomma 2019-03-11 13:10:45 UTC
I can see this in Scientific Linux 7.6 when running Windows 10 too; it doesn't matter whether it is version 1607 or 1809.
The load on my i5-2400 (4 cores) is around 10% usage on the host when the Task Manager of Windows 10 indicates idle.
The load on my X5675 (6 cores) is around 30% usage on the host for two VMs when the Task Manager of Windows 10 indicates idle.
perf kvm --host top -p "$(pidof qemu-kvm)" shows around 45% in vmx_vcpu_run for the i5-2400.
The same applies to a second VM run on that host.
Intel RAPL shows 12 to 15 W for the CPU package of the i5-2400; it would be around 5 W if idle. Load average is around 0.5.

features: acpi, apic, hyperv (relaxed state on, vapic state on, spinlocks state on)
hpet no
hypervclock yes

kernel 3.10.0-957.5.1.el7
qemu-kvm 1.5.3-160.el7_6.1

Comment 80 Vitaly Kuznetsov 2019-03-11 13:38:45 UTC
(In reply to Gerrit Slomma from comment #79)
> I could see this in Scientific Linux 7.6 when running Windows 10 too.
> Doesn't matter if it is Version 1607 or 1809.
> The load on my i5-2400 (4 cores) is around 10% usage on the host when the
> taskmanager of Windows 10 indicates idle.
> The load on my x5675 (6 cores) is around 30% usage on the host for two VM
> when the taskmanager of Windows 10 indicates idle.
> perf kvm --host top -p "$(pidof qemu-kvm)" shows around 45% in vmx_vcpu_run
> for the i5-2400.
> The same applies to a second VM run on that host.
> Intel rapl shows from 12 to 15 W power for the cpu-package of the i5-2400,
> would be around 5W if idle, load average is around 0.5.
> 
> features: acpi, apic, hyperv (relaxed state on, vapic state on, spinlocks state on)
> hpet no
> hypervclock yes

Have you enabled synic/stimer enlightenments?

Comment 81 Gerrit Slomma 2019-03-11 13:55:24 UTC
No, this is from a stock setup.
I could give it a try, but I was under the impression from reading this ticket that those parameters are not available for Red Hat Enterprise Linux 7?

Comment 82 Vitaly Kuznetsov 2019-03-11 14:09:43 UTC
(In reply to Gerrit Slomma from comment #81)
> No, this is from a stock setup.
> I could give it a try, but was of the impression from reading in this ticket
> those parameters are not available for Red Hat Enterprise Linux 7?

These enlightenments were added to mitigate the issue, but they are not enabled by default; you still need to enable them manually.
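For libvirt-managed guests, the domain XML equivalent would look roughly like the following sketch (element names per libvirt's hyperv feature support; note that stimer also needs vpindex and synic enabled, and the retries value here is only an example):

```xml
<features>
  <acpi/>
  <apic/>
  <hyperv>
    <relaxed state='on'/>
    <vapic state='on'/>
    <spinlocks state='on' retries='8191'/>
    <vpindex state='on'/>
    <synic state='on'/>
    <stimer state='on'/>
  </hyperv>
</features>
```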

Comment 83 Gerrit Slomma 2019-03-11 14:32:23 UTC
I checked that and can confirm that I get the same error as in comment #28 by Lyas Spiehler (about properties not being found) if I add
    <synic state='on'/>
    <stimer state='on'/>
to the hyperv section.

Comment 84 Nerijus Baliūnas 2019-03-11 14:48:55 UTC
You are not using qemu-kvm-rhev or qemu-kvm-ev (according to comment #79 you use an ancient qemu-kvm version).

Comment 85 Gerrit Slomma 2019-03-11 16:37:07 UTC
No, I use the current qemu-kvm for Red Hat Enterprise Linux 7, released 2019-01-14 as per the Red Hat Network package browser.
I only see one 2.12.0-41.el8+2104+3e32e6f8 for Red Hat Enterprise Linux 7.
I can't see any qemu-kvm-rhev or qemu-kvm-ev.
Since the problem obviously also exists in the qemu-kvm component, should I open a new ticket for that component?

Comment 86 Paul Gozart 2019-03-19 22:16:48 UTC
(In reply to Gerrit Slomma from comment #85)
> No, i use the current qemu-kvm for Red Hat Enterprise Linux 7, released
> 2019-01-14 as per redhat network packet browser.
> I only see one 2.12.0-41.el8+2104+3e32e6f8 for Red Hat Enterprise Linux 7.
> I can't see any qemu-kvm-rhev or qemu-kvm-ev.
> Since the problem obviously also exists in component qemu-kvm, should i open
> a new ticket for this component?

Submitted bug 1690641 for component qemu-kvm in RHEL 7.6.
Paul

Comment 87 ict-sales 2019-08-06 03:45:52 UTC
I have the same issue.
Does Red Hat plan to make a statement or release an erratum for this issue?

Comment 90 errata-xmlrpc 2019-08-22 09:18:48 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2019:2553

