Bug 553462 - Extremely high physical CPU utilization with idle VM under qemu-kvm
Status: CLOSED WONTFIX
Product: Fedora
Classification: Fedora
Component: qemu
Version: 13
Hardware: x86_64 Linux
Priority: low  Severity: medium
Assigned To: Justin M. Forbes
Fedora Extras Quality Assurance
: Reopened
Depends On:
Blocks:
Reported: 2010-01-07 17:52 EST by spamgrinder
Modified: 2011-06-27 10:45 EDT (History)
20 users

Doc Type: Bug Fix
Last Closed: 2011-06-27 10:45:38 EDT
Attachments
output from "uname -a" on host system (118 bytes, text/plain)
2010-01-07 17:52 EST, spamgrinder
CPUinfo.txt, output from "cat /proc/cpuinfo" on host system (1.45 KB, text/plain)
2010-01-07 17:53 EST, spamgrinder
output from "ps -eaf |grep qemu-kvm", i.e. the command line which starts the VM (411 bytes, text/plain)
2010-01-07 17:54 EST, spamgrinder
output from "virsh dumpxml BugTest" (1.13 KB, text/xml)
2010-01-07 17:54 EST, spamgrinder
output from "lsmod |grep kvm" (75 bytes, text/plain)
2010-01-07 17:55 EST, spamgrinder
Screenshot of Virtual Machine Manager, overview (38.21 KB, image/png)
2010-01-07 17:58 EST, spamgrinder
Screenshot of Virtual Machine Manager, host details (42.62 KB, image/png)
2010-01-07 17:58 EST, spamgrinder
Screenshot of Virtual Machine Manager, VM console (16.71 KB, image/png)
2010-01-07 17:59 EST, spamgrinder
screenshot of where the VM is in the boot process when CPU goes to 100% (51.04 KB, image/png)
2010-02-04 13:23 EST, Dave Allan

Description spamgrinder 2010-01-07 17:52:40 EST
Created attachment 382348 [details]
output from "uname -a" on host system

Description of problem:
Starting a very simple, idle VM via command line or Virtual Machine Manager GUI results in ~ 100% CPU utilization.

Version-Release number of selected component (if applicable):
  etherboot-zroms-kvm-5.4.4-17.fc11.noarch
  libvirt-0.6.2-19.fc11.x86_64
  libvirt-python-0.6.2-19.fc11.x86_64
  python-virtinst-0.400.3-12.fc11.noarch
  qemu-common-0.10.6-9.fc11.x86_64
  qemu-img-0.10.6-9.fc11.x86_64
  qemu-kvm-0.10.6-9.fc11.x86_64
  qemu-system-x86-0.10.6-9.fc11.x86_64
  virt-manager-0.7.0-8.fc11.x86_64

How reproducible:
  Very.


Steps to Reproduce:
1. Create a VM with 1 CPU, x86_64 architecture, 512MB RAM, set to PXE boot (i.e. no OS)
2. Start the VM via command line or GUI
3. With the VM at the BIOS screen, press "F12" to enter the boot menu and stop the boot process.
4. Monitor physical CPU utilization; note the qemu-kvm process is using 100% of a core on a multi-core CPU.  (Note: if you assign 2 CPUs to the VM on a dual-core system, BOTH cores will use ~100%.)
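For reference, a VM matching these steps can be defined under libvirt from the shell. This is a command sketch only: the VM name and network are placeholders, and exact option spellings vary by virt-install version (older releases of the era used `--nodisks` instead of `--disk none`).

```shell
# Sketch: recreate the reproducer -- 1 vCPU, 512MB RAM, x86_64, PXE boot,
# no disk and therefore no OS. Adjust option names to your virt-install.
virt-install \
  --name BugTest \
  --arch x86_64 \
  --vcpus 1 \
  --ram 512 \
  --pxe \
  --disk none \
  --network network=default \
  --graphics vnc
```

Once defined, `virsh start BugTest` plus `top` on the host reproduces step 4.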
  
Actual results:
qemu-kvm process uses ~ 100% of a CPU (core)

Expected results:
qemu-kvm process should be using very few physical CPU cycles.


Additional info:

This is a dual-core system, so the 50% utilization shown in the attached screenshots of Virtual Machine Manager is 100% of a core.
Comment 1 spamgrinder 2010-01-07 17:53:33 EST
Created attachment 382349 [details]
CPUinfo.txt, output from "cat /proc/cpuinfo" on host system
Comment 2 spamgrinder 2010-01-07 17:54:12 EST
Created attachment 382350 [details]
output from "ps -eaf |grep qemu-kvm", i.e. the command line which starts the VM
Comment 3 spamgrinder 2010-01-07 17:54:42 EST
Created attachment 382351 [details]
output from "virsh dumpxml BugTest"
Comment 4 spamgrinder 2010-01-07 17:55:32 EST
Created attachment 382355 [details]
output from "lsmod |grep kvm"
Comment 5 spamgrinder 2010-01-07 17:58:09 EST
Created attachment 382356 [details]
Screenshot of Virtual Machine Manager, overview
Comment 6 spamgrinder 2010-01-07 17:58:42 EST
Created attachment 382357 [details]
Screenshot of Virtual Machine Manager, host details
Comment 7 spamgrinder 2010-01-07 17:59:14 EST
Created attachment 382358 [details]
Screenshot of Virtual Machine Manager, VM console
Comment 8 spamgrinder 2010-01-07 18:15:03 EST
I should add that I only noticed this problem recently.

I have a VM which I run regularly on the host system, and only recently noticed it using many physical cycles even when idle. This prompted me to try the "no OS at all" test case to reproduce the problem.

I have not changed the VM config in quite some time.  However I do regular updates of the host OS.
Comment 9 Dave Allan 2010-01-08 10:14:41 EST
I've reproduced this behavior on up-to-date F12 x86_64.  I created a VM using the attached BugTest.xml, and I also see 100% host CPU load when the VM is sitting at the boot menu.  My processor gets very hot, 70C and rising instead of the usual 57C, so I believe the CPU is actually working, and this is not simply misreporting load.  It's 100% reproducible for me.  I have no OS on the configured disk, and if I let it go on to attempt to boot and fail, the CPU load falls to zero.
Comment 10 spamgrinder 2010-02-01 15:16:30 EST
I just did a clean install of F12 on the original hardware and repeated the above procedure to reproduce this bug. As this bug is consistently reproducible under F12, I'm changing the product version to reflect this.
Comment 11 Justin M. Forbes 2010-02-03 11:12:40 EST

*** This bug has been marked as a duplicate of bug 478317 ***
Comment 12 spamgrinder 2010-02-03 14:10:20 EST
What was the reasoning behind marking this as a duplicate of bug 478317?

That bug lists physical CPU as running at 20% with an idle guest.  It also states that removing the tablet device from the VM reduces that to 7%.

In this report physical CPU utilization is at 100%.  Also, I've just confirmed removing the tablet device shows no apparent reduction in physical CPU usage, i.e. it remains at 100%.

I'm not suggesting this cannot be a duplicate.  However, it's not clear that they are the same problem.  Would you please clarify why this has been marked as a duplicate?

Thank you.
Comment 13 Justin M. Forbes 2010-02-04 12:34:59 EST
Sorry about that, I misread the initial report.  Looking into it now.
Comment 14 Justin M. Forbes 2010-02-04 12:42:24 EST
This appears to happen only while in PXE boot, for some reason.  When I have it sitting at a PXE menu, the CPU spins at 100%; when I choose a boot target, it PXE boots and CPU usage drops significantly.  Perhaps it is in the gPXE BIOS?
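One plausible mechanism (an assumption here, not something this report confirms) is that the firmware menu busy-polls for keyboard input instead of halting: a guest that never executes HLT keeps its vCPU thread runnable, so the host sees a full core in use. The cost difference between polling and blocking is easy to demonstrate on the host side:

```python
import time

def busy_wait(seconds: float) -> None:
    # Polls the clock in a tight loop, like a boot menu that never halts.
    end = time.monotonic() + seconds
    while time.monotonic() < end:
        pass

def idle_wait(seconds: float) -> None:
    # Blocks in the kernel, like a guest that executes HLT until an interrupt.
    time.sleep(seconds)

def cpu_cost(fn, seconds: float) -> float:
    # CPU time (not wall time) consumed by one call.
    start = time.process_time()
    fn(seconds)
    return time.process_time() - start

if __name__ == "__main__":
    busy = cpu_cost(busy_wait, 0.2)  # close to 0.2s of CPU time
    idle = cpu_cost(idle_wait, 0.2)  # close to 0s of CPU time
    print(busy > idle)  # → True
```

Both calls take 0.2s of wall time, but only the polling loop burns CPU for all of it.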
Comment 15 Dave Allan 2010-02-04 12:48:04 EST
I see it if I let it sit in the boot menu.
Comment 16 Dave Allan 2010-02-04 13:23:07 EST
Created attachment 388860 [details]
screenshot of where the VM is in the boot process when CPU goes to 100%

Here's a screenshot to clarify where the VM is in the boot process at the time that the CPU usage goes to 100%.
Comment 17 spamgrinder 2010-02-04 14:24:30 EST
It's not just in the boot process where physical CPU is at ~100%.  I used the PXE example as it demonstrates the problem in a simple environment where there is no OS/application to generate load.  However, I see the same problem after booting a VM with Microsoft Windows XP Pro.  In short, simply starting the VM causes physical CPU utilization to jump to 100%.

One thing I find interesting is the relationship between virtual CPU and physical core utilization.  The physical system described above has a dual-core CPU.  If I start a VM with 1 virtual CPU, the physical box goes to 50% overall utilization, which is 100% of a core.  If I start a VM with 2 virtual CPUs, the physical box uses 100% on each core.  i.e. a virtual CPU uses 100% of a CPU core even when there should be little to no activity on the virtual CPU.
Comment 18 Amit Shah 2010-02-04 23:55:39 EST
(In reply to comment #17)
> It's not just in the boot process where physical CPU is at ~100%.  I used the
> PXE example as it demonstrates the problem in a simple environment where there
> is no OS/application to generate load.  However, I see the same problem after
> booting a VM with Microsoft Windows XP Pro.  In short, simply starting the VM
> causes physical CPU utilization to jump to 100%.

Windows, at boot-up, touches all the pages in RAM and zeroes them out. That could explain the 100% cpu usage at boot time. Can you report cpu usage after the desktop is shown and all apps are loaded?

> One thing I find interesting is the relationship between virtual CPU and
> physical core utilization.   The physical system described above has a
> dual-core CPU.  If I start a VM with 1 virtual CPU, the physical box goes to
> 50% overall utilization which is 100% of a core.  If I start a VM with 2
> virtual CPUs, the physical box uses 100% on each core.  i.e. A virtual CPU uses
> 100% of a CPU core even when there should be little to know activity on the
> virtual CPU.    

That's not inconsistent: if a guest detects it has more cpus, it will try using all of them. With KVM, each vcpu is handled by a thread. If the guest gives work to both the vcpus, both the threads will be active.
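The vCPU-per-thread point can be checked directly on the host: qemu-kvm's vCPU threads show up as entries under /proc/<pid>/task (the same threads `ps -eLf` lists). Below is a sketch of that inspection run against an ordinary multithreaded process instead of qemu; the helper name is mine, and it is Linux-specific:

```python
import os
import threading

def thread_ids(pid: int) -> list[int]:
    # /proc/<pid>/task has one entry per host thread; for qemu-kvm that
    # includes one thread per configured vCPU.
    return sorted(int(t) for t in os.listdir(f"/proc/{pid}/task"))

if __name__ == "__main__":
    stop = threading.Event()
    workers = [threading.Thread(target=stop.wait) for _ in range(2)]
    for w in workers:
        w.start()
    print(len(thread_ids(os.getpid())))  # main thread + 2 workers
    stop.set()
    for w in workers:
        w.join()
```

Pointing `thread_ids()` at a qemu-kvm PID shows which thread to watch in `top -H` when attributing the 100% load to a specific vCPU.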
Comment 19 spamgrinder 2010-02-05 01:19:19 EST
CPU usage goes to 100% at VM start and stays there regardless of whether an OS is booted.  To answer your specific question, it remains at 100% well after the VM with Win XP Pro has booted, apps have loaded, and the system has had time to settle.  The guest side then reports very little CPU utilization, but the host CPU still shows 100%.
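The host-side figures in this report come from top and Virtual Machine Manager; the same per-process number can be derived by sampling utime+stime from /proc/<pid>/stat, which is useful for logging the load over a VM's lifetime. A minimal sketch (Linux-only; field offsets per proc(5), function names are mine):

```python
import os
import time

CLK_TCK = os.sysconf("SC_CLK_TCK")  # clock ticks per second

def cpu_seconds(pid: int) -> float:
    # utime and stime are fields 14 and 15 of /proc/<pid>/stat; splitting
    # after the ')' keeps a space in the comm field from shifting them.
    with open(f"/proc/{pid}/stat") as f:
        fields = f.read().rsplit(")", 1)[1].split()
    return (int(fields[11]) + int(fields[12])) / CLK_TCK

def cpu_percent(pid: int, interval: float = 0.5) -> float:
    # 100.0 means one full core busy, matching top's per-process view.
    before = cpu_seconds(pid)
    time.sleep(interval)
    return 100.0 * (cpu_seconds(pid) - before) / interval
```

A qemu-kvm process exhibiting this bug would report close to 100 per vCPU here even while the guest believes it is idle.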
Comment 20 spamgrinder 2010-02-22 11:23:37 EST
I just did a yum update to the test system.  This included an updated kernel and various qemu/kvm/libvirt related packages.  Unfortunately, the problem still exists.

The following is the latest list of packages on the test system:

> uname -a
Linux host.name.obscured 2.6.31.12-174.2.22.fc12.x86_64 #1 SMP Fri Feb 19 18:55:03 UTC 2010 x86_64 x86_64 x86_64 GNU/Linux

> rpm -qa | egrep "kvm|qemu|virt" |sort
gpxe-roms-qemu-0.9.9-1.20091018git.fc12.noarch
libvirt-0.7.1-15.fc12.x86_64
libvirt-client-0.7.1-15.fc12.x86_64
libvirt-python-0.7.1-15.fc12.x86_64
python-virtinst-0.500.1-2.fc12.noarch
qemu-common-0.11.0-13.fc12.x86_64
qemu-img-0.11.0-13.fc12.x86_64
qemu-kvm-0.11.0-13.fc12.x86_64
qemu-kvm-tools-0.11.0-13.fc12.x86_64
qemu-system-x86-0.11.0-13.fc12.x86_64
virt-manager-0.8.2-1.fc12.noarch
Comment 21 Fedora Admin XMLRPC Client 2010-03-09 12:19:30 EST
This package has changed ownership in the Fedora Package Database.  Reassigning to the new owner of this component.
Comment 22 David Swegen 2010-04-01 05:47:16 EDT
I believe I am seeing this same problem too: at the VM BIOS the CPU is at 100%, and when running an idling OS on the VM the host sits at around 80-100%.

Interestingly enough it seems that if I reboot the VM the host will then sit at ~25-45% CPU when the VM is back up again (I need to test this some more to be certain this is consistent behaviour).
Comment 23 Nenad Opsenica 2010-04-12 06:03:03 EDT
A similar problem happens to me with CentOS 5.4 and a Win XP Pro VM on one of the hardware machines. But it is strange that WinXP on another machine, configured almost identically, behaves well and does not eat up CPU.

etherboot-zroms-kvm-5.4.4-10.el5.centos
kvm-tools-83-105.el5_4.28
kmod-kvm-83-105.el5_4.28
kvm-83-105.el5_4.28
kvm-qemu-img-83-105.el5_4.28
kernel-2.6.18-164.15.1.el5

# cat /proc/cpuinfo
processor       : 0
vendor_id       : AuthenticAMD
cpu family      : 16
model           : 4
model name      : AMD Phenom(tm) II X4 955 Processor
stepping        : 2
cpu MHz         : 800.000
cache size      : 512 KB
physical id     : 0
siblings        : 4
core id         : 0
cpu cores       : 4
apicid          : 0
fpu             : yes
fpu_exception   : yes
cpuid level     : 5
wp              : yes
flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm 3dnowext 3dnow constant_tsc nonstop_tsc pni cx16 popcnt lahf_lm cmp_legacy svm extapic cr8_legacy altmovcr8 abm sse4a misalignsse 3dnowprefetch osvw
bogomips        : 6400.29
TLB size        : 1024 4K pages
clflush size    : 64
cache_alignment : 64
address sizes   : 48 bits physical, 48 bits virtual
power management: ts ttp tm stc 100mhzsteps hwpstate [8]

processor       : 1
vendor_id       : AuthenticAMD
cpu family      : 16
model           : 4
model name      : AMD Phenom(tm) II X4 955 Processor
stepping        : 2
cpu MHz         : 800.000
cache size      : 512 KB
physical id     : 0
siblings        : 4
core id         : 1
cpu cores       : 4
apicid          : 1
fpu             : yes
fpu_exception   : yes
cpuid level     : 5
wp              : yes
flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm 3dnowext 3dnow constant_tsc nonstop_tsc pni cx16 popcnt lahf_lm cmp_legacy svm extapic cr8_legacy altmovcr8 abm sse4a misalignsse 3dnowprefetch osvw
bogomips        : 6400.45
TLB size        : 1024 4K pages
clflush size    : 64
cache_alignment : 64
address sizes   : 48 bits physical, 48 bits virtual
power management: ts ttp tm stc 100mhzsteps hwpstate [8]

processor       : 2
vendor_id       : AuthenticAMD
cpu family      : 16
model           : 4
model name      : AMD Phenom(tm) II X4 955 Processor
stepping        : 2
cpu MHz         : 800.000
cache size      : 512 KB
physical id     : 0
siblings        : 4
core id         : 2
cpu cores       : 4
apicid          : 2
fpu             : yes
fpu_exception   : yes
cpuid level     : 5
wp              : yes
flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm 3dnowext 3dnow constant_tsc nonstop_tsc pni cx16 popcnt lahf_lm cmp_legacy svm extapic cr8_legacy altmovcr8 abm sse4a misalignsse 3dnowprefetch osvw
bogomips        : 6400.40
TLB size        : 1024 4K pages
clflush size    : 64
cache_alignment : 64
address sizes   : 48 bits physical, 48 bits virtual
power management: ts ttp tm stc 100mhzsteps hwpstate [8]

processor       : 3
vendor_id       : AuthenticAMD
cpu family      : 16
model           : 4
model name      : AMD Phenom(tm) II X4 955 Processor
stepping        : 2
cpu MHz         : 800.000
cache size      : 512 KB
physical id     : 0
siblings        : 4
core id         : 3
cpu cores       : 4
apicid          : 3
fpu             : yes
fpu_exception   : yes
cpuid level     : 5
wp              : yes
flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm 3dnowext 3dnow constant_tsc nonstop_tsc pni cx16 popcnt lahf_lm cmp_legacy svm extapic cr8_legacy altmovcr8 abm sse4a misalignsse 3dnowprefetch osvw
bogomips        : 6400.42
TLB size        : 1024 4K pages
clflush size    : 64
cache_alignment : 64
address sizes   : 48 bits physical, 48 bits virtual
power management: ts ttp tm stc 100mhzsteps hwpstate [8]
Comment 24 David Swegen 2010-06-04 12:30:58 EDT
Upgrading to Fedora 13 has brought the CPU usage down to a much saner level. With one VM idling it goes between 6% and 12%, which is a vast improvement over what I had under F12.
Comment 25 spamgrinder 2010-06-18 10:22:11 EDT
I've done a fresh install of F13 on the same hardware used for the original report back in January.  The system has the latest updates as of today, June 18, 2010.

There has been no change.  CPU (core) utilization is still at 100%, even for a VM which is running but has not yet booted an OS. (See previous postings for details.)
Comment 26 Bug Zapper 2010-11-03 21:35:26 EDT
This message is a reminder that Fedora 12 is nearing its end of life.
Approximately 30 (thirty) days from now Fedora will stop maintaining
and issuing updates for Fedora 12.  It is Fedora's policy to close all
bug reports from releases that are no longer maintained.  At that time
this bug will be closed as WONTFIX if it remains open with a Fedora 
'version' of '12'.

Package Maintainer: If you wish for this bug to remain open because you
plan to fix it in a currently maintained version, simply change the 'version' 
to a later Fedora version prior to Fedora 12's end of life.

Bug Reporter: Thank you for reporting this issue and we are sorry that 
we may not be able to fix it before Fedora 12 is end of life.  If you 
would still like to see this bug fixed and are able to reproduce it 
against a later version of Fedora please change the 'version' of this 
bug to the applicable version.  If you are unable to change the version, 
please add a comment here and someone will do it for you.

Although we aim to fix as many bugs as possible during every release's 
lifetime, sometimes those efforts are overtaken by events.  Often a 
more recent Fedora release includes newer upstream software that fixes 
bugs or makes them obsolete.

The process we are following is described here: 
http://fedoraproject.org/wiki/BugZappers/HouseKeeping
Comment 27 Dave Allan 2010-11-04 21:42:48 EDT
Moving to F13 per comment 25.  I will attempt to repro on F14 shortly.
Comment 28 Dave Allan 2010-12-14 18:24:00 EST
I tried to reproduce the high CPU utilization while at the boot menu in F14 and I am not able to.  I see consistent 15% or so, but nothing like what I saw on earlier releases.  spamgrinder, does it repro for you on F14?
Comment 29 Matti Lehti 2010-12-15 16:02:40 EST
I never had this problem with F13, but noticed it some time after upgrading to F14.
Installed qemu version:
Name        : qemu
Arch        : x86_64
Epoch       : 2
Version     : 0.13.0
Release     : 1.fc14

Don't know if it is relevant, but I noticed the following behaviour in a FreeBSD VM:
1. Boot to the bootloader menu -> physical CPU usage of qemu-kvm is >95%
2. Stop the FreeBSD VM bootloader timer -> CPU load drops below 20%
3. Boot the FreeBSD VM or escape to the bootloader prompt -> CPU load jumps to >95%
Comment 30 Chris Lloyd 2011-04-21 06:16:24 EDT
There seems to be a huge improvement with the latest set of RPM upgrades to Fedora 14. Far lower CPU usage on each idle VM process.

libvirt-0.8.3-9.fc14.x86_64
libvirt-client-0.8.3-9.fc14.x86_64
gpxe-roms-qemu-1.0.1-3.fc14.noarch
qemu-kvm-0.13.0-1.fc14.x86_64
qemu-img-0.13.0-1.fc14.x86_64
qemu-common-0.13.0-1.fc14.x86_64
qemu-system-x86-0.13.0-1.fc14.x86_64
kernel-2.6.35.12-88.fc14.x86_64
Comment 31 Bug Zapper 2011-06-02 12:58:17 EDT
This message is a reminder that Fedora 13 is nearing its end of life.
Approximately 30 (thirty) days from now Fedora will stop maintaining
and issuing updates for Fedora 13.  It is Fedora's policy to close all
bug reports from releases that are no longer maintained.  At that time
this bug will be closed as WONTFIX if it remains open with a Fedora 
'version' of '13'.

Package Maintainer: If you wish for this bug to remain open because you
plan to fix it in a currently maintained version, simply change the 'version' 
to a later Fedora version prior to Fedora 13's end of life.

Bug Reporter: Thank you for reporting this issue and we are sorry that 
we may not be able to fix it before Fedora 13 is end of life.  If you 
would still like to see this bug fixed and are able to reproduce it 
against a later version of Fedora please change the 'version' of this 
bug to the applicable version.  If you are unable to change the version, 
please add a comment here and someone will do it for you.

Although we aim to fix as many bugs as possible during every release's 
lifetime, sometimes those efforts are overtaken by events.  Often a 
more recent Fedora release includes newer upstream software that fixes 
bugs or makes them obsolete.

The process we are following is described here: 
http://fedoraproject.org/wiki/BugZappers/HouseKeeping
Comment 32 Bug Zapper 2011-06-27 10:45:38 EDT
Fedora 13 changed to end-of-life (EOL) status on 2011-06-25. Fedora 13 is 
no longer maintained, which means that it will not receive any further 
security or bug fix updates. As a result we are closing this bug.

If you can reproduce this bug against a currently maintained version of 
Fedora please feel free to reopen this bug against that version.

Thank you for reporting this bug and we are sorry it could not be fixed.
