Bug 695024 - Very slow performance of OpenBSD guest with qemu-kvm-0.13.0-1.fc13.x86_64 from updates
Summary: Very slow performance of OpenBSD guest with qemu-kvm-0.13.0-1.fc13.x86_64 from updates
Keywords:
Status: CLOSED WONTFIX
Alias: None
Product: Fedora
Classification: Fedora
Component: qemu
Version: 13
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Assignee: Justin M. Forbes
QA Contact: Fedora Extras Quality Assurance
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2011-04-09 20:25 UTC by Mikolaj Kucharski
Modified: 2013-01-09 11:50 UTC
CC List: 19 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2011-06-27 11:51:42 UTC
Type: ---
Embargoed:



Description Mikolaj Kucharski 2011-04-09 20:25:40 UTC
Description of problem:

After upgrading qemu-kvm from qemu-kvm-0.12.3-8.fc13.x86_64 to qemu-kvm-0.13.0-1.fc13.x86_64 and fully rebooting the machine (to be sure that I had the latest kernel and that libvirtd used the new emulator; the reboot was not essential to hit the issue), all my OpenBSD guests are very slow and cause high CPU usage in the parent qemu-kvm process:

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
 2250 qemu      20   0  742m  81m 3536 S 99.8  2.1 285:32.51 qemu-kvm


Version-Release number of selected component (if applicable):

new, slow, bad:
# rpm -q qemu-kvm
qemu-kvm-0.13.0-1.fc13.x86_64

old, fast, good:
# rpm -q qemu-kvm
qemu-kvm-0.12.3-8.fc13.x86_64


How reproducible:

Install F13 x86_64, then install libvirt, qemu-kvm, python-virtinst and all updates.


Steps to Reproduce:
1. Install the OS, libvirt, qemu-kvm, python-virtinst and all updates:

# yum -q clean all
# yum upgrade
Setting up Upgrade Process
No Packages marked for Update

2. Get the OpenBSD 4.8 i386 install ISO

# wget -O /var/lib/libvirt/boot/install48.iso ftp://ftp.heanet.ie/pub/OpenBSD/4.8/i386/install48.iso
# restorecon -R /var/lib/libvirt/boot/

3. Install the guest OS

# virt-install \
        --connect qemu:///system \
        --name openbsd \
        --ram 512 \
        --disk path=/var/lib/libvirt/images/openbsd.dsk,cache=writeback,size=10 \
        --cdrom /var/lib/libvirt/boot/install48.iso \
        --network bridge=virbr1,model=e1000 \
        --vnc \
        --noautoconsole \
        --hvm \
        --accelerate \
        --os-variant openbsd4

4. The ramdisk kernel used by the OpenBSD installer works fine, with the usual performance; parent CPU usage is around 5% to 10% when the guest is idle. This is normal.

5. After the installation completes, reboot out of the installer and start the OS

# virsh start openbsd

6. I'm using the serial console to get into the guest; you can also ssh into it

# virsh console openbsd
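
If the guest has no serial console configured yet, it can be enabled from inside OpenBSD roughly like this (a minimal sketch, not part of the original report; it assumes the domain has a serial device that shows up as com0 in the guest):

# echo "set tty com0" >> /etc/boot.conf

A getty can additionally be enabled on tty00 in /etc/ttys to get a login prompt on that console.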

Actual results:

The guest OS will be really, really slow and the qemu-kvm process will take close to 100% CPU on the parent.

Expected results:

An idle OpenBSD guest usually takes from 1% to 10% CPU on the parent; the guest OS is fast and very snappy.

Additional info:

When I run:

# yum downgrade qemu-kvm qemu-system-x86 qemu-common gpxe-roms-qemu

and shut down the OpenBSD guest (via the 'halt -p' command) and then 'virsh start' it again after the above downgrade (qemu-kvm-2:0.12.3-8.fc13.x86_64), everything is back to normal and OpenBSD is fast.
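
To keep a later 'yum upgrade' from pulling the slow version back in, the relevant packages can be excluded (a minimal sketch, not from the original report; the exclude pattern is an assumption, and it assumes /etc/yum.conf only contains the [main] section):

# echo "exclude=qemu* gpxe-roms-qemu" >> /etc/yum.conf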

I can even run two OpenBSD guests next to each other, one on qemu-kvm 0.13 and one on qemu-kvm 0.12 (install and run openbsd1 on the old kvm, yum upgrade, install and run openbsd2 on the new kvm), and when they are running together I can see that one is fast and the other is very slow. Here are the idle guests; the high-CPU one is the new qemu-kvm, the low-CPU one is the old qemu-kvm:

 1966 qemu      20   0  357m  54m 3536 S 86.8  1.4 344:24.96 qemu-kvm
31482 qemu      20   0  739m  84m 3476 R  9.3  2.1   4:22.52 qemu-kvm
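
To confirm which PID belongs to which guest, the libvirt domain name appears as a '-name' argument on each qemu-kvm command line (a minimal sketch, not from the original report; the PIDs are the ones shown above):

# ps -o pid,args -p 1966,31482 | grep -o -- '-name [^ ]*'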

Comment 1 Mikolaj Kucharski 2011-04-09 20:46:11 UTC
Tested with the following guests:

OpenBSD 4.9 (GENERIC) #671: Wed Mar  2 07:09:00 MST 2011
    deraadt.org:/usr/src/sys/arch/i386/compile/GENERIC

OpenBSD 4.8 (GENERIC) #136: Mon Aug 16 09:06:23 MDT 2010
    deraadt.org:/usr/src/sys/arch/i386/compile/GENERIC


More info about parent:

# rpm -q fedora-release
fedora-release-13-1.noarch

# uname -r
2.6.34.8-68.fc13.x86_64

# grep model /proc/cpuinfo
model           : 67
model name      : AMD Athlon(tm) 64 X2 Dual Core Processor 6000+
model           : 67
model name      : AMD Athlon(tm) 64 X2 Dual Core Processor 6000+

# lsmod | grep kvm
kvm_amd                35750  9
kvm                   260338  1 kvm_amd

Comment 2 Mikolaj Kucharski 2011-04-09 20:53:24 UTC
I spelled 'guest' wrong so many times here :/

Comment 3 Mikolaj Kucharski 2011-04-10 15:37:54 UTC
Also, since I'm testing with different versions of qemu-kvm, to avoid the error below I've used machine type 'pc-0.12' for all my tests (<type arch='x86_64' machine='pc-0.12'>hvm</type>; see the domain XML sketch after the error output).

# virsh start openbsd
error: Failed to start domain openbsd
error: internal error Process exited while reading console log output: Supported machines are:
pc-0.12    Standard PC
pc-0.11    Standard PC, qemu 0.11
pc-0.10    Standard PC, qemu 0.10
isapc      ISA-only PC
pc         Standard PC (alias of fedora-13)
fedora-13  Standard PC (default)
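
For reference, the machine type mentioned above goes into the <os> element of the libvirt domain XML (a minimal sketch of that element only, not the full configuration used for these tests):

  <os>
    <type arch='x86_64' machine='pc-0.12'>hvm</type>
  </os>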

Comment 4 Avi Kivity 2011-04-12 12:21:34 UTC
Works as expected for me on kvm.git 7a7ada1bfb95 qemu-kvm.git df85c051d780b.

Please try the latest upstream versions of the kernel and qemu-kvm.  If the problem persists, please post kvm_stat output while it occurs.

Comment 5 Mikolaj Kucharski 2011-04-12 12:52:51 UTC
Are there any pre-compiled binaries available for F13?

Comment 6 Avi Kivity 2011-04-12 13:20:33 UTC
You might try virt-preview:

http://fedoraproject.org/wiki/Virtualization_Preview_Repository

however, that doesn't cover the kernel.  Perhaps Rawhide's kernel package will work with F13 (or perhaps virt-preview's qemu-kvm will be sufficient).

Comment 7 Mikolaj Kucharski 2011-04-12 13:46:20 UTC
Thanks Avi for the info, but the 'virt-preview' repo contains RPMs with the same qemu version that I'm reporting here as broken. Anyway, from what I can see at

http://jforbes.fedorapeople.org/virt-preview/f13/x86_64/

the versions are exactly the same. I'll have a look at the source RPMs from F14 and F15.

Comment 8 Bug Zapper 2011-05-30 10:45:28 UTC
This message is a reminder that Fedora 13 is nearing its end of life.
Approximately 30 (thirty) days from now Fedora will stop maintaining
and issuing updates for Fedora 13.  It is Fedora's policy to close all
bug reports from releases that are no longer maintained.  At that time
this bug will be closed as WONTFIX if it remains open with a Fedora 
'version' of '13'.

Package Maintainer: If you wish for this bug to remain open because you
plan to fix it in a currently maintained version, simply change the 'version' 
to a later Fedora version prior to Fedora 13's end of life.

Bug Reporter: Thank you for reporting this issue and we are sorry that 
we may not be able to fix it before Fedora 13 is end of life.  If you 
would still like to see this bug fixed and are able to reproduce it 
against a later version of Fedora please change the 'version' of this 
bug to the applicable version.  If you are unable to change the version, 
please add a comment here and someone will do it for you.

Although we aim to fix as many bugs as possible during every release's 
lifetime, sometimes those efforts are overtaken by events.  Often a 
more recent Fedora release includes newer upstream software that fixes 
bugs or makes them obsolete.

The process we are following is described here: 
http://fedoraproject.org/wiki/BugZappers/HouseKeeping

Comment 9 Bug Zapper 2011-06-27 11:51:42 UTC
Fedora 13 changed to end-of-life (EOL) status on 2011-06-25. Fedora 13 is 
no longer maintained, which means that it will not receive any further 
security or bug fix updates. As a result we are closing this bug.

If you can reproduce this bug against a currently maintained version of 
Fedora please feel free to reopen this bug against that version.

Thank you for reporting this bug and we are sorry it could not be fixed.

Comment 10 Nigel Horne 2012-10-12 14:47:54 UTC
This is a bug in QEMU (or possibly in OpenBSD), not in Red Hat's packaging - I see the same problem under Debian.  FreeBSD and NetBSD run at native speed; OpenBSD is so slow that it seems as though it's being interpreted.

Comment 11 Avi Kivity 2012-10-15 15:34:39 UTC
Please use 'perf top' and 'kvm_stat' to try to see the cause of the slowness.
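
For example (a sketch, not part of the original comment; substitute the PID of the busy qemu process on your host):

# perf top -p <qemu-pid>
# kvm_stat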

Comment 12 Nigel Horne 2012-11-19 18:12:39 UTC
perf top:


  9.90%  [kernel]                 [k] native_write_msr_safe
  8.81%  qemu-system-x86_64       [.] 0x1caae4
  7.59%  [kernel]                 [k] __kmalloc
  6.25%  [kernel]                 [k] native_read_msr_safe
  2.37%  [kernel]                 [k] copy_user_generic_string
  2.20%  [kvm_amd]                [k] svm_vcpu_put
  2.17%  [kernel]                 [k] timekeeping_get_ns
  1.65%  [kvm_amd]                [k] svm_vcpu_run
  1.60%  libglib-2.0.so.0.3200.4  [.] g_hash_table_lookup
  1.60%  [kvm]                    [k] kvm_vcpu_ioctl
  1.37%  [kvm]                    [k] kvm_set_msr_common
  1.35%  [kvm]                    [k] kvm_get_msr_common
  1.29%  [kvm_amd]                [k] paravirt_write_msr
  1.28%  [kvm]                    [k] kvm_arch_vcpu_ioctl
  1.26%  [kernel]                 [k] mutex_spin_on_owner
  1.01%  [kvm]                    [k] kvm_arch_vcpu_ioctl_run
  1.00%  [kvm_amd]                [k] svm_vcpu_load
  0.95%  [kernel]                 [k] sys_ioctl
  0.94%  [kvm_amd]                [k] svm_set_msr
  0.93%  libc-2.13.so             [.] __strcmp_sse2
  0.92%  [kernel]                 [k] system_call

kvm_stat:

kvm statistics

 kvm_mmio                                     10859    3320
 kvm_entry                                    10744    3265
 kvm_exit                                     10741    3265
 kvm_emulate_insn                             10356    3158
 kvm_page_fault                               10354    3153
 vcpu_match_mmio                              10329    3153
 kvm_apic                                      9819    2986
 kvm_userspace_exit                            9681    2953
 kvm_inj_virq                                   326     100
 kvm_apic_accept_irq                            325     100
 kvm_pio                                          4       1
 kvm_fpu                                          8       0
 kvm_exit(INTR)                                   4       0
 kvm_exit(HLT)                                    3       0
 kvm_exit(WRITE_CR0)                              2       0
 kvm_exit(INVD)                                   1       0
 kvm_exit(SHUTDOWN)                               1       0
 kvm_exit(LDTR_READ)                              1       0
 kvm_exit(NPF)                                    1       0
 kvm_exit(WRITE_DR5)                              1       0
 kvm_exit(WRITE_CR8)                              1       0
 kvm_exit(READ_DR3)                               1       0

