Bug 505695 - Poor KVM guest performance doing kernel builds (100+% overhead, w/ 8vcpu and virtio)
Keywords:
Status: CLOSED WORKSFORME
Alias: None
Product: Fedora
Classification: Fedora
Component: qemu
Version: 11
Hardware: All
OS: Linux
Priority: medium
Severity: medium
Target Milestone: ---
Assignee: Glauber Costa
QA Contact: Fedora Extras Quality Assurance
URL:
Whiteboard:
Depends On:
Blocks: F12VirtTarget
 
Reported: 2009-06-12 23:22 UTC by erikj
Modified: 2009-09-17 12:48 UTC
CC: 8 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2009-08-07 12:46:55 UTC
Type: ---
Embargoed:



Description erikj 2009-06-12 23:22:16 UTC
We have observed performance below where we hoped it would be for
qemu-kvm virtual machines running a higher-I/O, build-server-style load.

I posted this to the kvm list looking for ideas.
http://www.spinics.net/lists/kvm/msg17365.html

Here is the message I sent.

From: Erik Jacobson <erikj>
Date: Fri, 12 Jun 2009 16:04:43 -0500
To: kvm@vger.kernel.org
Subject: slow guest performance with build load, looking for ideas        

We have been trying to test qemu-kvm virtual machines under an IO load.
The IO load is quite simple: A timed build of the linux kernel and modules.
I have found that virtual machines take more than twice as long to do this
build as the host.  It doesn't seem to matter whether I use virtio or not.  Using
the same device and the same filesystem, the host is more than twice as fast.

We're hoping that we can get some advice on how to address this issue.  If
there are any options I should add for our testing, we'd appreciate it.  I'm
also game to try development bits to see if they make a difference.  If it  
turns out "that is just the way it is right now", we'd like to know that    
too.                                                                        

For these tests, I used Fedora 11 as the virtualization server.  I did this
because it has recent bits.  I experimented with SLES11 and Fedora11 guests.

In general, I used virt-manager to do the setup and launching.  So the
qemu-kvm command lines are based on that (and this explains why they are
a bit long).  I then modified the qemu-kvm command line to perform other
variations of the test.  Example command lines can be found at the end of
this message.                                                            

I performed tests on two different systems to be sure it isn't related to
specific hardware.                                                       

------------------
------------------
kernel/sw versions
------------------
------------------
virt host (always fedora 11): 2.6.29.4-167.fc11.x86_64
guest (same as above for fedora 11 guests, SLES 11 GA kernel for SLES guests)
qemu-kvm: qemu-kvm-0.10.4-4.fc11.x86_64                                      
libvirt: libvirt-0.6.2-11.fc11.x86_64                                        

----------------
----------------
Test description
----------------
----------------
The test I ran in different scenarios was always the same:
Running a build of the linux kernel and modules and timing the result.
I decided on this test because we tend to make build servers out of new
hardware and software releases to help put them through their paces.   

In all cases, the work area used was on a device separate from the root.
A disk device was always fed to qemu-kvm to use in its entirety.  The roots were
disk images, but the work area was always a fully imported device.  The one
exception was a couple of test runs using NFS from the host mounted on the guest.

The test build filesystem was always ext3 (except for the case of
nfs-from-host, where it was ext3 on the host and nfs on the guest).  The
filesystem was simply mounted by hand with the mount command and no special
options.                                                                   
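
For illustration, the mount was nothing more elaborate than something along
these lines (the device and mount point here are placeholders, not necessarily
the exact names used):

  # inside the guest, mounting the fully imported virtio work-area device
  mount /dev/vdb /work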

The run would look something like this... Setup:
 $ cd /work/erikj/linux-2.6.29.4                
 $ cp arch/x86/configs/x86_64_defconfig .config 
 $ make oldconfig                               
 $ make -j12  [ but not counted in the test results ]

The part of the test repeated for each run:
 $ make -j12 clean                        
 $ time (make -j12 && make -j12 modules)   # represents posted results

The output of the timing above is what is pasted in the results below.

------------------
------------------
Testing on host 1:
------------------
------------------
Host distro: Fedora 11
Guest distro: Fedora 11 and SLES11
8 vcpus provided to guest, 2048 megabytes of memory

Virtualization host system information:
System type: SGI Altix XE 310, Supermicro X7DBT mainboard
Memory: 4 GB, DDR2, 667 MHz                              
CPUs: 8 core, Xeon 2.33GHz, 4096 KB cache size           
disk 1 (root, 50gb part): HDS725050KLA360  (500gb, 7200 rpm, SATA, 8.5ms seek)
disk 2 (work area): HDT722525DLA380 (250GB, 7200 rpm, SATA, 8.5ms seek)       

fedora11 host, no guest (baseline)
-----------------------           
  -> real  10m38.116s  user  43m25.553s  sys   11m29.004s

fedora11 host, sles11 guest
---------------------------
 virtio, work area imported as a full device (not nfs)
  -> real  26m2.004s  user  99m29.177s  sys   30m31.586s

 virtio for root but workarea nfs-mounted from host
  -> real  68m37.306s  user  76m0.445s  sys   67m17.888s

fedora11 host, fedora11 guest
-----------------------------
 IDE emulation, no virtio, work-area device fully imported to the guest
  -> real  29m47.249s  user  59m1.583s  sys   41m34.281s                      

 Same as above, but with qemu cache=none parameter
  -> real  26m1.668s  user  66m14.812s  sys   46m21.366s

 virtio devices, device fully imported to guest for workarea, cache=none
  -> real  23m28.397s  user  68m27.730s  sys   47m50.256s               

 Didn't do NFS testing in this scenario.


------------------
------------------
Testing on host 2:
------------------
------------------
Host distro: Fedora 11
Guest distro: Fedora 11
8 vcpus provided to guest, 4096 megabytes of memory

System type: SGI Altix XE 250, Supermicro X7DWN+ main board
Memory: 8 x 1 GB DDR2 667 MHz DIMMs
CPUs: 8 Intel Xeon X5460, 3.16 GHz, 6144 KB cache
disk1: LSI MegaRAID volume, 292 GB, but root slice used is only 25 GB
disk2: LSI MegaRAID volume, 100 GB, full space used for build work area

fedora11 host, no guest (baseline)
-----------------------           
 -> real  6m25.008s   user  30m54.697s   sys   8m17.359s

fedora11 host, fedora11 guest
-----------------------------
  virtio, no cache= parameter supplied to qemu:
  -> real  19m46.770s   user  52m33.523s   sys   42m55.202s

  virtio guest, qemu cache=none parameter supplied:
  -> real  18m17.690s   user  51m3.223s   sys   41m22.047s

  IDE emulation , no cache parameter:
  -> real  22m41.472s   user  44m48.190s   sys   38m3.750s

  IDE emulation, qemu cache=none parameter supplied:
  -> real  19m53.111s   user  48m48.342s   sys  40m19.469s

---------------------------------------------
---------------------------------------------
Example qemu-kvm command lines for the tests:
---------------------------------------------
---------------------------------------------
virtio, no cache= parameter supplied to qemu:
Note: This is also exactly the command that libvirt ran

/usr/bin/qemu-kvm -S -M pc -m 4096 -smp 8 -name f11-test \
  -uuid b7b4b7e4-9c07-22aa-0c95-d5c8a24176c5 -monitor pty \
  -pidfile /var/run/libvirt/qemu//f11-test.pid -boot c \   
  -drive file=,if=ide,media=cdrom,index=2 \                
  -drive file=/var/lib/libvirt/images/f11-test.img,if=virtio,index=0,boot=on \
  -drive file=/dev/sdb,if=virtio,index=1 \                                    
  -net nic,macaddr=54:52:00:46:48:0e,vlan=0,model=virtio \                    
  -net tap,fd=20,script=,vlan=0,ifname=vnet0 -serial pty -parallel none -usb \
  -usbdevice tablet -vnc 127.0.0.1:0 -soundhw es1370                          


virtio guest, qemu cache=none parameter supplied:
Note: The command was modified so that running qemu by hand worked, including
setting up a tun interface so the network bridge works correctly outside of
libvirt.  The same is true of the following command lines.

/usr/bin/qemu-kvm -M pc -m 4096 -smp 8 -name f11-test \
  -uuid b7b4b7e4-9c07-22aa-0c95-d5c8a24176c5 -monitor pty \
  -pidfile /var/run/libvirt/qemu//f11-test.pid -boot c \   
  -drive file=,if=ide,media=cdrom,index=2 \
  -drive file=/var/lib/libvirt/images/f11-test.img,if=virtio,index=0,boot=on \
  -drive file=/dev/sdb,if=virtio,cache=none,index=1 \
  -net nic,macaddr=54:52:00:46:48:0e,vlan=0,model=virtio \
  -net tap,script=no,vlan=0,ifname=tap0 -serial pty -parallel none -usb \
  -usbdevice tablet -soundhw es1370

IDE emulation, no cache parameter:
/usr/bin/qemu-kvm -M pc -m 4096 -smp 8 -name f11-test \
  -uuid b7b4b7e4-9c07-22aa-0c95-d5c8a24176c5 -monitor pty \
  -pidfile /var/run/libvirt/qemu//f11-test.pid -boot c \
  -drive file=,if=ide,media=cdrom,index=2 \
  -drive file=/var/lib/libvirt/images/f11-test.img,if=virtio,index=0,boot=on \
  -drive file=/dev/sdb,if=ide,index=1 \
  -net nic,macaddr=54:52:00:46:48:0e,vlan=0,model=virtio \
  -net tap,script=no,vlan=0,ifname=tap0 -serial pty -parallel none -usb \
  -usbdevice tablet -soundhw es1370

IDE emulation, qemu cache=none parameter supplied:
/usr/bin/qemu-kvm -M pc -m 4096 -smp 8 -name f11-test \
  -uuid b7b4b7e4-9c07-22aa-0c95-d5c8a24176c5 -monitor pty \
  -pidfile /var/run/libvirt/qemu//f11-test.pid -boot c \
  -drive file=,if=ide,media=cdrom,index=2 \
  -drive file=/var/lib/libvirt/images/f11-test.img,if=virtio,index=0,boot=on \
  -drive file=/dev/sdb,if=ide,cache=none,index=1 \
  -net nic,macaddr=54:52:00:46:48:0e,vlan=0,model=virtio \
  -net tap,script=no,vlan=0,ifname=tap0 -serial pty -parallel none -usb \
  -usbdevice tablet -soundhw es1370

Comment 1 erikj 2009-06-15 15:17:56 UTC
Avi Kivity has suggested re-testing with Nehalem CPUs, as there are known
lock contention issues for build-style loads.  I'll be trying to locate a
system to run some tests in that scenario.

Comment 2 erikj 2009-06-18 23:11:42 UTC
Still bad results, even with the 5500 processor series on a multi-socket
system.  However, a series of problems with Fedora relating to this type
of processor in a multi-core socket rendered the result somewhat suspect.
Most of the issues in the bullets have BZs already.

Date: Thu, 18 Jun 2009 18:07:44 -0500
From: Erik Jacobson <erikj>
To: Avi Kivity <avi>
Cc: Erik Jacobson <erikj>, kvm@vger.kernel.org
Subject: Re: slow guest performance with build load, looking for ideas

Hello.  I'll top-post since the quoted text is just for reference.

Sorry the follow-up testing took so long.  We're very low on 5500/Nehalem
resources at the moment and I had to track down lots of stuff before
getting to the test.

I ran some tests on a 2-socket, 8-core system.  I wasn't pleased with the
results for a couple reasons.  One, the issue of it being twice as slow
as the host with no guest was still present.

However, in trying to make use of this system using Fedora 11, I ran into
several issues not directly related to virtualization.  So these test runs
come with that grain of salt.  Example issues...
 * Node ordering is not sequential (i.e., /sys/devices/system/node/node0 and
   node2 exist, but there is no node1).  This caused tools based on libvirt and
   friends to be unhappy.  I worked around this by using qemu-kvm by hand
   directly.  We found an LKML posting that addresses this issue; I didn't check
   whether it has been merged yet.
 * All cores show up as being associated with the first node (node0) even
   though half should be associated with the 2nd node (still researching that
   some).
 * In some of the timing runs on this system, the "real time" reported by
   the time command was off by 10 to 11 times.  Issues were found in
   the messages file that seemed to relate to this including HUGE time
   adjustments by NTP and kernel hrtimer 'interrupt too slow' messages.
   This specific problem seems to be intermittent.
 * None of the above problems were observed in 8-core/2-socket non-5500/
   Nehalem systems.  Of course, 2-socket non-Nehalem systems do not have
   multiple nodes listed under /sys.
 * I lose access to the resource today but can try to beg and plead again
   some time next week if folks have ideas to try.  Let me know.

So those are the grains of salt.  I've found that, when doing the timing by
hand instead of using the time command, the build time seems to be around
10 to 12 minutes.  I'm not sure how trustworthy the output of the time
command is in these trials.  In any event, that's still more than double
the host-alone time with no guests.

System:
SGI XE270, 8-core, Xeon X5570 (Nehalem), Hyperthreading turned off
Supermicro model: X8DTN
Disk1: root disk 147GB ST3146855SS 15K 16MB cache SAS
Disk2: work area disk 500GB HDS725050KLA360  7200rpm 16MB cache SATA
Distro: Everything Fedora11+released updates
Memory: 8 GB in 2048 MB DDR3 1066 MHz 18JSF25672PY-1G1D1 DIMMs

Only Fedora 11 was used (host and guest where applicable).
The first timing weirdness was observed on an F11 guest with no updates
applied.  I later applied the updates and the timings seemed to get
worse, although I don't trust the values any more.

F11+released updates has these versions:
kernel-2.6.29.4-167.fc11.x86_64
qemu-kvm-0.10.5-2.fc11.x86_64


The test, as before, was simply this kernel build.  The .config file has
plenty of modules configured.
time (make -j12 && make -j12 modules)



host only, no guest, baseline
-----------------------------
trial 1:
real	5m44.823s
user	28m45.725s
sys	5m46.633s

trial 2:
real	5m34.438s
user	28m14.347s
sys	5m41.597s


guest, 8 vcpu, 4096 mem, virtio, no cache param, disk device supplied in full
-----------------------------------------------------------------------------
trial 1:
real	125m5.995s
user	31m23.790s
sys	9m17.602s


trial 2 (changed to 7168 mb memory for the guest):
real	120m48.431s
user	14m38.967s
sys	6m12.437s


That's real strange...  The 'time' command is showing whacked out results.

I then watched a run by hand and counted it at about 10 minutes.  However,
this third run had the proper time!  So whatever the weirdness is, it doesn't
happen every time:

real	9m49.802s
user	24m46.009s
sys	8m10.349s

I decided this could be related to ntp running as I saw this in messages:
Jun 18 16:34:23 localhost ntpd[1916]: time reset -0.229209 s
Jun 18 16:34:23 localhost ntpd[1916]: kernel time sync status change 0001
Jun 18 16:40:17 localhost ntpd[1916]: synchronized to 128.162.244.1, stratum 2

and earlier:

Jun 18 16:19:09 localhost ntpd[1916]: synchronized to 128.162.244.1, stratum 2
Jun 18 16:19:09 localhost ntpd[1916]: time reset +6609.851122 s
Jun 18 16:23:39 localhost ntpd[1916]: synchronized to 128.162.244.1, stratum 2
Jun 18 16:24:04 localhost kernel: hrtimer: interrupt too slow, forcing clock min delta to 62725995 ns


I then installed all F11 updates in the guest and tried again (host had
updates all along).  I got these strange results, strange because of the
timing difference.  I didn't "watch a non-computer clock" for these.

Timing from that was:
trial 1:
real	16m10.337s
user	28m27.604s
sys	9m12.772s

trial 2:
real	11m45.934s
user	25m4.432s
sys	8m2.189s


Here is the qemu-kvm command line used.  The -m value was 4096 for the first
run and 7168 for the other runs.

# /usr/bin/qemu-kvm -M pc -m 4096 -smp 8 -name f11-test -uuid b7b4b7e4-9c07-22aa-0c95-d5c8a24176c5 -monitor pty -pidfile /var/run/libvirt/qemu//f11-test.pid -drive file=/foo/f11/Fedora-11-x86_64-DVD.iso,if=virtio,media=cdrom,index=2 -drive file=/var/lib/libvirt/images/f11-test.img,if=virtio,index=0,boot=on -drive file=/dev/sdb,if=virtio,index=1 -net nic,macaddr=54:52:00:46:48:0e,model=virtio -net user -serial pty -parallel none -usb -usbdevice tablet -vnc cct201:1 -soundhw es1370 -redir tcp:5555::22


/proc/cpuinfo is pasted after the test results.




# cat /proc/cpuinfo
processor	: 0
vendor_id	: GenuineIntel
cpu family	: 6
model		: 26
model name	: Intel(R) Xeon(R) CPU           X5570  @ 2.93GHz
stepping	: 5
cpu MHz		: 1600.000
cache size	: 8192 KB
physical id	: 0
siblings	: 4
core id		: 0
cpu cores	: 4
apicid		: 0
initial apicid	: 0
fpu		: yes
fpu_exception	: yes
cpuid level	: 11
wp		: yes
flags		: fpu vme de pse tsc msr pae mce cx8 apic mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx rdtscp lm constant_tsc arch_perfmon pebs bts rep_good xtopology nonstop_tsc pni dtes64 monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr pdcm dca sse4_1 sse4_2 lahf_lm ida tpr_shadow vnmi flexpriority ept vpid
bogomips	: 5865.69
clflush size	: 64
cache_alignment	: 64
address sizes	: 40 bits physical, 48 bits virtual
power management:

processor	: 1
vendor_id	: GenuineIntel
cpu family	: 6
model		: 26
model name	: Intel(R) Xeon(R) CPU           X5570  @ 2.93GHz
stepping	: 5
cpu MHz		: 1600.000
cache size	: 8192 KB
physical id	: 0
siblings	: 4
core id		: 1
cpu cores	: 4
apicid		: 2
initial apicid	: 2
fpu		: yes
fpu_exception	: yes
cpuid level	: 11
wp		: yes
flags		: fpu vme de pse tsc msr pae mce cx8 apic mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx rdtscp lm constant_tsc arch_perfmon pebs bts rep_good xtopology nonstop_tsc pni dtes64 monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr pdcm dca sse4_1 sse4_2 lahf_lm ida tpr_shadow vnmi flexpriority ept vpid
bogomips	: 5865.76
clflush size	: 64
cache_alignment	: 64
address sizes	: 40 bits physical, 48 bits virtual
power management:

processor	: 2
vendor_id	: GenuineIntel
cpu family	: 6
model		: 26
model name	: Intel(R) Xeon(R) CPU           X5570  @ 2.93GHz
stepping	: 5
cpu MHz		: 1600.000
cache size	: 8192 KB
physical id	: 0
siblings	: 4
core id		: 2
cpu cores	: 4
apicid		: 4
initial apicid	: 4
fpu		: yes
fpu_exception	: yes
cpuid level	: 11
wp		: yes
flags		: fpu vme de pse tsc msr pae mce cx8 apic mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx rdtscp lm constant_tsc arch_perfmon pebs bts rep_good xtopology nonstop_tsc pni dtes64 monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr pdcm dca sse4_1 sse4_2 lahf_lm ida tpr_shadow vnmi flexpriority ept vpid
bogomips	: 5823.99
clflush size	: 64
cache_alignment	: 64
address sizes	: 40 bits physical, 48 bits virtual
power management:

processor	: 3
vendor_id	: GenuineIntel
cpu family	: 6
model		: 26
model name	: Intel(R) Xeon(R) CPU           X5570  @ 2.93GHz
stepping	: 5
cpu MHz		: 1600.000
cache size	: 8192 KB
physical id	: 0
siblings	: 4
core id		: 3
cpu cores	: 4
apicid		: 6
initial apicid	: 6
fpu		: yes
fpu_exception	: yes
cpuid level	: 11
wp		: yes
flags		: fpu vme de pse tsc msr pae mce cx8 apic mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx rdtscp lm constant_tsc arch_perfmon pebs bts rep_good xtopology nonstop_tsc pni dtes64 monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr pdcm dca sse4_1 sse4_2 lahf_lm ida tpr_shadow vnmi flexpriority ept vpid
bogomips	: 5865.76
clflush size	: 64
cache_alignment	: 64
address sizes	: 40 bits physical, 48 bits virtual
power management:

processor	: 4
vendor_id	: GenuineIntel
cpu family	: 6
model		: 26
model name	: Intel(R) Xeon(R) CPU           X5570  @ 2.93GHz
stepping	: 5
cpu MHz		: 1600.000
cache size	: 8192 KB
physical id	: 1
siblings	: 4
core id		: 0
cpu cores	: 4
apicid		: 16
initial apicid	: 16
fpu		: yes
fpu_exception	: yes
cpuid level	: 11
wp		: yes
flags		: fpu vme de pse tsc msr pae mce cx8 apic mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx rdtscp lm constant_tsc arch_perfmon pebs bts rep_good xtopology nonstop_tsc pni dtes64 monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr pdcm dca sse4_1 sse4_2 lahf_lm ida tpr_shadow vnmi flexpriority ept vpid
bogomips	: 5865.80
clflush size	: 64
cache_alignment	: 64
address sizes	: 40 bits physical, 48 bits virtual
power management:

processor	: 5
vendor_id	: GenuineIntel
cpu family	: 6
model		: 26
model name	: Intel(R) Xeon(R) CPU           X5570  @ 2.93GHz
stepping	: 5
cpu MHz		: 1600.000
cache size	: 8192 KB
physical id	: 1
siblings	: 4
core id		: 1
cpu cores	: 4
apicid		: 18
initial apicid	: 18
fpu		: yes
fpu_exception	: yes
cpuid level	: 11
wp		: yes
flags		: fpu vme de pse tsc msr pae mce cx8 apic mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx rdtscp lm constant_tsc arch_perfmon pebs bts rep_good xtopology nonstop_tsc pni dtes64 monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr pdcm dca sse4_1 sse4_2 lahf_lm ida tpr_shadow vnmi flexpriority ept vpid
bogomips	: 5865.80
clflush size	: 64
cache_alignment	: 64
address sizes	: 40 bits physical, 48 bits virtual
power management:

processor	: 6
vendor_id	: GenuineIntel
cpu family	: 6
model		: 26
model name	: Intel(R) Xeon(R) CPU           X5570  @ 2.93GHz
stepping	: 5
cpu MHz		: 1600.000
cache size	: 8192 KB
physical id	: 1
siblings	: 4
core id		: 2
cpu cores	: 4
apicid		: 20
initial apicid	: 20
fpu		: yes
fpu_exception	: yes
cpuid level	: 11
wp		: yes
flags		: fpu vme de pse tsc msr pae mce cx8 apic mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx rdtscp lm constant_tsc arch_perfmon pebs bts rep_good xtopology nonstop_tsc pni dtes64 monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr pdcm dca sse4_1 sse4_2 lahf_lm ida tpr_shadow vnmi flexpriority ept vpid
bogomips	: 5865.80
clflush size	: 64
cache_alignment	: 64
address sizes	: 40 bits physical, 48 bits virtual
power management:

processor	: 7
vendor_id	: GenuineIntel
cpu family	: 6
model		: 26
model name	: Intel(R) Xeon(R) CPU           X5570  @ 2.93GHz
stepping	: 5
cpu MHz		: 1600.000
cache size	: 8192 KB
physical id	: 1
siblings	: 4
core id		: 3
cpu cores	: 4
apicid		: 22
initial apicid	: 22
fpu		: yes
fpu_exception	: yes
cpuid level	: 11
wp		: yes
flags		: fpu vme de pse tsc msr pae mce cx8 apic mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx rdtscp lm constant_tsc arch_perfmon pebs bts rep_good xtopology nonstop_tsc pni dtes64 monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr pdcm dca sse4_1 sse4_2 lahf_lm ida tpr_shadow vnmi flexpriority ept vpid
bogomips	: 5865.79
clflush size	: 64
cache_alignment	: 64
address sizes	: 40 bits physical, 48 bits virtual
power management:



On Sun, Jun 14, 2009 at 12:33:06PM +0300, Avi Kivity wrote:
> Erik Jacobson wrote:
>> We have been trying to test qemu-kvm virtual machines under an IO load.
>> The IO load is quite simple: A timed build of the linux kernel and modules.
>> I have found that virtual machines take more than twice as long to do this
>> build as the host.  It doesn't seem to matter if I use virtio or not,  Using
>> the same device and same filesystem, the host is more than twice as fast.
>>
>> We're hoping that we can get some advice on how to address this issue.  If
>> there are any options I should add for our testing, we'd appreciate it.  I'm
>> also game to try development bits to see if they make a difference.  If it
>> turns out "that is just the way it is right now", we'd like to know that
>> too.
>>
>> For these tests, I used Fedora 11 as the virtualization server.  I did this
>> because it has recent bits.  I experimented with SLES11 and Fedora11 guests.
>>
>> In general, I used virt-manager to do the setup and launching.  So the
>> qemu-kvm command lines are based on that (and this explains why they are
>> a bit long).  I then modified the qemu-kvm command line to perform other
>> variations of the test.  Example command lines can be found at the end of
>> this message.
>>
>> I performed tests on two different systems to be sure it isn't related to
>> specific hardware.
>>   
>
> What is the host cpu type?  On pre-Nehalem/Barcelona processors kvm has  
> poor scalability in mmu intensive workloads like kernel builds.
>
> -- 
> error compiling committee.c: too many arguments to function
-- 
Erik Jacobson - Linux System Software - SGI - Eagan, Minnesota

Comment 3 erikj 2009-06-20 03:48:19 UTC
2.6.30-git14 on the F11 host, 2.6.29.4-167.fc11 on the guest... No
improvements. FYI.

Comment 4 Mark McLoughlin 2009-06-22 11:55:17 UTC
Thanks for all the data, Erik - IMHO, the most useful bits so far are:

fedora11 host, no guest (baseline)
-----------------------           
  -> real  10m38.116s

fedora11 host, fedora11 guest
-----------------------------
 virtio devices, device fully imported to guest for workarea, cache=none
  -> real  23m28.397s

Best to keep plugging away on kvm@vger seeing if we can gather more data to help us identify where the bottlenecks are.

(Note: if you're testing 2.6.30 kernels from rawhide, they have lots of debugging configured, so they're probably not too useful for performance comparisons)

Comment 5 Mark McLoughlin 2009-07-03 13:22:18 UTC
Erik, as in bug #509383, please try virtio-blk in rotational mode
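
Concretely (borrowing the command that shows up later in comment 7), inside the
guest that amounts to something like:

  for f in /sys/block/vd*/queue/rotational; do echo 1 > $f; done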

Comment 6 erikj 2009-07-09 02:32:02 UTC
Same test system/details (but some newer f11 updates since the last post).
I'd say enabling block queue rotation made a difference, but not nearly as
big a difference as it did with the mkfs.ext3 operations.

There were other suggestions in the KVM thread that I haven't attempted yet.

Timing with the rotational stuff set to 1...

real    14m13.015s
user    29m42.162s
sys     8m37.416s

To confirm this was really better, I halted the virtual machine and restarted
it without setting the rotational values to 1.  I got this timing:

real    16m50.829s
user    29m33.933s
sys     9m4.905s


And finally, to confirm the numbers on the host with no guest running...
The same disk/filesystem, now mounted on the host instead of the guest, gave
this timing:

real    6m13.398s
user    26m56.061s
sys     5m34.477s


qemu-kvm command line, both guest runs:
/usr/bin/qemu-kvm -M pc -m 4096 -smp 8 -name f11-test -uuid b7b4b7e4-9c07-22aa-0c95-d5c8a24176c5 -monitor pty -pidfile /var/run/libvirt/qemu//f11-test.pid -drive file=/var/lib/libvirt/images/f11-test.img,if=virtio,index=0,boot=on -drive file=/dev/sdb,if=virtio,index=1 -drive file=/var/lib/libvirt/images/test.img,if=virtio,index=2 -net nic,macaddr=54:52:00:46:48:0e,model=virtio -net user -serial pty -parallel none -usb -usbdevice tablet -vnc cct201:1 -soundhw es1370 -redir tcp:5555::22

Comment 7 erikj 2009-07-09 19:02:38 UTC
Expanding on the above, here is an email I sent to the kvm list with some
other adjustments that made a difference.

Date: Thu, 9 Jul 2009 13:01:10 -0500
From: Erik Jacobson <erikj>
To: Avi Kivity <avi>
Cc: Erik Jacobson <erikj>, Mark McLoughlin <markmc>,
	kvm@vger.kernel.org, Jes Sorensen <jes>
Subject: Re: slow guest performance with build load, looking for ideas

>> Timing with the rotational stuff set to 1...
>>
>> real    14m13.015s
>> user    29m42.162s
>> sys     8m37.416s
>
> (user + sys) / real = 2.7
>
>> And finally, to confirm the numbers on the host with no guest running...
>> The same disk/filesystem, now mounted on the host instead of the guest, gave
>> this timing:
>>
>> real    6m13.398s
>> user    26m56.061s
>> sys     5m34.477s
>>    
>
> (user + sys) / real = 5.2
>
> I got 6.something in a guest!

> Please drop -usbdevice tablet and set the host I/O scheduler to  
> deadline.  Add cache=none to the -drive options.

yes, these changes make a difference.


Before starting qemu-kvm, I did this to change the IO scheduler:
BEFORE:
# for f in /sys/block/sd*/queue/scheduler; do cat $f; done
noop anticipatory deadline [cfq] 
noop anticipatory deadline [cfq] 

SET:
# for f in /sys/block/sd*/queue/scheduler; do echo "deadline" > $f; done

CONFIRM:
# for f in /sys/block/sd*/queue/scheduler; do cat $f; done
noop anticipatory [deadline] cfq 
noop anticipatory [deadline] cfq 


qemu command line.  Note that the USB tablet is off and cache=none is used in
the drive options:

/usr/bin/qemu-kvm -M pc -m 4096 -smp 8 -name f11-test -uuid b7b4b7e4-9c07-22aa-0c95-d5c8a24176c5 -monitor pty -pidfile /var/run/libvirt/qemu//f11-test.pid -drive file=/var/lib/libvirt/images/f11-test.img,if=virtio,index=0,boot=on,cache=none -drive file=/dev/sdb,if=virtio,index=1,cache=none -drive file=/var/lib/libvirt/images/test.img,if=virtio,index=2,cache=none -net nic,macaddr=54:52:00:46:48:0e,model=virtio -net user -serial pty -parallel none -usb -vnc cct201:1 -soundhw es1370 -redir tcp:5555::22


# rotation enabled this way in the guest, once the guest was started:
for f in /sys/block/vd*/queue/rotational; do echo 1 > $f; done

Test runs after make clean...
time (make -j12 && make -j12 modules)

real	10m25.585s
user	26m36.450s
sys	8m14.776s

2nd trial (make clean followed by the same test again):
real	9m21.626s
user	26m42.144s
sys	8m14.532s


Comment 8 Mark McLoughlin 2009-08-07 12:46:55 UTC
okay, I think the summary of all this is some tuning recommendations:

  1) Put virtio-blk devices in rotational mode:

     for f in /sys/block/vd*/queue/rotational; do echo 1 > $f; done

  2) Use -drive cache=none

  3) Set the host I/O scheduler to deadline:

     for f in /sys/block/sd*/queue/scheduler; do echo "deadline" > $f; done
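
For guests managed through libvirt (as with the original virt-manager setup),
recommendation 2 can also be carried in the domain XML rather than on a
hand-built qemu-kvm command line.  A sketch of what the work-area disk element
might look like, with the device path and target name taken from this report
as examples only:

     <disk type='block' device='disk'>
       <driver name='qemu' type='raw' cache='none'/>
       <source dev='/dev/sdb'/>
       <target dev='vdb' bus='virtio'/>
     </disk>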

I'm going to close this as WORKSFORME, since there's not much we can do in Fedora apart from recommending that people use these settings.

