Bug 755670 - KVM doesn't like qcow2 images with cluster size of 512
Product: Fedora
Classification: Fedora
Component: qemu
Hardware: x86_64 Linux
Version: unspecified
Severity: unspecified
Assigned To: Kevin Wolf
Fedora Extras Quality Assurance
Depends On:
Reported: 2011-11-21 12:53 EST by joshua
Modified: 2013-01-09 19:33 EST (History)
17 users

See Also:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Last Closed: 2012-03-23 04:36:23 EDT
Type: ---
Regression: ---
Mount Type: ---
Documentation: ---
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---

Attachments: None
Description joshua 2011-11-21 12:53:30 EST
Description of problem:

KVM virtual machines perform very poorly with qcow2 images created with a cluster size of 512. Disk access times are terrible.

Version-Release number of selected component (if applicable):


How reproducible:

Create a virtual disk image:
$ sudo qemu-img create -f qcow2 -o cluster_size=512 node1.img 6G

Create a standard virtual machine using virt-manager
Ensure that the virtual drive is using virtio

Do an install from a Fedora 16 x86_64 ISO file. It takes hours.
Comment 1 joshua 2011-11-21 12:56:22 EST
When creating the VM from the wizard, OS type was Linux, Version was "Fedora 16"
Comment 2 Fedora Admin XMLRPC Client 2012-03-15 13:56:00 EDT
This package has changed ownership in the Fedora Package Database.  Reassigning to the new owner of this component.
Comment 3 joshua 2012-03-15 20:40:24 EDT
Come on guys, there is clearly a bug here.
Comment 4 Dor Laor 2012-03-18 05:02:04 EDT
Where is this install spending most of its time?
Does it end much faster with a larger cluster size, like 1M?
Comment 5 Kevin Wolf 2012-03-19 12:14:42 EDT
Joshua, this is most likely not a bug, but expected behaviour. Using 512-byte clusters means that you have 128 times as many metadata writes as with the default cluster size. This hurts performance a lot.
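[Editorial note: a back-of-envelope sketch of the amplification described above. The 8-bytes-per-L2-entry figure is an assumption from the qcow2 format, not from this report; the 6G size is the image from the reproduction steps.]

```shell
# Sketch (assumption: qcow2 stores one 8-byte L2 table entry per guest
# cluster), showing how much more metadata a 512-byte cluster size implies
# for the reporter's 6G image compared with the 64k default.
image_bytes=$(( 6 * 1024 * 1024 * 1024 ))    # the 6G image from the report

for cs in 65536 512; do
    clusters=$(( image_bytes / cs ))
    l2_kib=$(( clusters * 8 / 1024 ))        # 8 bytes per L2 entry
    echo "cluster_size=$cs: $clusters clusters, ${l2_kib} KiB of L2 tables"
done

# Ratio of cluster counts (and hence metadata writes) between the two sizes:
echo "amplification: $(( 65536 / 512 ))x"
```

This is where the "128 times as many metadata writes" figure comes from: 65536 / 512 = 128.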

What are the reasons for using such a small cluster size? I haven't seen it used anywhere outside testing or debugging until now. Benchmarks have shown that the default cluster size of 64k is optimal performance-wise for most workloads.

One thing you should make sure of when using small cluster sizes is that you use a cache mode that allows metadata updates to be batched, i.e. one of cache=none/writeback/unsafe. Do you already use one of these?
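[Editorial note: a hypothetical invocation sketch of the cache-mode suggestion above. cache=none/writeback/unsafe are real qemu -drive cache values; the exact command line is illustrative, not taken from this report.]

```shell
# Hypothetical sketch: attach the reporter's image as a virtio disk with
# cache=none, one of the modes that lets qcow2 metadata updates be batched
# (cache=writeback or cache=unsafe would also qualify; writethrough would not).
drive_opts="file=node1.img,if=virtio,cache=none"
echo "qemu-kvm -drive $drive_opts"
```

In virt-manager the same setting is the disk's "cache mode" field rather than a raw command-line option.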

Another option may be metadata preallocation, though I'm not sure if it achieves what you intended.
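[Editorial note: a hypothetical sketch of the preallocation option mentioned above. preallocation=metadata is a standard qcow2 creation option for qemu-img; the command is only constructed and printed here, not run.]

```shell
# Hypothetical sketch: recreate the image with metadata preallocation, so the
# L1/L2 tables are allocated up front instead of on the first write to each
# cluster. Combines with the reporter's original cluster_size=512 option.
create_cmd="qemu-img create -f qcow2 -o cluster_size=512,preallocation=metadata node1.img 6G"
echo "$create_cmd"
```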
Comment 6 joshua 2012-03-22 17:42:24 EDT
I didn't intend anything; it was simply curiosity. It took hours... most of the time was spent in the package installation.
