Bug 1031303 - requesting 'fully allocated' raw volume gives bad performance on btrfs
Product: Virtualization Tools
Classification: Community
Component: libvirt
Hardware: x86_64 Linux
Version: unspecified
Severity: medium
Assigned To: Libvirt Maintainers
Reported: 2013-11-16 11:36 EST by Gene Czarcinski
Modified: 2013-11-17 10:56 EST

Doc Type: Bug Fix
Last Closed: 2013-11-17 10:56:07 EST
Type: Bug

Description Gene Czarcinski 2013-11-16 11:36:16 EST
Description of problem:
If the virtual disk image files are on a btrfs subvolume, they will normally suffer extreme fragmentation.  For example, even with chattr +C applied to /var/lib/libvirt/images, the current qcow2 sparse-allocated file winds up with something like 50,000 extents after an install.  Without chattr +C, things are even worse and keep degrading.

However, if the file is "raw" and pre-created with something like:
   dd if=/dev/zero of=/var/lib/libvirt/images/xxx.img bs=1024 count=<size>
then the number of extents is low (I got 9 for an 8GB file).  And, because of the chattr +C, they stay that way!
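A minimal sketch of the pre-creation step above (the path and the small size are illustrative stand-ins; a real image would live under /var/lib/libvirt/images and be sized in GB):

```shell
# Sketch: pre-create a fully allocated raw image by writing real zeros.
# IMG is a hypothetical temp path; in practice it would sit on the
# chattr +C btrfs subvolume, e.g. /var/lib/libvirt/images/xxx.img.
IMG="${TMPDIR:-/tmp}/guest.img"
dd if=/dev/zero of="$IMG" bs=1M count=8 status=none   # 8 MiB for brevity

# Count extents; a low single-digit number means little fragmentation.
# (filefrag only reports extents on filesystems that support FIEMAP.)
command -v filefrag >/dev/null && filefrag "$IMG"
```

Because every block is physically written, btrfs allocates the file in a few large extents up front, and with No_CoW set it is rewritten in place afterward.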

Version-Release number of selected component (if applicable):
Fedora 20-Beta, virt-manager-0.10.0-5.git1ffcc0cc.fc20.noarch

How reproducible:

One additional point: the "default" disk format has changed from "raw" to "qcow2."  While this might be the right choice for most cases, why not offer an option to select the format?  Perhaps one of those options could be raw-nonsparse, or just nonsparse, which would imply raw.

With this, running kvm on an all-btrfs system is practical.  Without the capability it can still be done, but with more manual effort.
Comment 1 Gene Czarcinski 2013-11-16 16:13:32 EST
OK, I have figured out how to do it, but I would like to leave this open.  It should be easier to do this directly.

The way I did it was to create the VM, then delete the qcow2 disk that got created and create a new virtio disk, which came up as a "raw" disk.  Did an install of F20-Beta and it was pretty good.

First, let me say that chattr +C /var/lib/libvirt/images had been run some time ago.
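For reference, a sketch of setting and checking the No_CoW attribute (the directory here is a throwaway stand-in for /var/lib/libvirt/images; note that +C only affects files created after the flag is set, and only on btrfs):

```shell
# Sketch: mark a directory No_CoW so files created in it skip
# btrfs copy-on-write.  DIR is a hypothetical demo path.
DIR="${TMPDIR:-/tmp}/images-demo"
mkdir -p "$DIR"

# +C sets the No_CoW attribute; on non-btrfs filesystems it errors out.
chattr +C "$DIR" 2>/dev/null || echo "No_CoW not supported here (not btrfs?)"

# Verify: a 'C' should appear in the attribute column on btrfs.
lsattr -d "$DIR" 2>/dev/null || true
```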

The new file has 15 extents for an 8GB file.  Not quite as good as the 7 extents I got using dd if=/dev/zero of=/var/lib/libvirt/images/... but more than good enough.

So, the capability is there.  The small problem is to be able to specify it up front and I believe the way to do that is to be able to specify the format to be used.

BTW, nothing critical here about making it in for F20.

If you can point me at where I might look in the code, I might take a shot at coming up with a patch.  If you folks feel strongly about how such an option should be specified, please speak up, because I do not care.  I am just interested in the functionality and am willing to invest some time to make it happen.
Comment 2 Cole Robinson 2013-11-16 18:36:22 EST
Also Gene's thread here:


As I am about to mention in that thread, the virt-manager 'new vm' wizard does have an option to 'fully allocate' the new disk image, but it just asks libvirt to do that for us.  Libvirt uses fallocate, which might not be a good idea on btrfs.  So reassigning to libvirt.
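To illustrate the distinction being made (paths and sizes here are made up for brevity): fallocate reserves space without writing it, while dd writes actual zeros, which is why fallocate-based preallocation is suspected to behave differently on btrfs than the dd recipe that produced the low extent counts above. Assuming util-linux's fallocate(1) is available:

```shell
# Illustrative comparison of the two preallocation strategies.
A="${TMPDIR:-/tmp}/falloc.img"   # allocate-only, roughly what libvirt does
B="${TMPDIR:-/tmp}/zeros.img"    # real zeros, as in the dd recipe

fallocate -l 8M "$A"
dd if=/dev/zero of="$B" bs=1M count=8 status=none

# Both report the same size, but only B had its blocks written:
stat -c '%n %s bytes' "$A" "$B"
```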
Comment 3 Gene Czarcinski 2013-11-17 10:56:07 EST
I did not realize that there was a preference where I could set the storage format used by the VM creation wizard for the disk image.  Changing the value from "System default (qcow2)" to "raw" makes things work as I want on my btrfs system.  This should be documented someplace, but I am not sure where that would be.  I will put something up on the libvirt-users mailing list.
