Description of problem:

The difference in size between allocating metadata upfront or not is relatively insignificant - see for 1TB:

[root@ykaul-mini tmp]# qemu-img create -f qcow2 one_tb.qcow2 1T
Formatting 'one_tb.qcow2', fmt=qcow2 size=1099511627776 encryption=off cluster_size=65536 lazy_refcounts=off refcount_bits=16
[root@ykaul-mini tmp]# qemu-img create -f qcow2 -o preallocation=metadata one_tb_w_metadata.qcow2 1T
Formatting 'one_tb_w_metadata.qcow2', fmt=qcow2 size=1099511627776 encryption=off cluster_size=65536 preallocation=metadata lazy_refcounts=off refcount_bits=16
[root@ykaul-mini tmp]# du -ch *.qcow2
208K    one_tb.qcow2
161M    one_tb_w_metadata.qcow2

That's ~160MB of metadata for a 1TB disk. And for 80G:

[root@ykaul-mini tmp]# du -ch 80*.qcow2
196K    80G.qcow2
13M     80G_w_metadata.qcow2
13M     total

I'm not sure how much of a performance boost it gives - but it should give some. Even if it's small, I feel it's worth it (together with qcow2v3).
Is this a dup of BZ #1391859?
(In reply to Yaniv Dary from comment #1)
> Is this a dup of BZ #1391859?

It was - now I've split this one for qcow2, and the other (which is actually probably more important!) for raw with fallocate.
Is it a low-hanging item to introduce this support for both? Should this also be targeted to 4.2?
(In reply to Yaniv Lavi from comment #3)
> Is it a low-hanging item to introduce this support for both?
> Should this also be targeted to 4.2?

I wouldn't for the time being. I don't know where we should take this metadata into account, for example. The performance benefits are unclear.
This request has been proposed for two releases. This is invalid flag usage. The ovirt-future release flag has been cleared. If you wish to change the release flag, you must clear one release flag and then set the other release flag to ?.
(In reply to Red Hat Bugzilla Rules Engine from comment #6)
> This request has been proposed for two releases. This is invalid flag usage.
> The ovirt-future release flag has been cleared. If you wish to change the
> release flag, you must clear one release flag and then set the other release
> flag to ?.

I don't see the point in pushing it to 4.4. Either we are convinced it's useful, and then we should prioritize it, or we should close it as WONTFIX. Specifically:

0. You can't use it on top of a backing store (qemu limitation: "Backing file and preallocation cannot be used at the same time") - see the example below.
1. In file-based storage it doesn't matter - we mostly use raw-sparse.
2. In block-based storage it's not practical for a VM from a template (thin provisioning), due to 0 above.
3. So I think it's valuable for thin provisioning - when creating empty thin-provisioned disks.

It shouldn't be hard to figure out whether this is doable or not - let's try to decide and act on it.
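To make point 0 concrete, a minimal sketch of the qemu-img invocation that hits this limitation (the image names here are made up, and the exact error wording/prefix may vary between qemu versions):

$ qemu-img create -f qcow2 base.qcow2 40G
$ qemu-img create -f qcow2 -b base.qcow2 -o preallocation=metadata top.qcow2 40G
# expected to fail with "Backing file and preallocation cannot be used at the same time"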
I agree with all of the above, but Yaniv Lavi asked not to take it into 4.3 for now, as we don't know the 4.3 content yet in terms of capacity. ylavi, as I've said in the bug scrub, I'm all in for having this in 4.3.
We will discuss it as part of the planning meetings.
Kevin, can you explain the required size on storage when using:

  qemu-img create -f qcow2 -o preallocation=metadata

I guess we allocate the L1 and L2 tables, so we need one L2 table for an image up to 16T, and 2 tables for an image up to 32T, or something like this, right?

When we allocate multiple L2 tables upfront, are they allocated at the start of the image?

The context is creating a qcow2 image on a tiny logical volume and extending the logical volume as needed. If we can get all the metadata of the image in the first 1G when creating an image, this sounds like a useful optimization.
This requires measurements before we change anything. We need to compare performance with preallocated metadata and without.
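For reference, one possible way to get a first comparison without involving a full VM, assuming a qemu-img new enough to have the bench subcommand (image names, sizes and request counts here are arbitrary):

$ qemu-img create -f qcow2 plain.qcow2 100G
$ qemu-img create -f qcow2 -o preallocation=metadata prealloc.qcow2 100G
# write into previously unallocated clusters, so metadata updates are exercised
$ qemu-img bench -w -f qcow2 -t none -c 100000 -s 64k plain.qcow2
$ qemu-img bench -w -f qcow2 -t none -c 100000 -s 64k prealloc.qcow2

A comparison from inside a guest (e.g. fio on the attached disk) would of course be more representative.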
(In reply to Nir Soffer from comment #10)
> Kevin, can you explain the required size on storage when using
>
>   qemu-img create -f qcow2 -o preallocation=metadata
>
> I guess we allocate the L1 and L2 tables, so we need one L2 table for an image
> up to 16T, and 2 tables for an image up to 32T, or something like this, right?

With 64k clusters, it's one L2 table per 512 MB. You also get one refcount block per 2 GB with 64k clusters and 16-bit refcounts (the default). The L1 table and refcount table also grow as the number of L2 tables and refcount blocks grows, but of course those stay smaller.

In the end, you would best ask QEMU itself, for example:

$ qemu-img measure -O qcow2 -o preallocation=metadata --size 1T
required size: 168034304
fully allocated size: 1099679662080

> When we allocate multiple L2 tables upfront, are they allocated at the
> start of the image?
>
> The context is creating a qcow2 image on a tiny logical volume and extending
> the logical volume as needed. If we can get all the metadata of the image in the
> first 1G when creating an image, this sounds like a useful optimization.

No, not everything is located at the start of the image. You need the full file size upfront for any kind of preallocation, even though most of it stays sparse with preallocation=metadata.
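For the record, a rough back-of-the-envelope estimate matching these numbers; this is only a sketch assuming the defaults mentioned above (64 KiB clusters, 16-bit refcounts, 8-byte L2 entries) and ignoring the header, L1 table and refcount table, which stay comparatively tiny:

size=$((1024 ** 4))                      # 1 TiB virtual size
cluster=$((64 * 1024))                   # 64 KiB cluster
l2_coverage=$((cluster / 8 * cluster))   # one L2 table covers 512 MiB
rb_coverage=$((cluster / 2 * cluster))   # one refcount block covers 2 GiB
l2_tables=$(( (size + l2_coverage - 1) / l2_coverage ))
rc_blocks=$(( (size + rb_coverage - 1) / rb_coverage ))
echo "L2 tables: $l2_tables (~$((l2_tables * cluster / 1024 / 1024)) MiB)"
echo "refcount blocks: $rc_blocks (~$((rc_blocks * cluster / 1024 / 1024)) MiB)"

For a 1T image this gives 2048 L2 tables (~128 MiB) plus 512 refcount blocks (~32 MiB), i.e. about 160 MiB, in line with the du and qemu-img measure output above.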
Based on Kevin's response, we can use metadata preallocation only for file-based storage (both sparse and preallocated) or for preallocated block storage (needed for incremental backup). We still need to test performance before we do this work.
But based on comment 7, preallocation is not supported with a backing file, and we don't create new qcow2 volumes on file storage (we use raw sparse), so this can help only with disks created from the SDK when the user selected format=cow sparse=true, or with preallocated qcow2 disks created for incremental backup.
This bug has not been marked as blocker for oVirt 4.3.0. Since we are releasing it tomorrow, January 29th, this bug has been re-targeted to 4.3.1.
This bug didn't get any attention for a while; we didn't have the capacity to make any progress. If you deeply care about it or want to work on it, please assign/target accordingly.
OK, closing. Please reopen if this is still relevant or you want to work on it.