Currently we support zero requests, but they are implemented naively, writing actual zeros to storage. This is very slow and may allocate space on storage, even on sparse files. We want to use the fastest method for zeroing a range on storage: allocate space on preallocated images, and deallocate zeroed ranges on sparse images. That is, add a sparse option to directio.Zero, deallocating zeroed byte ranges instead of allocating space. This option is set for zero operations on a sparse ticket. The sparse option has no effect on block storage.

Note: to enable this feature, the engine must add "sparse": true to the ticket for sparse disks.
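To illustrate the difference between the two behaviors, here is a minimal sketch of zeroing a range on file storage. This is not the actual directio.Zero code; zero_range() and the 1 MiB fallback buffer are illustrative assumptions. When sparse is requested it punches a hole with fallocate(2), otherwise (or when punching holes is unsupported) it falls back to writing zeros:

    import ctypes
    import ctypes.util
    import os

    # Flags from <linux/falloc.h>.
    FALLOC_FL_KEEP_SIZE = 0x01
    FALLOC_FL_PUNCH_HOLE = 0x02

    libc = ctypes.CDLL(ctypes.util.find_library("c"), use_errno=True)
    libc.fallocate.argtypes = [
        ctypes.c_int, ctypes.c_int, ctypes.c_int64, ctypes.c_int64]

    def zero_range(fd, offset, length, sparse=False):
        # Hypothetical helper, not the real directio.Zero implementation.
        if sparse:
            # Deallocate the range; subsequent reads return zeros.
            mode = FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE
            if libc.fallocate(fd, mode, offset, length) == 0:
                return
            # Fall back to writing zeros if the file system does not
            # support punching holes (e.g. NFS < 4.2).
        buf = bytearray(1024 * 1024)
        while length > 0:
            chunk = min(length, len(buf))
            n = os.pwrite(fd, memoryview(buf)[:chunk], offset)
            offset += n
            length -= n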
Target release should be set once a package build is known to fix an issue. Since this bug was not modified, the target version has been reset. Please use the target milestone to plan a fix for an oVirt release.
Since the first version of imageio, we had no support for sparse images. Uploading a sparse file used to convert holes into actual zeros that were sent over the wire and written to storage, creating fully allocated images.

In 1.3.0, we added the zero API (see PATCH/zero):
http://ovirt.github.io/ovirt-imageio/random-io.html#patch

This avoids sending zeros on the wire, but it was implemented by writing actual zeros to storage, also creating fully allocated images.

In 1.4.3, we re-implemented the zero API using the proper APIs, supporting creation of sparse images when using the zero API. There is a new "sparse" property in the ticket:

    {
        "ops": ["write"],
        "size": 6442450944,
        "timeout": 3000,
        "sparse": true,
        "url": "file:///path/to/image",
        "uuid": "test"
    }

If sparse is true, using the PATCH/zero API will use fallocate() to punch a hole in the byte range.

Here is an example upload using the imageio example upload script:
https://raw.githubusercontent.com/oVirt/ovirt-imageio/master/examples/upload

    ./upload fedora-27.img https://server:54322/images/test

    # qemu-img info fedora-27.img
    image: fedora-27.img
    file format: raw
    virtual size: 6.0G (6442450944 bytes)
    disk size: 1.0G

    # qemu-img info /path/to/image
    image: /path/to/image
    file format: raw
    virtual size: 6.0G (6442450944 bytes)
    disk size: 1.0G

Sparse upload is supported with:
- NFS 4.2
- GlusterFS (tested with 3.8.4)

File systems that do not support sparseness fall back to manual zero writing, creating fully allocated images:
- NFS 4.1
- NFS 3

This change will be effective when the engine marks tickets as sparse, see bug 1615124. This bug is about adding the capability to the imageio daemon.
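For reference, here is a minimal client sketch of sending a PATCH/zero request against a ticket. The body fields (op, offset, size, flush) are assumed from the random-io documentation linked above and should be verified against that page; the host, port, ticket id, and range are placeholders, and certificate verification is disabled only for a test setup:

    import json
    import ssl
    from http import client

    HOST = "server"       # hypothetical daemon host
    PORT = 54322
    TICKET = "test"

    body = json.dumps({
        "op": "zero",          # zero the range without sending data
        "offset": 0,           # start of the range, in bytes
        "size": 1024 * 1024,   # length of the range, in bytes
        "flush": False,        # flushing can be deferred to a later request
    }).encode("utf-8")

    # Test-only TLS context; real clients should verify the server certificate.
    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE

    conn = client.HTTPSConnection(HOST, PORT, context=ctx)
    conn.request("PATCH", "/images/" + TICKET, body=body,
                 headers={"Content-Type": "application/json"})
    resp = conn.getresponse()
    print(resp.status, resp.reason)
    conn.close()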
I forgot to mention the status on block storage. Theoretically, we can punch holes in block storage using fallocate(), but this does not mix well with the feature to wipe disks before deleting them. We may be able to discard areas on block storage during zero only if the user did not select the wipe-after-delete feature, and only if the kernel and block driver support FALLOC_FL_PUNCH_HOLE. Since this is not well supported yet, we do not support it for now. I think we should open an RFE for this; we can look at it for 4.3.
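As an illustration of the kernel/driver support check mentioned above, here is a small sketch (not something imageio does today) that probes whether punching a hole works on a given device or file by trying fallocate() and checking for EOPNOTSUPP. Note that the probe is destructive for the probed range:

    import ctypes
    import ctypes.util
    import errno
    import os

    FALLOC_FL_KEEP_SIZE = 0x01
    FALLOC_FL_PUNCH_HOLE = 0x02

    libc = ctypes.CDLL(ctypes.util.find_library("c"), use_errno=True)
    libc.fallocate.argtypes = [
        ctypes.c_int, ctypes.c_int, ctypes.c_int64, ctypes.c_int64]

    def can_punch_hole(path, length=4096):
        # Destructive probe: the first `length` bytes are discarded
        # (they will read as zeros), so use only on scratch devices.
        fd = os.open(path, os.O_RDWR)
        try:
            mode = FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE
            if libc.fallocate(fd, mode, 0, length) == 0:
                return True
            err = ctypes.get_errno()
            if err == errno.EOPNOTSUPP:
                return False
            raise OSError(err, os.strerror(err), path)
        finally:
            os.close(fd)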
We're releasing 4.2.6 RC2 today, including v1.4.3, which references this bug. Can you please check the status of this bug?
(In reply to Sandro Bonazzola from comment #5)
The bug should be fixed in 1.4.3, but it has not been tested by QE yet.
We have a downstream build, moving to ON_QA
How to test this feature.

We have two ways to test: using the imageio example script, and using virt-v2v. Uploads from the UI or using the current upload_disk.py SDK example do not yet support sparseness.

## Testing using the imageio example upload script

1. Create an empty raw image:

    truncate -s 100g empty.img

2. Create a 100g raw volume in engine on an NFS 4.2/GlusterFS storage domain.

3. Create a ticket json (note the "sparse": true flag, required for punching holes):

    cat ticket.json
    {
        "uuid": "test",
        "size": 107374182400,
        "url": "file:///rhev/data-center/mnt/server:_path/sd_id/images/img_id/vol_id",
        "timeout": 3000,
        "sparse": true,
        "ops": ["write"]
    }

   Note: "url" should contain the path to the volume on the NFS/GlusterFS storage domain.

4. Install the ticket:

    curl --unix-socket /run/vdsm/ovirt-imageio-daemon.sock \
        -X PUT \
        --upload-file ticket.json \
        http://localhost/tickets/test

5. Upload an image:

    examples/upload empty.img https://server.address:54322/images/test

Expected results:
- The upload should be very quick, maybe a few seconds.
- The final image actual size should be 0; check using "ls -lhs" or "du".

## Testing using virt-v2v

1. Create a sparse test image:

    virt-builder fedora-27

   This creates a 6g image with 1.7G of data and 4.3G of unallocated space.

2. Import the image as a new VM using virt-v2v:

    virt-v2v \
        -i disk fedora-27.img \
        -o rhv-upload \
        -oc https://engine.address/ovirt-engine/api \
        -os storage-domain-name \
        -op password \
        -of raw \
        -oa sparse \
        -oo rhv-cafile=ca.pem \
        -oo rhv-cluster=cluster-name \
        -oo rhv-direct=true

   You may need to add special repos or install additional packages to get this working; please consult the scale team for the details. You need to have an NFS 4.2 or GlusterFS storage domain in the DC where cluster "cluster-name" exists.

Expected results:
- fedora-27.img and the uploaded disk should have similar actual size (~1G). Note that the images are not identical, since virt-v2v does some modifications to the image before uploading it, but both of them should have similar actual size.
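As a quick way to check the expected results above, here is a small helper sketch (an illustrative script, not part of imageio; the name is hypothetical) that prints the virtual size and the actually allocated size of an image file, similar to what "ls -lhs" or "qemu-img info" reports:

    import os
    import sys

    def report(path):
        # st_size is the virtual size; st_blocks is allocated 512-byte units.
        st = os.stat(path)
        virtual = st.st_size
        actual = st.st_blocks * 512
        print("%s: virtual %.1f GiB, actual %.1f GiB"
              % (path, virtual / 1024**3, actual / 1024**3))

    for path in sys.argv[1:]:
        report(path)

Run it on both the source image and the uploaded volume, for example:

    python3 sparse_size.py empty.img /rhev/data-center/mnt/server:_path/sd_id/images/img_id/vol_id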
Note for the virt-v2v test: we can create a bigger test image like this:

    virt-builder fedora-27 --size 100G

This creates a 100G Fedora 27 image. The image's actual size should still be around 1G.
Note: to verify this with virt-v2v, you must use an engine that sends the "sparse" property; see bug 1615124.
I have tested the scenario on the following versions with NFS 4.2:

    ovirt-imageio-common-1.4.4-0.el7ev.x86_64
    ovirt-imageio-daemon-1.4.4-0.el7ev.noarch
    virt-v2v-1.36.10-6.16.rhvpreview.el7ev.x86_64

As required by the expected results, fedora-27.img and the uploaded disk have a similar actual size (~1G) vs. the virtual size, which is ~6G.