Description of problem:

vdsm uses "qemu-img convert" to move/copy images from one storage domain to another. The convert is done without the "compress" option, so compressed clusters get uncompressed during the operation and the destination image ends up larger than the source. However, RHV uses the output of "qemu-img info" to create the LV, which is not large enough for the uncompressed image, so "qemu-img convert" fails with the error "No space left on device".

Used the cfme-rhvm qcow2 image to test:

===
qemu-img check cfme-rhevm-5.9.3.4-1.x86_64.qcow2
No errors were found on the image.
51654/655360 = 7.88% allocated, 96.31% fragmented, 95.35% compressed clusters
Image end offset: 1200685056

qemu-img info cfme-rhevm-5.9.3.4-1.x86_64.qcow2
image: cfme-rhevm-5.9.3.4-1.x86_64.qcow2
file format: qcow2
virtual size: 40G (42949672960 bytes)
disk size: 1.1G
cluster_size: 65536
Format specific information:
    compat: 0.10
    refcount bits: 16
===

Uploaded the disk to RHV-M and tried both LSM and offline disk migration; both fail during the "qemu-img convert" phase.

As per bug 1470435, this was fixed in "copyCollapsed", which is used during VM clone, template creation, etc. It uses "qemu-img measure" to calculate the destination image size and creates the LV with that size. However, sdm_copy_data does not use "qemu-img measure" to calculate the destination image size; it still uses "imageInitialSizeInBytes" to create the destination LV, which causes this issue.

Version-Release number of selected component (if applicable):
rhvm-4.3.3
vdsm-4.30.13-4.el7ev.x86_64

How reproducible:
100%

Steps to Reproduce:
1. Upload a compressed image to a block storage domain.
2. Do LSM or offline disk migration to another block storage domain.

Both will fail with the error "No space left on device".
Actual results:
Storage migration of a compressed image fails with the error "No space left on device" on the block storage domain.

Expected results:
Storage migration of a compressed image should work.

Additional info:
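The fix direction described above can be sketched as follows. This is a hypothetical helper, not vdsm's actual code: it sizes the destination volume from the output of "qemu-img measure" (whose "required" field already accounts for clusters that will be decompressed during convert), instead of from the source image's allocated size.

```python
import json
import subprocess


def parse_measure_output(json_text):
    # "qemu-img measure --output json" reports "required" (bytes the
    # destination needs) and "fully-allocated"; "required" includes the
    # space taken by clusters that convert will write out uncompressed.
    return json.loads(json_text)["required"]


def measure_required_size(image_path, out_format="qcow2"):
    # Hypothetical sketch: ask qemu-img how big the converted image will
    # be, then use that value to create the destination LV.
    out = subprocess.check_output(
        ["qemu-img", "measure", "--output", "json", "-O", out_format, image_path]
    )
    return parse_measure_output(out)
```

Sizing the LV this way avoids the undersized destination even when the source is almost entirely compressed clusters, as in the cfme image above.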
Not sure we support compressed images, did that scenario ever work in the past?
(In reply to Tal Nisan from comment #2)
> Not sure we support compressed images, did that scenario used to work in the past?

I don't think it worked in the past. But we provide appliance images such as CloudForms in compressed format, and the documentation has instructions to upload those to RHV. A similar issue was fixed in bug 1470435 using "qemu-img measure".
Daniel, what do you think?
(In reply to Tal Nisan from comment #4)
> Daniel, what do you think?

AFAIK we don't support it for image chains, i.e. we would need to add support for the compressed format on image chains.
(In reply to Tal Nisan from comment #2)
> Not sure we support compressed images, did that scenario used to work in the past?

According to BZ 1470435 we do support them, since that BZ was fixed with a note implying support. Also, based on BZ 1470435 comment 15 and others, it seems there is no such thing as a compressed qcow image as such.
Probably the very same thing here, but using images from glance: BZ1727678
Verified, according to those steps:

1. Downloaded the "CFME 5.11.3 Red Hat Virtual Appliance (qcow)" image from [1] to a host.
2. Uploaded the qcow image to an iSCSI SD using upload_disk.py [2].
3. Attached the image/disk to a VM created from a rhel8 template.
4. Performed LSM + cold disk migration.

Actual result:
Both migrations succeeded; neither failed with the error "No space left on device".

Tested on:
ovirt-engine-4.4.0-0.25.master.el8ev.noarch
vdsm-4.40.5-1.el8ev.x86_64

[1]: https://access.redhat.com/downloads/content/167/ver=5.0/rhel---8/5.0/x86_64/product-software
[2]: python3 upload_disk.py cfme-rhevm-5.11.3.1-1.x86_64.qcow2 --engine-url https://storage-ge-09.XXX.XXX --username XXX --disk-format qcow2 --disk-sparse --sd-name iscsi_0 -c /root/ca.pem --insecure
https://github.com/oVirt/ovirt-engine-sdk/blob/master/sdk/examples/upload_disk.py
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (RHV RHEL Host (ovirt-host) 4.4), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHEA-2020:3246
The needinfo request[s] on this closed bug have been removed as they have been unresolved for 1000 days