Description of problem:
Before importing the KVM VM, the virtual size is larger than the disk size in the qemu-img info output, but after importing the VM into RHVM the virtual size shows as less than the actual size.

Version-Release number of selected component (if applicable):
RHV 4.2.3

How reproducible:
We were able to reproduce the issue in our environment.

Before importing the VM, this is the qemu-img info output:

# qemu-img info RHV4.2_HE1
image: RHV4.2_HE1
file format: qcow2
virtual size: 100G (107374182400 bytes)
disk size: 5.1G
cluster_size: 65536
Format specific information:
    compat: 1.1
    lazy refcounts: true
    refcount bits: 16
    corrupt: false

After importing the VM into RHVM, we can see that the virtual size is smaller than the actual size:

$ qemu-img info /rhev/data-center/mnt/dell-pe-c5220-12.gsslab.pnq.redhat.com\:_RHV41_HE__DATA/21ae95db-0f97-4c06-bb60-e3ba541400f0/images/9f874f8d-a210-45e0-93f8-ad9711fe86cd/c4edafba-e1a0-44c0-b0ba-61c11a147913
image: /rhev/data-center/mnt/dell-pe-c5220-12.gsslab.pnq.redhat.com:_RHV41_HE__DATA/21ae95db-0f97-4c06-bb60-e3ba541400f0/images/9f874f8d-a210-45e0-93f8-ad9711fe86cd/c4edafba-e1a0-44c0-b0ba-61c11a147913
file format: qcow2
virtual size: 100G (107374182400 bytes)
disk size: 102G
cluster_size: 65536
Format specific information:
    compat: 1.1
    lazy refcounts: true
    refcount bits: 16
    corrupt: false

Steps to Reproduce:
1. Shut down the VM in KVM.
2. Check the virtual size and disk size using qemu-img info.
3. Add KVM as an external provider in RHVM.
4. Import the VM from KVM.
5. Check the Virtual Size and Actual Size in the RHVM portal under the Storage Domains sub-tab's Disks.
6. Also check qemu-img info for the imported VM.
So we've converted a thin provisioned disk to an allocated one - and it's still qcow2 and can grow if needed. Besides not being very optimal, what's the issue here?
The problem is that after import, the virtual disk size is less than the physical size. In addition, sparseness does not work for that disk, so after import, virtual machines on RHV occupy much more capacity than on the bare KVM host.
I think the issue is the way we import from KVM. We copy the image from libvirt, and libvirt does not support sparseness, so we end up with a much larger file full of zeros. I also fixed the title: the virtual size is not the issue, since it did not change after the import. The actual size is the issue. Since this is a virt flow, someone from the virt team should look at this.
Hi Richard, Jarda, can you comment on c#15? Looks like we need to do some changes here if we cannot keep the sparseness during this process. Thanks! Martin
(In reply to Nir Soffer from comment #15) > I think the issue is the way we import from KVM. We copy the image from > libvirt, > and libvirt does not support sparseness, so we and with a much larger file > full > of zeros. What does "copy the image from libvirt" mean in practice here? Do you mean something like the virDomainBlockPeek API? Anyway I suggest running ‘virt-sparsify --in-place’ on the result to recover sparseness.
(In reply to Richard W.M. Jones from comment #17)

We use virStreamNew or virDomainBlockPeek (not sure why we use both). In both cases libvirt does not have a way to get only the data, so we upload the entire image to storage using the imageio Receive API, which is the backend for PUT requests.

Maybe we can use virt-sparsify, but I think a better way to do this is to support zero detection during upload, so you get a sparse image. We are considering this for a future version of imageio, see bug 1616436, but for the long term I think we should use NBD for this flow.

It can work like this:

1. Libvirt exposes a volume using NBD with TLS-PSK.
2. Vdsm runs qemu-img convert, converting the image from the NBD server on the libvirt side to an oVirt volume on the hypervisor.

If we want to support conversion by 3rd party code not running as vdsm, we can change this to:

2. Vdsm starts qemu-nbd with TLS-PSK, exposing the oVirt volume using the NBD protocol.
3. The 3rd party runs qemu-img convert, converting the image from the NBD server on the libvirt side to the NBD server on the hypervisor side.

The second option will be supported in 4.3, since we need it for backup purposes.

I don't think we should invest time in any short-term solution that we will want to drop later when we have NBD support. Basically this should be a 4.3 RFE.

Eric, do you think this plan is feasible on the libvirt side?
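The zero-detection idea mentioned above can be sketched as a pure function, independent of imageio itself (this is an assumption about how such detection could work, not imageio's actual code): scan the upload buffer in fixed-size blocks and coalesce runs of zero and data blocks into extents, so zero runs can be skipped or punched as holes instead of being written out.

```python
BLOCK_SIZE = 64 * 1024  # hypothetical detection granularity


def find_extents(buf, block_size=BLOCK_SIZE):
    """Split buf into (offset, length, is_zero) extents, merging
    adjacent blocks of the same kind."""
    extents = []
    offset = 0
    while offset < len(buf):
        block = buf[offset:offset + block_size]
        # A block is a "zero" block if every byte in it is NUL.
        is_zero = block.count(0) == len(block)
        if extents and extents[-1][2] == is_zero:
            # Same kind as the previous extent: extend it.
            prev_off, prev_len, _ = extents[-1]
            extents[-1] = (prev_off, prev_len + len(block), is_zero)
        else:
            extents.append((offset, len(block), is_zero))
        offset += len(block)
    return extents
```

For example, a buffer of 128 KiB of zeros followed by 64 KiB of data yields two extents: one zero extent of 131072 bytes at offset 0, and one data extent of 65536 bytes at offset 131072. An uploader would then transmit only the data extents.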
This issue is currently assigned to Sprint 4 for the oVirt team. Can we narrow down which option will allow us to move forward and meet an MVP to resolve the sparseness issue with libvirt? This will at least provide some estimate of how much of this issue can be addressed in Sprint 4, even if it just amounts to planning.
(In reply to Nir Soffer from comment #18) > (In reply to Richard W.M. Jones from comment #17) > We use VirtStreamNew or virtDomainBlockPeek (not sure why we use both). In > both > case libvirt does not have a way to get only the data, so we upload the > entire > image to storage, using imageio Receive api, which is the backend for PUT > requests. > > Maybe we can use virt-sparsify, I think a better way to do this is to > support zero > detection during upload, so you will get sparse image. > > We consider this for future version of imageio, see bug 1616436, but I think > for > long term we should use NBD for this flow. > > It can work like this: > > 1. Libvirt expose a volume using NBD with TLS-PSK That would be a new libvirt feature, but is one that makes sense (basically, a way to make libvirt expose storage volumes via qemu-nbd). > 2. Vdsm run qemu-img convert, converting image from NBD server on libvirt > side to > oVirt volume on the hypervisor. I thought work was recently done on libvirt to support sparse volume copies - but it's been a while since I paid close attention to what is or is not capable of sparse transfers on the libvirt side, so you'll need input from Michal, as author of sparse streams. If libvirt sparse streams already solve the problem, then why do we need an additional solution of an NBD export? > > If we want to support conversion by 3rd party code not running as vdsm, we > can > change this to: > > 2. Vdsm starts qemu-nbd with TLS-PSK exposing the oVirt volume using NBD > protocol > 3. 3rd party run qemu-img convert, converting the image from NBD server on > libvirt > side to NBD server on the hypervisor side. > > The second option will be supported in 4.3, since we need it for backup > purposes. > > I don't think we should invest time on any short term solution that we will > want > to drop later when we have NBD support. > > Basically this should be 4.3 RFE. > > Eric, do you think this plan is feasible on libvirt side? 
Having libvirt expose a storage volume via an NBD export may be reasonable, but it is a new feature orthogonal to anything that sparse virStream functions can already perform.
(In reply to Eric Blake from comment #20) > (In reply to Nir Soffer from comment #18) > > (In reply to Richard W.M. Jones from comment #17) > > We use VirtStreamNew or virtDomainBlockPeek (not sure why we use both). In > > both > > case libvirt does not have a way to get only the data, so we upload the > > entire > > image to storage, using imageio Receive api, which is the backend for PUT > > requests. > > > > Maybe we can use virt-sparsify, I think a better way to do this is to > > support zero > > detection during upload, so you will get sparse image. > > > > We consider this for future version of imageio, see bug 1616436, but I think > > for > > long term we should use NBD for this flow. > > > > It can work like this: > > > > 1. Libvirt expose a volume using NBD with TLS-PSK > > That would be a new libvirt feature, but is one that makes sense (basically, > a way to make libvirt expose storage volumes via qemu-nbd). Agreed, this is a new feature and looks to me like a big cannon, i.e. big feature to fix this 'small' bug. > > > 2. Vdsm run qemu-img convert, converting image from NBD server on libvirt > > side to > > oVirt volume on the hypervisor. > > I thought work was recently done on libvirt to support sparse volume copies > - but it's been a while since I paid close attention to what is or is not > capable of sparse transfers on the libvirt side, so you'll need input from > Michal, as author of sparse streams. If libvirt sparse streams already > solve the problem, then why do we need an additional solution of an NBD > export? > Yes, libvirt supports sparse streams since v3.4.0 (merged in May 2017). 
For instance:

# virsh -c qemu+tcp://burns/system vol-download --sparse --pool default --vol sparse.img --file sparse.img
# ls -lhs sparse.img
4.0K -rw-r--r-- 1 root root 21G Aug 21 15:33 sparse.img

At the Python level, you want to be looking at libvirt.VIR_STORAGE_VOL_DOWNLOAD_SPARSE_STREAM and this example script in general:

https://libvirt.org/git/?p=libvirt-python.git;a=blob;f=examples/sparsestream.py;h=e960c408e5a9379532daa4052f04873ca60581d8;hb=HEAD
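The sink side of such a sparse download can be sketched with two callbacks in the style of the libvirt-python sparsestream.py example linked above: one writes data sections, the other recreates holes by seeking instead of writing. This sketch covers only the local file handling (the stream argument is unused here), so it can be exercised without a libvirt connection; with libvirt, callbacks of this shape would be driven by the sparse stream receive loop.

```python
import os


def recv_handler(stream, data, fd):
    # Called for each data section: write the bytes at the current offset.
    return os.write(fd, data)


def hole_handler(stream, length, fd):
    # Called for each hole: skip forward without writing, leaving a hole
    # in the local file, and extend the file in case the hole is at the end.
    cur = os.lseek(fd, length, os.SEEK_CUR)
    os.ftruncate(fd, cur)
    return 0
```

Because holes are seeked over rather than written, the resulting local file stays sparse: its apparent size matches the source volume while its allocated size covers only the data sections.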
Steven, do you want to look at the libvirt sparse APIs? See comment 21. Using the libvirt sparse API is only one part of the solution; the other part is using the imageio random I/O API to write data ranges and zero holes. For trying out stuff you can check how the directio module is used in: https://github.com/oVirt/ovirt-imageio/blob/37001ad65b10a00748442e839127151f4be8ac5d/daemon/ovirt_imageio_daemon/server.py#L218 Note that directio is not a public API and vdsm should not use this code as is, since we may change this code without notice. We can work on creating a public API for 4.3 if we think that this is the right way.
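Putting the two halves together could look like this sketch: translate the extents coming out of the sparse stream into the requests an uploader would issue against imageio's random I/O style API - a ranged write for data, and a zero request for holes. The helper name, URL, and exact request shapes here are illustrative assumptions, not the actual imageio wire format or any public vdsm API.

```python
def plan_requests(extents, url="/images/ticket-uuid"):
    """Map (offset, length, is_zero) extents to illustrative HTTP requests:
    PUT with a Content-Range header for data extents, and a zero operation
    for holes. Returns (method, url, headers_or_body) tuples."""
    requests = []
    for offset, length, is_zero in extents:
        if is_zero:
            # Zero the hole server-side instead of transmitting NUL bytes.
            requests.append(("PATCH", url,
                             {"op": "zero", "offset": offset, "size": length}))
        else:
            # Transmit only the real data, addressed by byte range.
            end = offset + length - 1
            requests.append(("PUT", url,
                             {"Content-Range": f"bytes {offset}-{end}/*"}))
    return requests
```

With a plan like this, only the data extents cross the wire, which is what preserves sparseness on the destination volume.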
1. To test this feature, one needs to first set up a KVM host. This may be done via Virtual Machine Manager, which can connect to a remote machine for saving KVM VMs. One can check the volume and disk size of the VM image file via the following command:

qemu-img info <image file>

One can find the image file here: /var/lib/libvirt/images

A sample output is as follows:

[root@sla-leonard ~]# qemu-img info /var/lib/libvirt/images/generic.qcow2
image: /var/lib/libvirt/images/generic.qcow2
file format: qcow2
virtual size: 20G (21474836480 bytes)
disk size: 3.3M
cluster_size: 65536
Format specific information:
    compat: 1.1
    lazy refcounts: true
    refcount bits: 16
    corrupt: false

2. Enter the oVirt Open Virtualization Manager WebAdmin portal and go to the Compute >> Virtual Machines tab. At the far right of the screen, after the Create Snapshot button, there are three vertical dots. Press the button with the three vertical dots and choose Import from the drop-down list. Choose the Data Center, change the Source drop-down list to KVM (via Libvirt), enter the URI, user, and password of the KVM host, and click the Load button. Note: at least one VM has to be running in the Data Center. The URI should be in the following format (replacing HostName with the IP address or host name of the KVM host): qemu+ssh://root@HostName/system

3. One should see the list of available virtual machines in the "Virtual Machines on Source" list box on the left side. Choose the VMs to be imported to oVirt, press the right-arrow button so that the chosen VMs appear in the "Virtual Machines to Import" list box on the right side, and press the Next button.

4. Ensure the Allocation Policy is set to "Thin Provision" and press the OK button.

5. Wait for the import to succeed.

6. One can check the file on the storage domain to verify that the disk size matches the original.
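For the final verification step, the size comparison can be done mechanically instead of by eye. This sketch parses the output of `qemu-img info --output=json` (a real qemu-img option; `virtual-size` and `actual-size` are the field names it emits) and flags an image whose on-disk allocation approaches or exceeds its virtual size - the 0.9 threshold is an arbitrary choice for this example:

```python
import json


def is_effectively_sparse(qemu_img_json, threshold=0.9):
    """Return True if actual-size is well below virtual-size, i.e. the
    image kept its sparseness after import."""
    info = json.loads(qemu_img_json)
    return info["actual-size"] < threshold * info["virtual-size"]
```

Feeding it the JSON for the sample image above (virtual size 20G, disk size 3.3M) returns True, while the imported image from the bug description (virtual size 100G, disk size 102G) returns False - which is exactly the regression this bug describes.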
*** This bug has been marked as a duplicate of bug 1625543 ***
Changing Doc Type to No Doc Update, since this is closed as a duplicate.