## Description of problem:

For some reason, when trying to download disk images via the GUI, the 'Download' button is not always available. Sometimes it is greyed out when the VM is not running, and sometimes it is available even though the VM is running.

## Version-Release number of selected component (if applicable):

ovirt-engine-4.3.3.7-0.1.el7.noarch

## How reproducible:

Consistently, but only some VMs are affected.

## Steps to Reproduce:
1. Try to download a VM disk from the disk tab.
2. If the button is not available, check whether the VM is actually running or not.

## Actual results:

For some VMs the button is available even if the VM is running; if you try to download, you get an error (as expected). For other VMs the button is not there, even though the VM is not running.

## Expected results:

If the VM is not running, the download button should be available. If the VM is running, the download button should not be available.

## Additional info:

Using download_disk.py [1], I can download the disk image even if the button is not available, so I assume this is a GUI issue. Are there any other factors that influence whether a disk is available for download or not?

[1] https://github.com/oVirt/ovirt-engine-sdk/blob/master/sdk/examples/download_disk.py
The download button availability is not affected by the status of the VM; we check whether the disk is attached to a running VM only in the backend validation. For the download button availability in the GUI we check the following:

* The disk is not already being transferred.
* The disk status is OK.
* The disk's actual size is > 0.
* The disk doesn't have a parent.

As for downloads applicable from the REST API but not the webadmin, do you remember which disks could be downloaded? If those are disks based on a template (i.e. created using thin provisioning from a template), this is the expected behaviour for now (until we support volume collapse during download). That is, to allow API users to download the entire disk chain (template layer and leaf layer), we support downloading disks with a template using the API. For the GUI, we'll enable downloading those disks once auto collapse during transfer is supported, so the user can get the entire disk (including the template layer).
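The checks above can be expressed as a simple predicate. This is only an illustrative sketch with hypothetical field names; the actual availability logic lives in the webadmin (GWT/Java) code, not in Python:

```python
# Hypothetical sketch of the webadmin's download-button availability
# checks listed above. Field names are illustrative, not the actual
# engine data model.
def is_download_enabled(disk):
    return (
        not disk["transfer_in_progress"]  # disk is not already being transferred
        and disk["status"] == "OK"        # disk status is OK
        and disk["actual_size"] > 0       # disk has data on storage
        and disk["parent_id"] is None     # disk doesn't have a parent (template layer)
    )

# A plain, idle disk is downloadable; a thin-provisioned disk based on a
# template (parent set) is not - regardless of whether the VM is running.
plain = {"transfer_in_progress": False, "status": "OK",
         "actual_size": 10 * 1024**3, "parent_id": None}
thin_from_template = dict(plain, parent_id="some-template-volume-id")
```

Note that VM status does not appear anywhere in the predicate, which matches the reporter's observation that the button's state does not track whether the VM is running.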
We can enable download of the entire disk in raw format for any disk, including all snapshots, from the UI. But this would be very inefficient, sending gigabytes of zeroes over the wire and creating a fully allocated image full of zeroes. Any advanced option will be available only via the SDK. We will have a command line tool for uploading and downloading images, replacing the upload_disk.py and download_disk.py example scripts in the oVirt SDK. See bug 1626262.
Implementing this for the UI does not make sense because browsers do not support sparseness, and cannot download the data efficiently. But this feature is already available in RHV 4.4 using the SDK.

This will download the entire disk contents, including all snapshots, to a raw image efficiently:

    $ echo -n my-password > password
    $ python3 download_disk.py --engine-url https://engine/ \
          --username admin@internal \
          --password-file password \
          --cafile ca.pem \
          --format raw \
          d352447e-d16c-4a1d-9b2a-c9c08dbff5c3 \
          disk.raw

This will download the same disk to a collapsed qcow2 image:

    $ python3 download_disk.py --engine-url https://engine/ \
          --username admin@internal \
          --password-file password \
          --cafile ca.pem \
          --format qcow2 \
          d352447e-d16c-4a1d-9b2a-c9c08dbff5c3 \
          disk.qcow2
The requested functionality is already supported by the REST API and implemented in the SDK. There is no benefit in supporting it via the UI.
The KCS attached to this bug explains the available workaround - use the API: https://access.redhat.com/solutions/4310681
(In reply to Nir Soffer from comment #7)
> Any advanced option will be available only via the SDK. We will have a
> command
> line tool for uploading and downloading images, replacing upload_disk.py and
> download_disk.py examples scripts in ovirt sdk. See bug 1626262.

Nir, is it still the case or is there an easy way to get a reasonable download+collapse functionality also via the webadmin?
(In reply to Arik from comment #19)
> (In reply to Nir Soffer from comment #7)
> > Any advanced option will be available only via the SDK. We will have a
> > command
> > line tool for uploading and downloading images, replacing upload_disk.py and
> > download_disk.py examples scripts in ovirt sdk. See bug 1626262.
>
> Nir, is it still the case or is there an easy way to get a reasonable
> download+collapse functionality also via the webadmin?

No. We can enable the nbd backend when using the UI, but this will stream raw guest data to the browser, including the unallocated areas, and will be extremely inefficient.

For example, let's say you have a 500g disk with 50g of data in several snapshots.

When downloading using download_disk.py, you will download 50g of data and create a raw sparse or qcow2 file on the client side.

When downloading from the UI, you will download 50g of data and 450g of zeroes, and create a preallocated raw image of 500g on the client side.

Because there is no way to stream the qcow2 format, collapsing a chain requires a temporary file. In theory we could do this:

1. Download the disk using the SDK internally to a qcow2 image stored on the engine host, or copy the disk on the hosts to a temporary disk.

2. When the internal operation is completed, let the user download the temporary file or disk as a qcow2 image.

3. When the download is finished, clean up the temporary files/disks.

We discussed these options when image transfer was added, and rejected them because of the complexity, the need for temporary storage, and the limited use case.
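The sparseness point can be demonstrated locally: a naive byte stream (which is what a browser download amounts to) must read a file's full apparent size, while a sparse-aware client only needs the allocated data. A minimal sketch, assuming a Linux filesystem that supports sparse files:

```python
import os
import tempfile

# Create a sparse 1 GiB file containing only 4 KiB of real data.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"x" * 4096)   # 4 KiB of data
    f.truncate(1024**3)    # extend to 1 GiB; the rest is a hole
    path = f.name

st = os.stat(path)
apparent = st.st_size           # 1 GiB - what a naive download streams
allocated = st.st_blocks * 512  # a few KiB - what is actually stored

os.unlink(path)
```

The gap between `apparent` and `allocated` is exactly the 450g of zeroes in the 500g example above: a browser has to receive it all, while download_disk.py skips it.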
(In reply to Nir Soffer from comment #20)
> (In reply to Arik from comment #19)
> > (In reply to Nir Soffer from comment #7)
> > > Any advanced option will be available only via the SDK. We will have a
> > > command
> > > line tool for uploading and downloading images, replacing upload_disk.py and
> > > download_disk.py examples scripts in ovirt sdk. See bug 1626262.
> >
> > Nir, is it still the case or is there an easy way to get a reasonable
> > download+collapse functionality also via the webadmin?
>
> No. We can enable the nbd backend when using the UI, but this will stream
> raw guest data to the browser, including the unallocated areas, and will
> be extremely inefficient.
>
> For example, let say you have 500g disk with 50g of data in several
> snapshots.
>
> When downloading using download_disk.py, you will download 50g of data, and
> create raw sparse or qcow2 file on the client side.
>
> When downloading from the UI, you will download 50g of data and 450g of
> zeroes
> and create preallocated raw image of 500g on the client side.

Is that related to the fact that the disk is composed of several volumes, or is it a general issue we'll have when going through the browser (e.g., with a single-volume qcow disk)?

> Because there is no way to stream qcow2 format, collapsing a chain requires
> a temporary file. In theory can do this:
>
> 1. we can download the disk using the SDK internally to a qcow2 image stored
> on the engine host, or copy the disk on the hosts to a temporary disk.
>
> 2. When the internal operation was completed, we can let the user download
> the temporary file or disk as a qcow2 image.
>
> 3. When the download is finished, we can clean up the temporary files/disks.
>
> We discussed these options when image transfer was added, and rejected them
> because of the complexity and the need for temporary storage, and the limited
> use case.
Yes, that's what I suspected - we were in this exact situation when exporting a VM to OVA: we started by collapsing the volumes to a temporary volume and then downloading it, but then replaced that code with qemu-img convert that writes to the offset within the OVA. Makes sense.
(In reply to Arik from comment #21)
> Yes, that's what I suspected - we were in this exact situation when
> exporting a VM to OVA, we started by collapsing the volumes to a temporary
> volume and then downloading it but then replaced that code with qemu-img
> convert that writes to the offset within the OVA. Makes sense

Forgot to add the question to this part - so how do we do this when downloading a collapsed disk via the API?
(In reply to Arik from comment #21)
> (In reply to Nir Soffer from comment #20)
> > (In reply to Arik from comment #19)
> > > (In reply to Nir Soffer from comment #7)
> > > > Any advanced option will be available only via the SDK. We will have a
> > > > command
> > > > line tool for uploading and downloading images, replacing upload_disk.py and
> > > > download_disk.py examples scripts in ovirt sdk. See bug 1626262.
> > >
> > > Nir, is it still the case or is there an easy way to get a reasonable
> > > download+collapse functionality also via the webadmin?
> >
> > No. We can enable the nbd backend when using the UI, but this will stream
> > raw guest data to the browser, including the unallocated areas, and will
> > be extremely inefficient.
> >
> > For example, let say you have 500g disk with 50g of data in several
> > snapshots.
> >
> > When downloading using download_disk.py, you will download 50g of data, and
> > create raw sparse or qcow2 file on the client side.
> >
> > When downloading from the UI, you will download 50g of data and 450g of
> > zeroes
> > and create preallocated raw image of 500g on the client side.
>
> Is that relevant to the fact the disk is composed of several volumes or is
> it a general issue we'll have when going through the browser (e.g., with a
> single volume qcow disk)?

The UI does not use the nbd backend, so download of a single qcow2 disk is kind of ok: you download the image as is, in the same way it is stored on storage.

For a file-based domain, you get the exact qcow2 image as we have on storage. For a block-based domain, you get the qcow2 image plus zero padding until the end of the logical volume. In 4.4 this could be 1g of padding, and in 4.5 it can be up to 2.5g of padding.

For a raw disk, this is the same issue: you are going to download the entire disk and create a preallocated image on the client side.
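The padding described above follows from block storage allocating logical volumes in whole chunks, so a downloaded qcow2 image is followed by zeroes up to the LV boundary. A rough sketch; the 1g chunk size is taken from the comment above, and the exact allocation policy is a detail of the engine/vdsm version:

```python
GiB = 1024**3

def lv_padding(image_size, chunk=1 * GiB):
    """Zero padding appended after the qcow2 image when downloading from
    a block-based domain: the logical volume holding the image is rounded
    up to a whole number of allocation chunks (chunk size assumed here)."""
    lv_size = -(-image_size // chunk) * chunk  # round up to a chunk multiple
    return lv_size - image_size

# A qcow2 image of exactly 3 GiB fits its LV with no padding; an image
# one byte larger drags along almost a full extra chunk of zeroes.
no_pad = lv_padding(3 * GiB)
max_pad = lv_padding(3 * GiB + 1)
```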
(In reply to Arik from comment #22)
> (In reply to Arik from comment #21)
> > Yes, that's what I suspected - we were in this exact situation when
> > exporting a VM to OVA, we started by collapsing the volumes to a temporary
> > volume and then downloading it but then replaced that code with qemu-img
> > convert that writes to the offset within the OVA. Makes sense
>
> Forgot to add the question to this part - so how do we do this when
> downloading a collapsed disk via the API?

When downloading from the API, the client has access to the extents API, so it can download only the data extents. A relatively simple client can download only the data and create a raw sparse file. A more advanced client like the imageio client does much more:

- Download only the data extents
- Convert zeroes in data extents to holes on the client side
- Store the raw data in qcow2 format on the client side
- Use multiple connections to speed up the download
- Download a single snapshot as a qcow2 image on top of another image

So if you use the download_disk.py or download_disk_snapshot.py example, for the same example of a 500g disk, you will get a 50g qcow2 disk on the client side.

For the case of a raw disk, or a chain backed by a raw disk, download_disk.py or download_disk_snapshot.py will download the entire disk data (since a raw disk is always fully allocated), but downloaded zeroes will be converted to holes on the client side. For example, downloading a 500g empty raw disk will create a 500g empty qcow2 image (200k) on the client side.

All the interesting stuff requires a smart client and can never be done with a browser.
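The extents-based approach can be sketched as follows. The field names (`start`, `length`, `zero`) are assumptions based on my reading of the imageio extents API; the point is only that a smart client first fetches the extents map and then requests just the data ranges:

```python
GiB = 1024**3

def data_ranges(extents):
    """Given an extents list such as the one returned by imageio's
    GET /images/{ticket}/extents (field names assumed), return the
    (offset, length) byte ranges a smart client actually transfers,
    skipping extents that read as zeroes."""
    return [(e["start"], e["length"]) for e in extents if not e["zero"]]

# The 500g disk with 50g of data from the earlier example:
extents = [
    {"start": 0, "length": 50 * GiB, "zero": False},         # real data
    {"start": 50 * GiB, "length": 450 * GiB, "zero": True},  # unallocated
]
transferred = sum(length for _, length in data_ranges(extents))
# A naive raw stream moves the full 500g; an extents-aware client
# transfers only the 50g of data.
```

A browser issuing a plain GET has no way to consult this map, which is why the zero-skipping (and everything else in the list above) stays SDK-only.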
Ack, thanks Nir. This makes perfect sense. So based on the last comments, nothing has changed with regard to downloading multi-volume disks via the browser, and users should still use the more advanced command line tools for this.