Description of problem:
When creating a new RHV VM disk on file-based storage, the file is zeroed out when the allocation policy is Preallocated. This makes perfect sense when creating a new VM from scratch and installing an OS, and it is the expected and correct behavior.
However, if one wants to create a placeholder disk image that will be replaced by a disk image from a backup, for example, zeroing out the disk just consumes time and provides no added benefit.
Ideally, when a Preallocated disk/image is created via a REST API call, there would be an option that can be passed with the call to simply create a placeholder file (perhaps via touch or a filesystem call) while still creating the normal metadata stored within RHV, etc.
This would only be necessary when the storage domain is nfs/local/other file-based storage, not block-based.
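The difference the request targets can be sketched on any Linux filesystem (a minimal illustration using plain coreutils, not RHV internals):

```shell
# Compare a fully zeroed preallocated file with a metadata-only placeholder.
dir=$(mktemp -d)

# Current behavior: every byte is written, so time scales with disk size.
dd if=/dev/zero of="$dir/zeroed.img" bs=1M count=16 status=none

# Proposed placeholder: created near-instantly regardless of size
# (truncate makes a sparse file; fallocate could reserve blocks without writing).
truncate -s 16M "$dir/placeholder.img"

# Both files report the same apparent size, so a restore can overwrite
# either one, but the placeholder wrote no data blocks at all.
stat -c '%n: %s bytes, %b blocks allocated' "$dir/zeroed.img" "$dir/placeholder.img"
```

On a 50 GiB or larger disk the dd step is what the reproduction steps below wait on; the truncate step takes the same fraction of a second at any size.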
Version-Release number of selected component (if applicable):
Steps to Reproduce:
1. Via REST API call to RHV-M: create a VM.
2. Via REST API call to RHV-M: create and attach a new disk image to the VM with an allocation policy of Preallocated on nfs/local/other file-based storage. dd or another tool is used to zero out the file, which takes time.
3. Via REST API call to RHV-M: overwrite the placeholder disk/image/file created in step 2 with a previous disk/image/snapshot backup. Before this can be done, you have to wait for the disk from step 2 to be zeroed, which can take a while depending on how large the disk is. It is unnecessary to zero out a disk/image that will be overwritten anyway.
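Step 2 corresponds to a POST against the VM's disk-attachments collection. A minimal sketch of building the request body follows; the format/sparse/storage_domains elements match the oVirt v4 REST schema for a Preallocated disk, while the skip_initial_zeroing element is the option this RFE proposes and does not exist in the current API:

```python
# Sketch of the step-2 request body (POST /ovirt-engine/api/vms/{vm_id}/diskattachments).
# <skip_initial_zeroing> is the HYPOTHETICAL flag requested by this RFE,
# not part of the existing oVirt API.
import xml.etree.ElementTree as ET

def disk_attachment_xml(name, size_bytes, storage_domain, skip_zeroing=False):
    att = ET.Element("disk_attachment")
    disk = ET.SubElement(att, "disk")
    ET.SubElement(disk, "name").text = name
    ET.SubElement(disk, "provisioned_size").text = str(size_bytes)
    # Preallocated on file storage means raw format, non-sparse.
    ET.SubElement(disk, "format").text = "raw"
    ET.SubElement(disk, "sparse").text = "false"
    sds = ET.SubElement(disk, "storage_domains")
    ET.SubElement(ET.SubElement(sds, "storage_domain"), "name").text = storage_domain
    if skip_zeroing:
        # Proposed option: create a placeholder file instead of
        # writing zeros across the whole image.
        ET.SubElement(disk, "skip_initial_zeroing").text = "true"
    ET.SubElement(att, "interface").text = "virtio_scsi"
    ET.SubElement(att, "bootable").text = "false"
    return ET.tostring(att, encoding="unicode")

body = disk_attachment_xml("restore-disk", 50 * 2**30, "nfs-sd", skip_zeroing=True)
print(body)
```

With the flag set, step 3 could begin immediately instead of waiting for the zeroing in step 2 to finish.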
Some amount of time is needed for the preallocated disk to be created.
If an option were added to the disk-create API call to not zero out a placeholder disk, RHV would just generate and record the metadata details and create a zero-byte placeholder "Preallocated" disk, all of which should take a few seconds at most.
This would only be for creating disks via REST API calls and would not need to be exposed via the UI. The main purpose would be for doing whole-VM restores, where the VM and associated disk images need to be recovered through integration with 3rd party solutions that manage the VM details, disk snapshots/images, etc.
There could be other use cases.
This bug didn't get any attention for a while; we didn't have the capacity to make any progress. If you deeply care about it or want to work on it, please assign/target accordingly.
OK, closing. Please reopen if this is still relevant or you want to work on it.